How the placebo effect affects performance and recovery for running, OCR, and endurance sports
Thomas Solomon, PhD.
Reading time: approx. 10 minutes (2000 words).
What you’ll learn:
The placebo effect probably gives a tiny to small — but still meaningful — boost to time-to-exhaustion and time-trial performance in running and other endurance sports.
Your beliefs matter: if you genuinely expect something to help, that expectation might be part of the reason it does.
The placebo effect is real, but it’s not magic. You’ll get far more benefit by first nailing the big things — training load, sleep, nutrition, and rest — before you play with the fun extras.
Curious about the how and why? Scroll down for the details, the nuances, and the nerdy bits.
What is the placebo effect?
In the 90s, I moshed my socks off to Brian Molko’s melodic voice at many a Placebo gig. One thing stuck: placebos rock. Dad jokes aside, a placebo is something that looks and feels like a treatment but is not expected to have a direct physical effect — for example, a sugar pill, a salty water injection, or even sham (fake) surgery where the surgeon opens you up, closes you again, and that’s about it.
In a randomised controlled trial (the “gold standard” for determining whether a treatment has a causal effect: a sample of people representing the population of interest is randomised to receive the treatment or a no-treatment control, and the outcome of interest is measured before and after), there is a treatment group and a control group. Sometimes the control group is given something that looks just like the treatment so they cannot tell which group they are in — that “fake” treatment is the placebo, and we call this a placebo-controlled trial. If the outcome we care about improves in the placebo group, we might suspect a placebo effect. But to be sure, we’d also need a non-placebo control group that gets nothing at all, so we can check whether the placebo group improves more than “just time passing”.
When a study includes both a placebo group and a non-placebo group, and the placebo group improves more, we say there has been a placebo effect (and Brian Molko can sing about it). If, instead, the placebo group does worse than the no-treatment group, that’s called a nocebo effect — expectations working against you rather than for you.
Imagine this scenario:
You have a big race today. Your coach gives you a supplement that they know is not actually proven to improve performance. You don’t know what it is, but you trust your coach, you want to win, so you take it. During the race you feel like a monster — strong, fast, and laser-focused. You win! After the race, you’re convinced the supplement made the difference. Your mother says it was a coincidence. Your coach points out that you’ve trained well for months, hit personal bests, and were already in great shape — you were always likely to win.
Who is right… You? Your mother (who is usually right)? Or your coach?
If you are right, the supplement could be acting as a placebo and your win could be called a placebo effect. Because you expected a performance boost, you could also call this a belief effect. But a positive expectation is not always required for a placebo effect, and strong positive expectations can sometimes make a placebo effect even bigger.
Two important takeaways here. First, your experiences shape your beliefs, but experience alone (“I took X and then I won”) is not enough — you also need data (your training history, fitness, and results over time) to build useful beliefs. Second, you are the only person responsible for what goes into your body. Never take candy from a stranger (or even your coach) without knowing what it is, what it does, and whether it has been independently tested.
Now, that was just a hypothetical story. You’re probably wondering, does the placebo effect exist?
Several systematic reviews (a systematic review answers a specific research question by systematically collating all known experimental evidence, collected according to pre-specified eligibility criteria, and helps inform decisions, guidelines, and policy) and meta-analyses (a meta-analysis quantifies the overall effect size of a treatment by compiling effect sizes from all known studies of that treatment) have reported evidence for placebo effects across a range of diseases (see here), in studies where participants were blinded to whether they were receiving a placebo or the real treatment (they don’t know which is which). Two meta-analyses (see here & here) also looked at “open-label” placebos — i.e., when people are openly told they are getting a placebo — and compared them with no treatment. These studies report moderate to large effect sizes (an effect size is a quantitative measure of the magnitude of a relationship or difference between groups; unlike p-values, effect sizes show how large or meaningful the effect is — common measures include Cohen’s d, Hedges’ g, eta-squared, and correlation coefficients).
Woah!
But… we have to be careful. In longer-term studies (measured over days, weeks, or months), some conditions naturally improve with time, even without treatment. The same is true in sport: your performance might improve across a training block with a supplement, but it might also have improved without the supplement. Many studies also lack a true non-placebo control group, and there are only a few small “open-label” trials so far. In several of those, participants were given very positive messages along with the placebo — “This will make you feel better” — and then asked how they felt. You can see how that might nudge the answers.
In real-world healthcare, some doctors prescribe placebos (without telling patients) instead of an active drug. Ethically, that’s sensitive territory, because patients have a right to know what they are taking. Given that the belief effect (how strongly someone believes a treatment will work) can influence outcomes, some argue that instead of secretly prescribing placebos, doctors could focus on boosting belief in genuinely effective treatments — so patients are more motivated to stick to them. But I’m drifting away from running and into medical ethics, so let’s shuffle back to the trails.
In research, “deception” studies — where participants are deliberately misled about a treatment — are hard to get past ethics boards, but there are some interesting examples. In a 2015 study by Ramzy et al., trained 10 km runners were given either no treatment or daily injections of “OxyRBX”. They were told OxyRBX worked like EPO, a hormone that boosts red blood cells and improves endurance performance. Before and after 7 days of treatment/no treatment, runners completed a 3 km race.
On average, runners given OxyRBX improved their 3 km time by 9.73 seconds (95% confidence interval [CI] 5.14 to 14.33 seconds), while runners who received no treatment improved by 1.82 seconds (95% CI 2.77 seconds slower to 6.41 seconds faster; between-group comparison P = 0.02). (A 95% CI is a plausible range for the true effect: if the data were collected repeatedly in different samples, the true value would fall within such an interval 95% of the time, and if the range crosses zero there is little confidence in the effect. A p-value is the probability of observing a result at least as extreme as this one if there were truly no effect; a p-value below 0.05 is often used as a threshold to say the results look promising.) Runners in the OxyRBX group also reported feeling that the race was easier, felt more motivated, and said they recovered better. A classic placebo effect, driven by a strong belief that they were getting a powerful performance-enhancing drug.
That’s kinda fun, but now consider this thought experiment:
A study tests the hypothesis that a caffeine-containing beverage improves 3 km running performance in well-trained runners. The runners complete the 3 km time trial four times in a randomised crossover design (crossover means all subjects complete all interventions, control and treatment, usually with a wash-out period in between), at the same time of day, following a 5-day period of identical nutrition, sleep, and training. In each trial, runners are given 250 millilitres of cold fluid to ingest 30 minutes before the time trial — the fluid is either water (no treatment), coffee (treatment), decaf coffee (placebo 1), or the same type of decaf coffee (placebo 2). The runners do not see the drinks being prepared and cannot see the drinks until they sip from them. They can obviously taste that water is not coffee, but they cannot taste the difference between the coffee and the decaf. However, in one of the decaf coffee trials, runners are told: “This coffee has a lot of caffeine and will massively improve your performance today.” — this is the “belief” trial. When the study is finished, the data show that 3 km performance was faster to the same extent in the coffee and the “belief” decaf trials than in the water and plain decaf trials, which were not different from one another. The researchers conclude that there was no placebo effect of decaf placebo 1 but there was a placebo effect (or belief effect) of decaf placebo 2 — decaf coffee gave the same performance-enhancing boost as coffee when runners were told it would.
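To make the thought experiment concrete, here is a minimal simulation sketch of what one runner's data might look like in that four-arm crossover. All the numbers (baseline time, an 8-second boost, the noise level) are invented purely for illustration, not taken from any study:

```python
import random

random.seed(1)

# Hypothetical mean effects (seconds) for the four arms. The values mirror
# the thought experiment: caffeine and the "belief" decaf give the same
# boost, while plain decaf matches water.
arm_effect = {
    "water": 0.0,          # no treatment
    "decaf_1": 0.0,        # placebo, no expectation manipulated
    "coffee": -8.0,        # real caffeine: ~8 s faster (made-up size)
    "decaf_belief": -8.0,  # placebo plus a strong positive expectation
}

baseline = 640.0  # ~10:40 for 3 km (made-up baseline)

def simulate_trial(arm: str) -> float:
    """One 3 km time trial: baseline + arm effect + day-to-day noise."""
    return baseline + arm_effect[arm] + random.gauss(0, 2)

# Randomised crossover: the runner completes every arm, in a shuffled order.
order = list(arm_effect)
random.shuffle(order)
for arm in order:
    print(f"{arm:>12}: {simulate_trial(arm):.1f} s")
```

Averaged over many runners, the coffee and “belief” arms would come out faster than the water and plain-decaf arms, which is exactly the pattern the hypothetical researchers observed.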
So yes, the placebo effect is complicated, but it clearly has some potential in the athletic world. The obvious next question is…
What is the scientific evidence on the placebo effect’s impact on athletic performance?
The placebo (and nocebo) effect is hard to study properly. To do it well, the placebo has to be indistinguishable from the real treatment, there needs to be a non-placebo control group, and researchers have to control for what participants know or think they know about the treatment. They also need to account for whether participants correctly guess if they are on the real treatment or the placebo. That’s a lot of boxes to tick — and sometimes it’s literally impossible, like trying to “blind” someone to whether they are in a sauna or doing a particular exercise session.
In studies that included both a placebo group and a non-placebo group, the placebo effect explained roughly 50% of the benefit of exercise training on cognitive and psychological outcomes such as anxiety, depression, and mood (Lindheimer et al. 2015: placebo effect size = 0.20, 95% CI -0.02 to 0.41; exercise effect size = 0.37, 95% CI 0.11 to 0.63). However, these findings need support from more high-quality trials, and none of those studies involved athletes.
When researchers pool different “deception” studies that use nutritional interventions (including caffeine, beta-alanine, sodium bicarbonate, and anabolic steroids) and mechanical interventions (including electrical muscle stimulation, kinesiology tape, blood flow restriction, magnetic bands, and cold water immersion), they see small to moderate placebo effects (summary effect size = 0.36) and similar-sized nocebo effects (effect size = 0.37) on sports performance. But the methodological quality of these studies is generally poor, so we need more high-quality randomised controlled trials before getting too excited.
When researchers specifically look at caffeine and buffering supplements (like sodium bicarbonate) in blinded studies where participants do not know whether they are on placebo or the real treatment, the placebo has a trivial (tiny) but still meaningful effect on performance compared with a non-placebo control (summary effect size = 0.09, 95% CI 0.01 to 0.17). The actual treatments (caffeine or buffers) have a small effect (effect size = 0.37, 95% CI 0.20 to 0.56) compared with no treatment on running and cycling time-to-exhaustion and time-trial performance. Roughly speaking, about 25% of the performance boost from caffeine or buffering agents seems to be explained by a placebo effect.
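If you're wondering where that “about 25%” comes from, it's simply the ratio of the two summary effect sizes quoted above (a back-of-the-envelope sketch, not a formal analysis):

```python
# Back-of-the-envelope: what share of the caffeine/buffer benefit
# is accounted for by the placebo effect? Illustrative arithmetic
# using the summary effect sizes quoted in the text.
placebo_es = 0.09    # placebo vs. non-placebo control
treatment_es = 0.37  # caffeine/buffers vs. no treatment

share = placebo_es / treatment_es
print(f"Placebo share of the treatment effect: {share:.0%}")  # roughly 24%
```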
Therefore…
Since placebo effects are real and appear to have a small positive impact, it’s sensible for an athlete (and their support team) to try to maximise the placebo effect of proven, legal treatments by building genuine belief in their effectiveness.
If you choose to harness the power of the placebo effect, a reasonable approach is to:
Instil belief in the approaches you use and in the people you work with. When you choose an evidence-based dose of a supplement or recovery method, commit to it and stop doom-scrolling for magic extras. Note: this is based on effective doses used in research.
Also recognise that psychological belief is not a replacement for physiology. If science shows that an approach has harmful effects, belief does not rescue it. For example, many athletes claim that they “recover better” (feel better) when using ice baths (cold water immersion) after exercise. But, although cold water immersion may reduce feelings of soreness and improve short-term feelings of recovery, daily post-exercise cold water immersion over the long term has been shown to blunt training adaptations (see here for more info).
Can the placebo effect enhance athletic performance?
Using the placebo effect is likely to give a small boost to performance, but it is unlikely to make a big difference to physical recovery. On the flipside, the nocebo effect — going into a session convinced something will make you feel worse or perform worse — is likely to harm performance.
The effect size for the performance benefit from the placebo effect seems to be small on average. It might still matter in tight races where seconds count, but it’s not a massive game-changer on its own.
Due to insufficient research, it is unclear whether the effect is similar between trained athletes and untrained folks, or between males and females.
Keep in mind: the studies are small and few in number, there is high heterogeneity (variability) in effects between studies, and the risk of bias is high. (Heterogeneity shows how much the results of different studies in a meta-analysis vary from each other, measured as the percentage of variation, the I² value; as a rule of thumb, an I² of roughly 25% indicates low heterogeneity (good), 50% moderate, and 75% high (bad). High heterogeneity means more variability in effects between studies and a less precise overall effect estimate. Risk of bias refers to the potential for systematic errors in the included studies — arising from how participants were selected (randomisation), how data were collected and analysed, and how results were reported — which can lead to misleading results and unreliable conclusions.)
So, the overall certainty of evidence is low. (Certainty of evidence tells us how confident we are that the results reflect the true effect, based on factors like study design, risk of bias, consistency, directness, and precision. Low certainty means more doubt and less confidence, and that future studies could easily change the conclusions; high certainty means the evidence is so strong and consistent that future studies are unlikely to change them.) Therefore, additional high-quality randomised controlled trials are needed to increase the certainty (confidence) in the overall effect size of the placebo effect.
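For the statistically curious, the I² heterogeneity value mentioned above is derived from Cochran's Q statistic. Here is a minimal sketch of that formula; the Q value and study count in the example are made-up numbers, not taken from any study cited in this article:

```python
# I² from Cochran's Q: I² = (Q - df) / Q × 100, floored at 0,
# where df = number of studies - 1.
def i_squared(q: float, n_studies: int) -> float:
    """Percentage of between-study variation not explained by chance."""
    df = n_studies - 1
    return max(0.0, (q - df) / q * 100.0)

# Hypothetical meta-analysis of 7 studies with Q = 24:
print(i_squared(q=24.0, n_studies=7))  # 75.0, i.e. high heterogeneity
```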
The nice part: placebo effects do not seem to harm performance or recovery. If you enjoy a particular (safe, legal) ritual and feel it helps you, there is little downside to using it.
But remember: the placebo effect does not make an athlete. It does not replace training, and placebo effects are not additive: using and believing in five supplements that don’t have proven ergogenic effects won’t give you five times the benefit. Instead, it will give you five more factors that cost money and time and could return a positive doping test. That extra stress is the antithesis of the recovery and adaptation required for high performance. Yes, the placebo effect is real, but before you chase clever psychological hacks, first invest your valuable time and money in the things that reliably boost performance: optimise your training load, and nail your sleep, nutrition, and rest. No tricks. Just learn to understand your body, watch for patterns, and adjust accordingly.
How to use this: Treat the placebo effect as a small bonus on top of evidence-based practice, not as a standalone strategy. Start with proven, legal tools (for example, caffeine or well-supported recovery methods), test them in training, and build genuine belief by noticing how you respond. Avoid wasting energy on sketchy or risky “magic” products, and be especially wary of anything that is untested, banned, or not clearly labelled. In short: use belief to boost good decisions, not to justify bad ones.
Full list of meta-analyses examining the placebo effect for performance.
Here are the meta-analyses I've summarised above:
Placebo and Nocebo Effects in Motor Performance: An Overview of Reviews. Brietzke et al. Brain Behav (2025)
Negative expectations and measurable movement mechanics: a scoping review of the nocebo effect on motor performance. Burgos-Tirado et al. Front Hum Neurosci (2025)
Caffeine Placebo Effect in Sport and Exercise: A Systematic Review. Vega-Muñoz et al. Nutrients (2024)
Placebo and Nocebo Effects on Sports and Exercise Performance: A Systematic Literature Review Update. Chhabra et al. Nutrients (2024)
Nonplacebo Controls to Determine the Magnitude of Ergogenic Interventions: A Systematic Review and Meta-analysis. Marticorena et al. Med Sci Sports Exerc (2021)
The Placebo and Nocebo Effect on Sports Performance: A Systematic Review. Hurst et al. Eur J Sport Sci (2020)
Consensus statement on placebo effects in sports and exercise: The need for conceptual clarity, methodological rigour, and the elucidation of neurobiological mechanisms. Beedie et al. Eur J Sport Sci (2018)
Quantifying the placebo effect in psychological outcomes of exercise training: a meta-analysis of randomized trials. Lindheimer et al. Sports Med (2015)
Who is Thomas Solomon?
My knowledge has been honed following 20+ years of running, cycling, hiking, cross-country skiing, lifting, and climbing, 15+ years of academic research at world-leading universities and hospitals, and 10+ years advising and coaching in athletic performance and lifestyle change.
I have a BSc in Biochemistry, a PhD in Exercise Science, and over 90 peer-reviewed publications in medical journals.
I'm also an ACSM-certified Exercise Physiologist (ACSM-EP), an ACSM-certified Personal Trainer (ACSM-CPT), a VDOT-certified Distance Running Coach, and a UKVRN Registered Nutritionist (RNutr).
Since 2002, I’ve conducted biomedical research in exercise and nutrition and have taught and led university courses in exercise physiology, nutrition, biochemistry, and molecular medicine.
And, with my personal experience of competing on the track (800m to 10,000m), the road (5 k to marathon), on the trails, and in the mountains, by foot, bicycle, cross-country ski, and during obstacle course races (OCR), I deeply understand what it's like to train and compete — I've been there, done it, and gotten sweat, mud, and tears on my t-shirt.