
How Willpower Wasn't: The Truth About Ego Depletion

Why an effect found in hundreds of studies didn't replicate.

New research suggests that willpower research was wrong.
Source: Brigitte Tohm/Pexels

Ego depletion, a modern psychological take on willpower, has been examined in over a hundred research studies, yet a new, definitive study found no evidence of an effect. The results come from 36 laboratories that pooled their resources to collect a huge sample (3,531 participants) and provide a more rigorous test of the ego depletion effect. As the authors put it, “the data were four times more likely under the null than the alternative hypotheses.” In other words, if you started out treating the two hypotheses as equally plausible, the new data make it four times more likely that there is no ego depletion effect than that there is one.
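For readers who want the arithmetic, here is a minimal sketch in Python of how a Bayes factor translates into odds and probabilities. The factor of 4 comes from the quote above; the equal prior odds are my simplifying assumption, matching the plain-language reading, not something the paper asserts.

```python
# A minimal sketch of what "four times more likely under the null" means.
# Assumption (mine, not the paper's): the null and the effect start out
# equally plausible, i.e. prior odds of 1:1.

bayes_factor_null = 4.0  # data are 4x more likely under the null (from the quote)

prior_odds_null = 1.0                                      # 1:1 prior odds
posterior_odds_null = bayes_factor_null * prior_odds_null  # Bayes' rule, odds form

# Convert odds to a probability: odds / (1 + odds)
posterior_prob_null = posterior_odds_null / (1.0 + posterior_odds_null)

print(f"Posterior odds of no effect: {posterior_odds_null:.0f} to 1")
print(f"Posterior probability of no effect: {posterior_prob_null:.0%}")  # 80%
```

With different prior odds, the same Bayes factor would yield a different posterior probability; the factor itself only says how strongly the data favor one hypothesis over the other.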

Crucially, the researchers were required to specify ahead of time exactly which experimental and statistical methods they would use, so they could not adjust either after the fact to bring the results in line with predictions. Flexible analysis choices can raise a researcher's chance of getting a “false positive” (supporting an effect that isn’t real in the broader world) from 5 percent to over 60 percent, so ruling them out is an important part of a strict test of the ego depletion effect.
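To make that concrete, here is a toy simulation (my own illustration, with made-up parameters, not the analysis behind the 5-to-60-percent figure) in which both groups are drawn from the same distribution, so every “significant” result is a false positive. One pre-specified test stays near 5 percent; letting the analyst report the best of just a few options already inflates the rate, and piling on more flexible choices pushes it higher still.

```python
# Toy simulation of how flexible analysis inflates false positives.
# Both groups are drawn from the SAME distribution, so any "significant"
# result below is a false positive by construction.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 5_000
alpha = 0.05

rigid_hits = 0
flexible_hits = 0

for _ in range(n_experiments):
    # Two correlated outcome measures per participant, no true group difference.
    cov = [[1.0, 0.5], [0.5, 1.0]]
    a = rng.multivariate_normal([0, 0], cov, size=30)
    b = rng.multivariate_normal([0, 0], cov, size=30)

    # Rigid analysis: one pre-specified test on the first outcome, full sample.
    rigid_hits += stats.ttest_ind(a[:, 0], b[:, 0]).pvalue < alpha

    # Flexible analysis: try either outcome, their average, and an early
    # "peek" at n=20 per group; report whichever comparison reaches p < .05.
    candidates = [
        stats.ttest_ind(a[:, 0], b[:, 0]).pvalue,
        stats.ttest_ind(a[:, 1], b[:, 1]).pvalue,
        stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue,
        stats.ttest_ind(a[:20, 0], b[:20, 0]).pvalue,
    ]
    flexible_hits += min(candidates) < alpha

print(f"Rigid analysis false positive rate:    {rigid_hits / n_experiments:.1%}")
print(f"Flexible analysis false positive rate: {flexible_hits / n_experiments:.1%}")
```

Preregistration, as used in the replication project, removes exactly this freedom: the single rigid analysis is the only one that can be reported.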

The project also combined several advances in replication methods: (1) comparing different experimental set-ups, (2) recruiting both experts in the research area and methods experts with no stake in it, (3) sharing video instructions created by experts and giving each research team personalized advice on setting up the study properly, and (4) using a separate, blinded team to conduct data analysis. This Paradigmatic Replication approach has created the new Study of Record on ego depletion, the one providing the most definitive account of the effect. And it found no effect.

How could psychology have hundreds of published papers on the ego depletion effect, but the carefully controlled, definitive study shows there is no effect? It turns out that Paul Meehl, a well-regarded psychologist who had been president of the American Psychological Association, laid out the explanation in 1967.

Meehl suggested psychologists have an issue with how they build theory.
Source: Igor Starkov/Pexels

The problem is in the way psychologists build up a body of evidence. Results that come out in the expected direction are excitedly reported as confirmation of the theory; when results don’t come out as expected, some reason is typically found to dismiss them. As Meehl wrote, there is “a fairly widespread tendency to report experimental findings with a liberal use of ad hoc explanations for those that didn’t ‘pan out.’ This last methodological sin is especially tempting in the ‘soft’ fields of (personality and social) psychology, where the profession highly rewards a kind of ‘cuteness’ or ‘cleverness’ in experimental design.”

Cute experiments have striking, almost theatrical set-ups. For example, the original ego depletion study had participants work on a task in a room containing a bowl of radishes and a bowl of freshly baked chocolate chip cookies, with instructions to eat from one bowl but not the other. It was a perfect TV set-up for testing willpower: some people got to eat delicious chocolate chip cookies, while others had to make do with bland radishes and just look at the cookies.

Of course, these kinds of experiments also rely on what Meehl called “complex and rather dubious auxiliary assumptions.” For example, you have to assume that people like chocolate chip cookies (I don’t, so I’d throw off the experiment!), that they aren’t full from having just eaten a meal, that no one has a gluten allergy that would keep them from being tempted by the cookies, and so on. These assumptions seem plausible, but instead of designing setups that control for them precisely, researchers leave them to chance. That way, they are “readily available as (genuinely) plausible ‘outs’ when the prediction fails.” If the study didn’t come out with the result your theory predicted, it’s probably just because of one of these “auxiliary assumptions” (the study was run right after lunch!).

Theory building doesn't work if you can't rule anything out.
Source: Pratikxox/Pexels

Across a series of studies, a research group can claim that the theory was supported after every positive result, but that there was some alternate explanation that needs to be investigated for every negative result. It’s a “heads I win, tails you lose” method of theory building. Using this method, “a zealous and clever investigator can slowly wend his way through a tenuous nomological network, performing a long series of related experiments which appear to the uncritical reader as a fine example of ‘an integrated research program’ without ever once refuting or corroborating so much as a single strand of the network.” This method can’t ever move a theory closer to the truth, because it can’t rule out anything as wrong. As Meehl sees it, this pattern can lead to “a long publication list and… a full professorship” but with an enduring contribution of “hardly anything.”

There is evidence this happened with ego depletion. First, there are many stories from researchers across psychology about “failed” studies they couldn’t get published. A scientific record of these instances where the effect didn’t work would have balanced our picture of the overall theory, but those results were dismissed.

Then there are published comments by the originator of the theory, Roy Baumeister, who takes exactly the line Meehl described. He wrote that researchers (typically graduate students working in the lab) who couldn’t get results supporting his theory lacked an essential but indescribable “flair” for running studies. Further, when a previous large-scale replication study couldn’t find evidence for ego depletion, he used the “plausible out” that it was because of the computerized task done in that study—not any issue with the theory—that the study didn’t work.

When careful controls were put in place, the “heads I win, tails you lose” method was ruled out: because everyone had agreed in advance that this coin flip was done right, the result had to be reported, whichever way it landed. This allowed the scientific community to see how the problem Meehl described had played out in real life.

In later writings, Meehl suggested that journal editors routinely ask for this kind of confirmatory replication. Given what we’ve seen, that is probably warranted for other important areas of psychology research. His solution for researchers, though, wasn’t just to run more carefully controlled tests of whether an effect exists. It was to move beyond the question of whether there is or isn’t willpower, and to create theories that make more precise predictions about the specific ways willpower works. That is the real next step in improving psychology research.


References

Vohs, K. D., Schmeichel, B. J., Lohmann, S., Gronau, Q., Finley, A. J., … Wagenmakers, E.-J., & Albarracín, D. (in press). A multi-site preregistered paradigmatic test of the ego depletion effect. Psychological Science.

Meehl, P. E. (1967). Theory-testing in psychology and physics: A methodological paradox. Philosophy of Science, 34(2), 103-115.
