Can Nancy Cartwright Fit Out Psychology's Hidden Moderators?
Can Nancy Cartwright’s philosophy rescue a terrible argument?
Posted Jul 07, 2020
The “hidden moderators” argument in psychology is frequently trotted out as an excuse for ignoring failed replications. A famous researcher claims something about human nature (like how willpower works) based on the results of a study. Those results later fail to hold up in subsequent versions of that study, but the researcher insists we can ignore the failures because there must have been some other factor—a “hidden moderator”—that affected the subsequent versions (but, of course, was absent from the original).
It’s a transparent attempt to dismiss inconvenient facts, so most thoughtful people ignore it. But reading one of the most influential philosophers of science of the last century has led me to rethink it. Is there really something to the “hidden moderators” argument?
Nancy Cartwright doesn’t like what she calls “science fundamentalists.” Fundamentalists, in her view, tend to think that “all facts belong to one grand scheme” and that facts coming from highly structured and theorized “schemes” are most important (The Dappled World, p. 24). This is a kind of physics imperialism that says that all the specific ways that things (like particles) act in formal theories (like Coulomb’s law) are fundamental and universal.
One of Cartwright’s big insights, though, is that the very precise laws of physics don’t work universally. Coulomb’s law, for example, is an equation that describes how much force is acting on a charged particle. But it assumes that the particle is so tiny that the force of gravity won’t play a role. Of course, in everyday life, gravity does matter. It’s only in a highly controlled experimental situation that we can see Coulomb’s law in action. In everyday life, the precise predictions made by Coulomb’s law won’t fit our observations of charged particles very well at all—because there’s so much else going on that needs to be taken into account.
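For concreteness, Coulomb’s law in its familiar textbook form gives the force between two point charges:

```latex
% Coulomb's law: the magnitude of the force between two point charges
% q_1 and q_2 separated by a distance r, with Coulomb's constant k_e.
F = k_e \frac{q_1 q_2}{r^2}
```

Notice how much the equation leaves out: it holds exactly only for idealized, isolated point charges. Gravity, other nearby charges, and anything else acting on the particles simply don’t appear in it—which is precisely Cartwright’s point.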
All of physics, according to Cartwright, is like this. The precise laws that describe how different forces operate will only produce accurate results in artificial, controlled situations. It’s only when they are “shielded” from all the other normally occurring, variable forces in the wider world that they become accurate.
We tend to think that you do physics and other “precise sciences” by starting with abstract principles—the laws—and then deriving what would happen in any new situation. But that’s not what happens in practice. Instead, we use laws to help guide us towards creating a model that will work in a specific situation. Cartwright calls it “fitting out” the abstract model, where we must work out how the abstract principle will be turned into a concrete description that holds in a specific situation. There’s a fair amount of work that needs to happen between knowing how forces act on bodies and actually getting a reasonably accurate prediction of how much weight a particular bridge can hold. We don’t just know automatically how the bridge is going to work based on first principles—there’s a certain amount of working out the specifics for that bridge, or this type of design, that relies on more intuitive, situational knowledge about what’s going on.
Cartwright’s arguments are abstract when applied to physics, where we don’t really have intuitions about how particles act or how to sum up forces. (Although they were certainly controversial when presented in her book How the Laws of Physics Lie.) But when you apply them to current debates about replication in psychology, they become deeply illuminating.
Psychologists who come up with theories often describe these theories at an abstract level. Ego depletion, for example, holds that willpower steadily depletes as it is used, but can be built up through repeated exercise. But what do you mean by willpower? And what do you mean by depletion? How often, and what kind of exercise, is needed to build it up? Any researcher who wants to study ego depletion needs to “fit out” the theory to a specific example. That means the researcher needs to pick a specific task that will be used to measure willpower (like persistence in solving difficult anagram puzzles), pick a way to measure or induce the depletion (like forcing someone to do a difficult editing task ahead of time to tire them), pick a way to measure their level of “strength” coming in, and so on. To study willpower, you need to set up a specific way to observe and measure it.
The assumption in social psychology has been that picking the set-up for studying something is fun, easy, and an outlet for creativity. You could treat any type of situation where someone has to do something hard as a measure of willpower: squeezing a handgrip hard, resisting chocolate chip cookies, keeping a straight face while watching a funny movie, and so on. Any of these counts as a legitimate measure, and you’ll often see several very disparate ways of “fitting out” an abstract concept within a single manuscript.
What Cartwright shows, however, is that this style of research leaves a lot of work undone. In the most precise sciences—the kinds of sciences you can use to build things like bridges, planes, and spaceships—you can’t just assume that the abstract principles will automatically work in the same way across every relevant situation. Instead, a lot of work needs to go into figuring out the details of how the abstract principle will work in a given situation. Further, to really get at the concept you’re trying to measure—whether it’s force on a charged particle or willpower—you need to carefully construct the experimental situation to shield it from outside forces.
Psychologists whose theories make predictions that don’t replicate often like to invoke “hidden moderators” as a reason the study didn’t work when it was repeated. There’s some real expertise to getting a study to work out, and it involves setting things up just so to get at the effect. Applying Cartwright’s ideas, we would say that this is right. It does take a lot of work and expertise to figure out the proper set-up to get at the concept we really want to measure. A key part of the science is using the same set-up over and over again, working out all the kinks, so that you have a procedure that consistently allows you to see the effect you want to study.
It’s not that physics doesn’t have hidden moderators. Of course it does! Consider trying to use the laws of physics to predict how a leaf will blow in the wind: there are so many complex and varied forces acting on it that it’s practically impossible to predict where it will go. What physics has done is to tackle this problem head-on by picking specific set-ups—like a pendulum swinging or a particle being shot at a plate—that it studies very intensely until it fully understands what the “hidden moderators” are. In essence, it brings the hidden moderators into the light. Once it’s done that, they can be properly accounted for, and scientists can start to develop a very precise understanding of what will happen when they change the concept they’re interested in—like willpower.
The problem with the “hidden moderators” argument in psychology isn’t that these aren’t important factors. It’s that they are only invoked selectively. When a study “works,” we don’t worry about them. When it doesn’t give the results we want, then hidden moderators are at work. Instead of just dismissing this argument, psychology needs to take it much more seriously, and build in an understanding of hidden moderators from the beginning. That means having fewer fun, creative paradigms that are staged like plays (“accidentally” dropping pens and seeing how many the participant bends over to pick up for you), and picking a fixed set that we can take the time to deeply understand.