Psychology's Course Correction

After a string of failed follow-ups, psychologists are pushing to change how their field works.

By Matt Huston, published January 2, 2018 - last reviewed on April 17, 2018

For those who investigate human behavior, particularly social scientists, the past few years have been rocky ones. Retests of many high-profile claims have come out negative, including the idea that exerting willpower on one task makes it harder to do so on an unrelated one, or that striking a "power pose" can lead you to make bolder choices. A 2015 report found that fewer than half of nearly 100 published psychology findings held up in follow-up studies. Yet this replication crisis has empowered a growing movement for more robust research. Here's what a few of the reformers want to see happen next.

Testing and Retesting

Failures to reproduce some landmark findings have made headlines, but such jolts could be avoided if low-key replication attempts were baked into the standard research process, says Chris Chambers, a cognitive neuroscientist at Cardiff University. Once a finding is deemed significant, a system of rigorous replication would double- and triple-check it using the same procedure before it is taken as fact. "Shock therapy, at the moment, seems to be what's driving people," Chambers says. "The more we can normalize practices like replication, I hope, the less we'll need to rely on that shock." At the online journal Royal Society Open Science, he and his colleagues have taken the unusual step of committing to publish replications of studies that first appeared in more than two dozen other journals.

Open Predictions

Many psychology studies are presented as testing a predetermined hypothesis about some aspect of human behavior. In practice, researchers have long exercised the freedom to alter a study's hypothesis after the results are in. "It's really important that we eliminate bias from researcher behavior, like changing a hypothesis after the fact to fit unexpected results or analyzing data a lot of different ways and reporting only the most attractive outcomes," Chambers explains. Preregistration, a practice he advocates, involves recording hypotheses and analysis plans in advance. This leaves researchers less wiggle room in their analysis, but it also increases the odds that any seemingly noteworthy findings reflect more than chance.

Showing the Work

Another push for transparency has focused on the public release of the data on which researchers' conclusions are based. "We're definitely seeing an uptick, but I think we're still in the early days," says Simine Vazire, a psychologist at the University of California, Davis. While many psychologists have traditionally kept the data they collect private, the scientific case for publishing this material, when it's feasible to do so, is straightforward. "One thing that distinguishes scientific ways of knowing from other ways is the willingness to make the basis of your claims available for others to scrutinize and reanalyze," Vazire says. Full access to the data makes it easier for other scientists to distinguish weak cases from strong ones.

Changing Incentives

In psychology, as in other academic fields, the need to get studies published—career advancement rides on it—encourages scientists to selectively present evidence that seems to support their claims, since such evidence is typically more appealing to journals. But contrary or ambiguous results are key to the scientific pursuit of truth. "We have to make sure the culture of incentives reinforces the values that we share without requiring people to think about them all the time," says psychologist Brian Nosek, who heads the nonprofit Center for Open Science. Working with Nosek's group, Chambers has promoted a model, Registered Reports, in which journals agree ahead of time to publish the results of well-designed studies, however they turn out—a policy that could deliver on the promise of a more honest body of evidence.