I have been in the field of cognitive science for about 25 years now. There are three significant changes that I have seen in the kinds of research that get done in that span.
First, there is tremendous pressure to publish a lot of papers. When I went on the job market for the first time about 20 years ago, I had 4 published papers, and I landed a job on the faculty at Columbia University. Now, many of the job candidates applying for faculty positions here at the University of Texas have more than 10 publications.
Second, the typical length of a psychology paper has gotten shorter. Until about 1990, most papers were 10-20 pages long and reported about 4 experiments. About 20 years ago, the journal Psychological Science started publishing short articles. Since then, almost every scientific journal has added a category of short reports (including the journal I edit, Cognitive Science). These shorter reports are just 3-5 pages long and often describe one or two studies.
Third, there has been increased interest in counterintuitive findings in the literature. When I first started doing research, it was very hard to publish a new finding. It often took several years and many replications of a finding to convince skeptical reviewers that a new phenomenon was real and worthy of study. Now, the journals are much more open to findings that are surprising. Indeed, the mission of Psychological Science early in its existence was to focus on research results that would make you go “Wow!”
This combination of factors has had both benefits and costs to the field. On the positive side, researchers have begun to go beyond the topics that dominated the field in the 1970s and 1980s. Because it is easier to explore new topics, researchers have expanded the range of research considerably.
On the negative side, though, this openness has led to abuses. In a few high-profile cases, there has been outright fraud. Over the past 5 years, several researchers have resigned their posts because they had fabricated their data.
More commonly, though, the pressure to publish novel findings has led researchers to cut corners. Rather than replicating a new effect several times, researchers run one or perhaps two studies and then rush to publish the new result. In addition, several reports have pointed to common practices in the research community that inflate the probability that experimenters will find evidence for differences between experimental conditions that do not really exist.
For example, researchers working on a new technique may run a small number of participants. If the experiment does not seem to be “working” (that is, the experiment is not showing the desired finding), then the experimenters tweak the method and try again. They continue doing this until they get the expected finding. This practice inflates the likelihood that researchers will observe a difference between groups that is a statistical anomaly and not a real research finding.
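How much does this tweak-and-retry cycle inflate the error rate? A small simulation makes the point concrete. The sketch below (my illustration, not from the article) draws both “conditions” from the same population, so there is no real effect, and compares one honest attempt against a lab that gets five tries at the same finding:

```python
import random
import statistics
from statistics import NormalDist

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means
    (normal approximation to the t statistic)."""
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def run_study(n=30):
    # Both "conditions" come from the SAME population,
    # so any significant difference is a false alarm.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return two_sample_p(a, b)

def false_positive_rate(max_attempts, n_labs=2000):
    # Each simulated lab tweaks and reruns the study until it
    # "works" (p < .05) or gives up after max_attempts tries.
    hits = sum(
        any(run_study() < 0.05 for _ in range(max_attempts))
        for _ in range(n_labs)
    )
    return hits / n_labs

random.seed(1)
fpr_one = false_positive_rate(1)   # one honest attempt: about .05
fpr_five = false_positive_rate(5)  # five tries: roughly 1 - .95**5, about .23
print(fpr_one, fpr_five)
```

With five independent tries, the chance of at least one spurious “success” rises from the nominal 5 percent to over 20 percent, which is why retrying until an experiment “works” produces so many phantom findings.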
So, what can be done?
The only way to determine which findings in the literature are real is direct replication. That is, researchers from a variety of labs need to take interesting studies and repeat them exactly to determine whether the results hold up.
The problem is that direct replications of studies are hard to publish. The reputation of a scientific journal hinges on citations. Journals become more prestigious when other researchers cite the work published in that journal in later papers. Researchers themselves are evaluated on how often their work is cited. Direct replications of studies are not often cited, because people want to go back to the original source when writing about a topic.
So, there is little incentive for researchers to replicate findings directly. And that means that it can be hard to determine whether a surprising new finding is real.
Now, the journal Perspectives on Psychological Science is trying to correct this problem. The editor of this journal, Barbara Spellman, has been worried about the issue of replication for some time. She has enlisted the help of prominent psychologists Daniel Simons and Alex Holcombe to serve as the editors of a new project on replication at this journal.
Researchers will be encouraged to repeat interesting studies using the same methods used by the original researchers. These reports will be collected and published as a group. Because the aim of this section of the journal is replication, the focus will be on a statistical measure called effect size. Effect size is a measure of the strength of an experimental result. Most studies in the literature focus on a different criterion, which is statistical significance. Statistical significance reflects how unlikely the observed difference between conditions would be if there were really no effect at all. But, research practices like the one I described earlier can distort this measure of significance. So, the measure of effect size is a better focus for work on replication.
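The distinction between the two measures is easy to see in a simulation. In the sketch below (an illustration of mine, not the journal's actual analysis), a small but real effect of fixed strength is tested at three sample sizes: the effect-size estimate (Cohen's d) stays near its true value, while the p-value depends heavily on how many participants were run:

```python
import random
import statistics
from statistics import NormalDist

def cohens_d(a, b):
    """Cohen's d: the mean difference in pooled-standard-deviation units."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

def p_value(a, b):
    """Two-sided p-value (normal approximation to the t statistic)."""
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(2)
results = {}
for n in (20, 200, 2000):
    # A small but real effect: the true Cohen's d is 0.2.
    a = [random.gauss(0.2, 1) for _ in range(n)]
    b = [random.gauss(0.0, 1) for _ in range(n)]
    results[n] = (cohens_d(a, b), p_value(a, b))
    print(n, round(results[n][0], 2), round(results[n][1], 4))
```

The same true effect can look “non-significant” with 20 participants and wildly significant with 2,000, while the effect-size estimate hovers near 0.2 throughout. That stability is what makes effect size the natural yardstick for pooling replications across labs.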
Ultimately, the idea is that if many researchers from many labs repeat an experiment, then the field will have a good idea of the strength of a result. Because Perspectives on Psychological Science is a prominent journal, it will encourage researchers to take on these replications. As this project moves forward, it will provide a firmer foundation of evidence for the development of new theories about the mind.
I am looking forward to seeing how this project turns out and to participating in these replications.