Psychological science has been abuzz with concerns about the replicability of research findings for quite a while now. The core idea is straightforward: if psychological research findings are to be trusted as real and valid, then investigators in Lab Y in Alaska should also find a psychological effect observed by researchers in Lab X in Arkansas, or even Lab Z in Azerbaijan. In fact, an initiative promoting replication work is being led by the Association for Psychological Science (APS).
Why does replication matter? Put simply, if psychology is to be recognized as a science, then its findings need to be reliable—theories cannot be built if the effects used to develop them cannot be found by other researchers (or, in some cases, by those who discovered them in the first place). Think about it: What research “facts” appearing in psychological journals and books (including the myriad textbooks used to educate students at all levels) should be trusted if replication is not a routine activity? There is also a problem (and not a pretty one—and let’s hope a rare one) posed by intentionally faked research results that make it into the literature. Such scandals do not make psychology’s case for full membership in the sciences any easier.
To be fair, good psychological scientists generally demonstrate a novel finding in more than one study before seeking to publish their work. First, a casual review of mainstream psychology journals reveals that multi-study articles are now the norm (this was not the case 20 or 30 years ago). In fact, it’s very hard to get a manuscript accepted into the best journals if the article doesn’t present the findings in a package of three, four, five, or even six separate experiments. Second, good research builds on existing research, which means established (i.e., usually replicated) findings are a big part of the process. Researchers working on the same topic often borrow measures, methods, techniques, and so on, thereby often finding results similar to those noted earlier by other investigators. (There is a problem, though, when a result reported in the literature is not found by another researcher: it’s very hard to publish a null result once an original finding is in print, because the presumption is that the original investigator got it right and the replicator did something wrong.)
So, replicating research findings makes sense—a lot of sense—and members of the psychological community recognize the importance of replication efforts. But what about replicating pedagogical demonstrations, assessment activities, student learning outcomes, and research on the scholarship of teaching and learning (SoTL)—should these efforts also be subject to the replication criterion?
What does this mean for psychology teachers who do class-based research? Ideally, it means that, whenever possible, instructors should not only document what they did and how they did it but also replicate any effects at least once before sharing their findings. So, if someone teaching two sections of the same class introduces a novel pedagogical strategy in one section (using the other section as a control group) and the hypothesized result is found, a second (replication) study should be performed in a subsequent semester before the results are written up for publication. So-called “one-shot” projects typically don’t tell us much about the effect in question, or about how students learn or perform on the usual assessments in response to a new teaching technique.
Now, I am not suggesting that casual activities teachers create “on the fly” always need to be replicated (though that might be nice). On the other hand, once a teacher decides to publish her findings in a pedagogical journal, such as Teaching of Psychology (ToP) or Psychology Learning & Teaching (PLAT), some effort towards replication is a good idea. More to the point, the rigor of psychology pedagogy journals has increased substantially, which means that one-shot studies are less likely to be published anyway. Indeed, would-be authors are likely to be asked to conduct a second (replication) study and to revise and resubmit their manuscript when the (anticipated) results are in. Yes, this takes more time and more effort, but it also means that a positive outcome (a replicated effect) means something: the effect is there for other teachers and researchers to use in their own work.
And there is something to be said for ensuring the integrity of one’s own classroom in this process, too. So, if you have developed an activity that seemed to improve students’ test or quiz scores, class engagement, attendance, written assignments, or the like, wouldn’t it be nice to know that it wasn’t a fluke so you can make it a regular part of your teaching repertoire? As we enter the new year, I encourage creative psychology teachers to think about ways they can demonstrate the reliability of the activities they develop (or routinely use) in their teaching—it would be great if each teacher selected just one activity/effect to replicate. As we collectively build and promote a replicable science of psychology, we should not neglect the part of that science that deals with quality teaching and learning.
Have a productive and engaging new year!