Toward a More Self-Correcting Psychological Science
How to make psychology the true science it has always aspired to be.
Posted Dec 14, 2017
My prior post here identified a slew of ways in which psychological science has not lived up to its self-proclaimed status as a "self-correcting" science. One could view that as me just being a negative Nancy. To (self-?)correct that, in this post, I identify a slew of constructive steps that could make psychology the self-correcting science it has long claimed to be.
The ideas for this essay emerged from the Scientific Self-Correction working group of the Society for the Improvement of Psychological Science. A follow-up blog post will introduce in more detail this new organization, which is working to, shockingly, improve psychological science.
Contributors to the ideas presented here include Lisa Auster-Gussman, Andy Vonasch, Marcel van Assen, Richie Lenne, Joe Hilgard, Victor Keller, Traci Mann, Alex McDiarmid, Alexa Tullett, and Simon Columbus. Nonetheless, as with the earlier essay, all errors, outrages, misstatements, and other problems with this essay are entirely mine.
SIPS stands for the Society for the Improvement of Psychological Science. This is a brand-new organization, holding its first conference in 2016.
I will be blunt: SIPS rocks. No, really. It is wild. It is dominated by mid-career and young scholars with many undergraduates and graduate students. Hardly anyone from my generation attends (I am 62), which, of course, makes sense, because it is my generation that got us into this mess, and because many of the most senior and successful people in the field are extraordinarily protective of their status.
As I was working on this post, another group (some of whom are quite active in SIPS) came up with this wonderful idea: They are collecting instances where researchers have "lost confidence" in one of their prior findings, along with explanations of why. They will compile these into a single paper that will eventually be published as a scholarly article, something of a compendium of actual self-correction in psychology.
SIPS is the future, and it is wild. At a typical academic conference, famous senior people and rising stars get to pontificate and publicize their bold new discoveries, or rehash their former glories, while everyone else sits around and listens, with maybe a few questions at the end.
Not at SIPS. I missed the 2016 event, but was there in 2017. SIPS’ main events are workshops and “hackathons.” Want to learn better statistics? Lots of workshops. Want to learn the new open science practices to ensure transparency? Lots of workshops.
But my favorite events were the hackathons and “unconference” sessions. Hackathons are small group meetings, involving anyone who wishes to participate, to figure out how to improve some aspect of psychological science. Think the journals are a mess? There is a hackathon for creating a journal based on the new principles of open science. Think the culture in most departments does not support optimal scientific practices? There is a hackathon to change the academic research culture.
And what the hell is an “unconference”? It is an intellectual oven. First, someone proposes a half-baked idea for somehow improving psychology. If even a few other scholars think this might be an interesting idea, they meet, and they try to bake the half-baked idea into something that might actually change scientific practice. If the idea has legs, the unconference can morph into a hackathon, where the goal is no longer to throw ideas around, but rather to figure out what to implement in order to improve the field.
And that, gentle reader, is the source of this post. I have been frustrated by failures of self-correction in psychology for a long time; my 2012 book on social perception was something of a forced march through such failures in social psychology with respect to claims about stereotypes, error, bias, and self-fulfilling prophecies in social perception. And this article was my first foray into self-correction as a scholarly topic, per se.
But I did not even think of “working on” facilitating scientific self-correction as a goal until launching this unconference at SIPS. Because of the relatively informal SIPS process, people came and went. Overall, I would guess that about 15 scholars were there for at least some of the time, and most of the discussions had at least 10 people present.
Our unconference was so successful that it quickly evolved into a hackathon; that is, we quickly went from whining and moaning to coming up with specific ways to improve self-correction in psychology. Here are some additional ideas our self-correction group generated at our very first meeting:
An award for Champions of Scientific Self-Correction. Self-correction is hard because it is hard to admit we were wrong. And, so far, the system of science is lined up almost entirely to make psychological scientists defensive. Starting in graduate school, we are trained to defend our ideas and research findings from critics, and to explain away counter-evidence or alternative explanations. We brand ourselves with our phenomena, and, if those phenomena turn out to be illusory, our brand is damaged. There go the TED talks, the bestselling books, and the fast track to grant funding. Solution? Recognize self-correction as an act of intellectual strength, not a sign of weakness. One concrete step is to create an incentive to self-correct, or at least a reward acknowledging extraordinary acts of self-correction. An award for self-correction might, at minimum, de-stigmatize it. And it might do more; as an award that scientists could include on their vitae, it might encourage a greater degree of openness to self-correction.
Raise awareness. Coordinated push for blogs/essays/editorials on obstacles to self-correction, and the need for reforms to facilitate self-correction.
Badges for peer-reviewed journal articles that include explicit statements of what would falsify a theory, hypothesis, or conclusion. Badges are a relatively recent innovation in scholarly publishing within scientific psychology; they are intended to serve as indicators of scientific quality, validity, and transparency. Articles can currently receive badges for making their data or materials publicly available, and for having posted a pre-registered plan describing the hypotheses, methods, and analyses. We raise the possibility of a fourth badge. One of the core problems in psychological science is the wide theoretical flexibility that lets researchers interpret almost any finding as “support” for their theory or claim.
Consider this example: One does not need to be a psychological scientist to know that, in general, if prejudice predicts behavior at all, it should predict discrimination against the target of one’s prejudice. Sexism should predict discrimination against women; racism should predict discrimination against racial groups other than one’s own. Nonetheless, a paper claimed confirmation of a hypothesis that higher implicit prejudice (as measured by the Implicit Association Test [IAT]) would predict more pro-black behavior! If causing more anti-black and more pro-black behavior both are taken as confirmation of a hypothesis about prejudice, what would ever constitute disconfirmation?
Similarly, authors commonly engage in post hoc discounting of the importance of research that disconfirms their pet theories and hypotheses. Because psychological scientists are currently neither required to state explicitly what would constitute such disconfirmation, nor rewarded for doing so, they have almost infinite flexibility simply to criticize such work. That becomes far more difficult if they have to state, upfront, what evidence they would accept as disconfirmation.
Theory Clearinghouse. Whether online or in book form, create a resource that identifies theories that are known to generate (even partially) contrasting predictions. To the extent that failures to self-correct occur because people are not aware of alternatives, such a resource should facilitate self-correction. It will also facilitate psychological science in other ways. Platt (1964) argued for strong inference—the idea that, by pitting alternative theories against one another, we can make the most rapid progress possible in scientific research, far more rapidly than simply testing our pet theories.
Psychology needs to become a more efficiently self-correcting science. Psychologists often engage in their work to make people’s lives better. But our track record for doing so is spotty at best, in part because false or misleading claims can serve as the basis for interventions and advice. Whether you are simply a consumer of psychological research and ideas or a researcher yourself, self-correction is, or at least should be, one of the hallmarks of what makes psychology a “true science.” I am not really sure what the term “true science” actually means, but, for practical purposes, I would like it to mean something like, “A field that produces research that leads to insights that are actually true, and that, sometimes, can actually lead to things that constructively improve people’s lives.” Facilitating self-correction is one important tool for doing so.
If you find these sorts of ideas interesting, consider following me on Twitter:
https://twitter.com/PsychRabble My focus there, as here, is primarily on issues of science reform, academic freedom, speech issues in the academy, and also stereotypes, prejudice, and discrimination.