I appear to be in a tiny minority of psychological researchers who believe failures to replicate influential, classic studies are just as important to publish as the originals. Studies typically do not become highly cited classics because they receive a reception of "hmm, that's interesting, let's see if it really holds up." They become classics because they tell stories that many other researchers want to believe (they confirm theoretical or political perspectives that researchers hold dear).
Researchers love to point out the inherent ambiguity of failed replications, and they are right: replications may fail for many reasons.
Far less frequently, however, do researchers point out the inherent ambiguity in the original "successful" studies that found significant effects. False "successes" can occur for many reasons as well.
Furthermore, many studies in social psychology and other fields, some quite influential and classic, have proven irreplicable (see references below). Look for my next blog post, which will be a simple list of irreplicable or difficult-to-replicate studies in social psychology.
Many journals in social psychology explicitly disdain publishing attempted replications on the grounds that "there is no new contribution." Those grounds are severely misguided. If the replication succeeds, the new contribution is the confirmation that the original study was, in fact, actually correct and its conclusions likely to be justified. Going from the first study—"Interesting, I wonder if that result holds up and is really believable"—to the replication—"Wow, that is really true" seems just as large a marginal increase in knowledge as going from nothing to the first study.
If the replication fails, the conclusion is that the original study, no matter how innovative, creative, or "important," cannot necessarily be believed or taken at face value, and additional research is needed to identify whether and under what conditions (if any) the effect appears. That is a ton of new information and warrants publication in the same journal that published the original article. In addition to journal policies against publishing replications and reviewer disdain for replications as being unoriginal, failed replications are also often very difficult to publish because they are (dysfunctionally, in my opinion) sent to the original researchers for review—and, of course, those researchers have a vested interest in making sure the failed replication never sees the light of day.
The journals have exactly the wrong policies. Instead, replications of previously unreplicated studies should have higher than usual priority and failures to replicate should almost never be sent to the authors of the original study. Instead, they should be sent for review to researchers with no conflict of interest.
It is quite reasonable for laypeople and experts alike to be skeptical of one or two reports, no matter how "scientific" they are, and no matter how prestigious the journal in which they appear. Typically, a single demonstration of almost anything—a baseball player getting a hit, a broker recommending a good stock, a restaurant providing a mediocre appetizer—is not taken as strong evidence for any conclusion (in these examples, that the player is a great hitter, the broker will make you tons of money, or the restaurant is awful) in almost any other walk of life. Scientific "results" should be treated identically—not believed, or believed very tentatively, pending consistent demonstrations that such results actually hold up.
I would go further: No one should ever take seriously the conclusions reached in ANY study (including mine!) unless and until they have been replicated, preferably more than once, by someone other than the original researchers, their collaborators, or current or former students.
Bartlett, T. (2012). Is psychology about to come undone? Chronicle of Higher Education. Retrieved on 8/7/12 from: http://chronicle.com/blogs/percolator/is-psychology-about-to-come-undone/29045
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2, 696-701.
John, L. K., Loewenstein, G., & Prelec, D. (in press). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science.
Jussim, L. (2012). Social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. NY: Oxford University Press. (Abstracts, first chapter, and excerpts available at: http://www.rci.rutgers.edu/~jussim/)
Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.