Most of my entries so far have addressed problems with psychology as a science: replicability problems, questionable research practices, political biases, and problems of interpretation and credibility. And I do think that understanding the limitations and problems of psychological science is incredibly important.
However, at the same time, I have never stated, implied, or intended to declare psychological research as bunk. There is a ton of good research going on out there. So, the question then becomes, "How does one know when to take some claim made by psychological researchers seriously?"
Although this is not a simple question, I attempt to provide some useful guidelines here about how to know when and whether to take science generally, psychological science specifically, and claims supposedly based on science seriously.
I. What is the Claim? Is the claim "We found X" or is the claim "X is true"? "We found X" is likely to be a description of the result of a particular study. Researchers do sometimes misinterpret their own and others' studies, but, at least, "We found X" is a narrow claim. Either the study did or did not find X. "X is true" is far more extreme. X might be true, but believing X to be true requires a much higher standard of evidence than is required for believing "They found X."
Here is why:
II. Never ever believe as "generally true" any conclusion from a single study. Single studies can produce erroneous or misleading results for a zillion reasons.
III. Never ever believe as "generally true" any conclusion emerging from a single researcher or team of researchers.
See my prior posts for more on points II and III above.
IV. If a result is reported in a single paper, it should be treated as, "Oh, that's interesting, I wonder if it is actually true" but not as "fact." Of course, the result is fact (unless the researchers committed fraud, which is highly unlikely). But that does not mean that the phenomenon they found or the conclusions they reached are justified or replicable.
V. Typically, when a result has been found by five or more independent teams of researchers, without any failures or qualifications, it is highly credible. Of course, there always could be something systematically wrong with the entire field, and someone, someday, may discover that. When that happens, one should be open to changing one's conclusion. Science differs from religion in that religion supposedly deals with "eternal" truths. There are no inherently eternal truths in science, because it is always possible that new data will come to light to change our beliefs.
VI. Also, my criterion of "five" teams of researchers is arbitrary. A reasonable person could select a higher or lower number. For me, however, if five separate teams find something, and no research is out there that contradicts those findings, I am usually pretty convinced, unless I can see something systematically problematic.
Note: "Independent teams" means teams who are not connected. A study by Dr. X, two replications by Dr. X's post docs, and two more by Dr. X's former grad students, DO NOT count.
VII. Seek meta-analyses! (Meta-analysis is a set of techniques for combining results from many studies, in part to see whether there really is a there there and, if so, how big it is.) When lots of research has been done in some area, meta-analyses usually provide EXCELLENT summaries of how large or small some phenomenon is. For example, many social psychologists seem to believe that gender stereotype biases (when judging individual men and women) are large, powerful, and pervasive (see my book for numerous examples). However, Swim et al. (1989) found that, on average, gender stereotype biases are one of the smallest effects in social psychology, averaging to a correlation (between target gender and perceiver judgment) of r = .04.
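To make the idea of "combining results from many studies" concrete, here is a minimal sketch of one common meta-analytic approach: a fixed-effect average of correlations using Fisher's z transformation, with each study weighted by its sample size. The studies listed are hypothetical numbers for illustration, not data from any real meta-analysis.

```python
import math

def fisher_z(r):
    """Fisher's z transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a Fisher z value to a correlation."""
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def fixed_effect_meta(studies):
    """Fixed-effect meta-analytic average of correlations.

    `studies` is a list of (r, n) pairs. Each study's Fisher-z value
    is weighted by n - 3, the inverse of its sampling variance, so
    larger studies count for more in the pooled estimate.
    """
    numerator = sum((n - 3) * fisher_z(r) for r, n in studies)
    denominator = sum(n - 3 for r, n in studies)
    return inverse_fisher_z(numerator / denominator)

# Hypothetical studies: (correlation, sample size)
studies = [(0.10, 50), (0.02, 200), (-0.05, 120), (0.06, 400)]
print(round(fixed_effect_meta(studies), 3))
```

Notice how the pooled estimate lands near the large studies' small effects rather than near the small study's larger one; this is exactly how a meta-analysis can reveal that a "large, powerful" effect averages out to something like r = .04.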
VIII. Give studies more credibility when they are not likely to have been p-hacked, and keep your eyes open for evidence of p-hacking. P-hacking occurs when researchers twist, distort, fry, broil, roast, and marinate their data in order to produce an analysis that reaches the scientific Holy Grail of p < .05, in order to publish. Known p-hacking red flags: small sample sizes (especially with unusually large effects), use of covariates, discarded participants, and "cute" or "counterintuitive" findings. Just because a report includes these red flags does not mean the researchers engaged in p-hacking. But they are more likely to have done so than when the report does not include such red flags. Reports without such red flags are more credible.
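Why does this flexibility matter? A short simulation can show how even one extra "researcher degree of freedom" inflates false positives. The sketch below, under simplifying assumptions (a z-test with known variance, and only one form of flexibility: peeking at the data early and collecting more participants if the first test fails), compares the false-positive rate of an honest single test with that of the flexible analysis. Both analyze pure noise, so any "significant" result is a false positive.

```python
import math
import random

Z_CRIT = 1.96  # two-tailed 5% cutoff for a z-test

def significant(sample):
    """Two-tailed z-test of mean 0, assuming known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    return abs(z) > Z_CRIT

def run_trials(trials=20000, seed=1):
    random.seed(seed)
    honest_hits = hacked_hits = 0
    for _ in range(trials):
        # Pure noise: the true effect is exactly zero.
        data = [random.gauss(0, 1) for _ in range(30)]
        # Honest analysis: one test on the full, pre-planned sample.
        if significant(data):
            honest_hits += 1
        # Flexible analysis: peek at n = 20; if not significant,
        # add 10 more participants and test again (optional stopping).
        if significant(data[:20]) or significant(data):
            hacked_hits += 1
    return honest_hits / trials, hacked_hits / trials

honest, hacked = run_trials()
print(f"honest false-positive rate:   {honest:.3f}")
print(f"flexible false-positive rate: {hacked:.3f}")
```

The honest analysis stays near the nominal 5%, while the flexible one climbs noticeably higher, and this is with just one extra peek. Simmons, Nelson, and Simonsohn (2011) show that combining several such flexibilities can push the false-positive rate far higher still.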
IX. In politicized areas of psychological research, be alert to the potential for political bias to distort the theory, the methods, and the interpretations. Most psychologists are liberals. In general, liberals are no more, and possibly less, biased in how they view science than are conservatives. However, there are so few conservatives in psychology that "conservative bias," however much it exists in the wider world, hardly exists in psychological science. Liberal bias, however, is alive and well. Be alert to research agendas that seem to be inspired and driven by ideological agendas. In good science, some question about human functioning drives the research; in biased science, proving a liberal worldview is "better" or "justified" drives the science.
Jussim, L. (2012). Social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. New York: Oxford University Press.
Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.
Swim, J. K., Borgida, E., Maruyama, G., & Myers, D. G. (1989). Joan McKay versus John McKay: Do gender stereotypes bias evaluations? Psychological Bulletin, 105, 409-429.