Rejecting the "Secret Sauce" Excuse

New research blows a hole in a popular excuse for dismissing failed replications.

Posted Dec 13, 2019

A classic study in Terror Management Theory failed to replicate.
Source: Photo by Matthis Volquardsen on Pexels.

For us to trust a research finding, we should be able to count on it. By that, I mean we should reliably get the same result if we repeat the study. If an experimental procedure produces the same result whenever we run it, then we can start to build on that finding and try to explain why it happens. If we can’t get the same result when we repeat the study, then we shouldn’t put too much stock in the original one. It might have been a fluke, or it might have occurred for some reason other than what the researchers thought was important.

It’s still hard to get all social psychologists to agree that replication matters. One of the arguments I often hear against the importance of replication is that people who replicate research are incompetent. They clearly don’t know all the special tricks required to get studies to work, tricks that aren’t shared in the scientific literature, but that people who really know the area have learned over time. (I wrote about this “secret sauce” approach to research last week.) Keeping the special tricks—or secret sauce—to yourself violates scientific values, but it does make the claim of “incompetence” plausible.

Recently released results of a new study cast doubt on this “secret sauce defense.” The Many Labs 4 project recruited nine labs to replicate a classic social psychology finding using only what was published in the literature. Another 11 labs were recruited to conduct the same replication with expert advice from the original authors. The key question was whether the study would replicate when replicators had the experts’ input (access to the secret sauce) but fail when they didn’t.

Procedures in the study being replicated.
Source: Figure by A. Danvers.

The study being replicated is based on Terror Management Theory, which argues that we use defense mechanisms to deal with our fear of death. If you are worried about dying, you might become more supportive of groups you belong to that will outlive you, like your nation.

The study tested this idea by asking people to write about their own death (or not) to make their mortality salient. Researchers then measured how much participants liked the author of a pro-American essay compared to the author of an anti-American essay. In the original study, American college students who had been asked to write about their own death showed a much stronger preference for the pro-American author. (A whole blog post could be written about whether this is the best way to test these ideas, and why we might want to do some careful work making sure these variables are good measures of a defense mechanism. The important part here is just that the methods from the original study are being replicated with expert input.)

The expert consultants made a few changes to the procedures: (1) they made the anti-American essay more extreme, (2) they added a longer filler survey between writing about death (or not) and reading the essays, (3) they specified that the procedure should try to put people in a relaxed mood (e.g., have a “relaxed research assistant”), and (4) they added a demographic item to assess how important the participant’s American identity was to them. These were the elements of the “secret sauce” they specified.

Results showed that the study did not replicate. Whether or not the replicators used the secret sauce, the study showed no effect of the death prime on pro-American bias.

Results of the study, as displayed in the preprint. “In House” labs used their own judgment; “Author Advised” labs used the original authors’ expert input.
Source: Klein et al.’s public preprint.

The researchers also checked a few different ways of testing the question. There was no effect when they excluded any participant who wasn’t white or wasn’t born in the U.S. There was also no effect when they included only people who strongly identified as American (meaning they answered the question “How important to you is your identity as an American?” with a 7 out of 9 or above). In all cases, the effect of having access to the secret sauce was almost exactly zero.

So what did we learn? Is it always wrong to claim that a replication failed because it wasn’t done properly? No, we can’t say that based on one result. But seeing that access to the secret sauce didn’t help in this case should lead us to update our beliefs. Now that we have seen it make no difference in one case, it seems much more plausible that it makes no difference in other cases, too. If someone says a study didn’t replicate because the replicators weren’t competent, we can now point to a test of whether that matters, and show that it didn’t. Sometimes an effect doesn’t replicate because there just isn’t a reliable effect there.