The Gutsy Move Psychology Needs to Make

Can psychologists learn from research on intelligence analysts?

Posted Jun 02, 2019

Phil Tetlock's forecasting tournament asked regular people to make the kinds of predictions intelligence analysts routinely do.
Source: Photo by Jhonis Martins from Pexels

U.S. intelligence analysts didn’t look great at the end of the forecasting tournament described by Dan Gardner and Phil Tetlock in the book Superforecasting: The Art and Science of Prediction. A group of everyday people without access to classified intelligence had just beaten them at forecasting major world events, answering questions like “How many refugees will the crisis in Syria create?” better than the professionals.

Tetlock’s research group participated in a large forecasting tournament hosted by IARPA (the Intelligence Advanced Research Projects Activity), in which participants made predictions on a variety of questions designed to mirror those intelligence analysts encounter in their normal course of work. His approach was to recruit large numbers of people who were willing to act as forecasters, give them some basic training in probability and reasoning, and let them loose. His findings indicate that specific personality traits and intellectual qualities led some people to consistently make the best forecasts.

I’ll leave it to you to read the book to get all the insights about qualities of good forecasters (or I may post on it again—I’m Moneyball-level obsessed with it right now). But one insightful point that Tetlock makes is that the choice to even host this tournament was a gutsy one by the U.S. intelligence community.

By subjecting themselves to an objective test of their forecasting accuracy, the intelligence community was exposing itself to the risk of failure and embarrassment. If a bunch of untrained amateurs without access to the highest-quality information could make better predictions than the professionals did, it would suggest that the entire profession wasn’t doing as well as it could—or should—be doing.

The key to figuring out how well the community was doing—and what analysis strategies worked—was actually testing the accuracy of predictions against a real, objective standard. The intelligence community was risking wide-scale professional embarrassment, but they did it anyway. Ultimately, knowing if they were right or not—and how to improve their accuracy overall—was more important.

Until outsiders started tracking the accuracy of its predictions, the intelligence community didn't know how accurate it was.
Source: Pixabay on Pexels

The social sciences (and other fields, such as medicine and nutrition) are facing a similar dilemma right now. Are we willing to risk the possibility of being wrong—and of possibly having been wrong for quite some time at precisely the thing we have been claiming to be experts in—in exchange for the reward of gaining a better understanding of how to be right?

Judging the accuracy of a forecast seems like an obvious task, but as Gardner and Tetlock spell out, it is quite difficult to do fairly and objectively. Take a pundit who writes that the election of the new Ukrainian president will lead to increased conflict with Russia. What counts as increased conflict? On what timescale should we judge the prediction: is it wrong if conflict hasn't increased after 1 year in office? What about 3 years? If the Ukrainian president makes aggressive statements that seem like they should increase conflict, but nothing changes, does that count?

To score the pundit’s prediction, these details need to be hammered out. Similarly, to score a social psychologist’s theory, the details of what it predicts need to be hammered out.
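To make this concrete, here is a minimal sketch, in Python, of how a forecast can be scored once the question has been pinned down to a specific event and deadline. It uses the Brier score (the squared gap between the probability a forecaster assigned and what actually happened), the kind of scoring rule Tetlock discusses in Superforecasting; the question wording, forecasts, and outcome below are hypothetical.

```python
def brier_score(forecast_prob, outcome):
    """Squared error between a probability forecast and what happened.

    forecast_prob: probability assigned to the event occurring (0 to 1).
    outcome: 1 if the event occurred, 0 if it did not.
    Lower is better: 0.0 is a perfect forecast, and an always-hedging
    forecast of 0.5 earns 0.25 no matter what happens.
    """
    return (forecast_prob - outcome) ** 2

# Hypothetical, precisely worded question: "Will armed conflict between
# Ukraine and Russia escalate, by an agreed-upon definition, within
# 12 months of the new president taking office?"
confident_pundit = 0.80   # "conflict will increase," stated as a probability
hedged_pundit = 0.55      # a vaguer pundit, forced to commit to a number
outcome = 0               # suppose no escalation occurred within the window

print(brier_score(confident_pundit, outcome))  # 0.64 -- a costly miss
print(brier_score(hedged_pundit, outcome))     # 0.3025 -- less wrong
```

The point is not the arithmetic; it is that none of this scoring is possible until the vague claim ("increased conflict with Russia") is turned into a question with a definition, a deadline, and a probability.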

For example, the Theory of Cognitive Dissonance states that people experience psychological distress when they hold two or more contradictory beliefs, ideas, or values. When people encounter information that contradicts a belief, they will try to resolve the contradiction to reduce their discomfort. The theory has been used to explain many of the seemingly irrational behaviors and belief systems we observe in others. Some examples:

Smoking: If someone knows that smoking causes cancer but still smokes, they experience cognitive dissonance. Their belief isn’t in line with their behavior. This could cause them to downplay or denigrate the evidence linking smoking to cancer.

If these men know smoking causes cancer, they should feel some cognitive dissonance. But do they?
Source: Photo by Inguva Venkata Eshwar from Pexels

Trump and North Korea: A recent news article describes Trump’s negotiating strategy with North Korea as creating cognitive dissonance in Kim Jong Un, because Trump praises Kim personally while expressing disapproval of his nuclear arsenal. Under this approach, which business negotiation expert William Ury describes as a central negotiating tactic, the person being praised (in this case Kim) begins to separate himself from his position in order to resolve the dissonance of being liked by someone (in this case Trump) while engaging in behavior that person does not like (in this case maintaining a nuclear arsenal).

These all seem like relatively straightforward applications of cognitive dissonance, and they appear to explain patterns of behavior that would otherwise look confusing or illogical. The problem is that it’s unclear when the theory will apply in future situations.

If an individual smokes, but also knows that smoking causes cancer, will that person necessarily experience dissonance and try to discredit the link between smoking and cancer? Or will the person admit that the evidence is good, and they don’t want to smoke, but just find it difficult to break the habit, because it helps them release stress?

If Trump continues to praise Kim and criticize his behavior, will that necessarily lead Kim to start distancing himself from his position on maintaining a nuclear arsenal? Or will Kim interpret Trump’s praise as a signal that as long as he pretends to go along with denuclearization and allows Trump to score political points at home, he doesn’t need to follow through and actually take apart his nuclear arsenal?

This isn’t to say that I don’t think cognitive dissonance is “real.” I think it might very well be a powerful force that changes people’s minds. Instead, I am arguing that if we were to score psychological theories rigorously, the way that Tetlock and his colleagues scored intelligence analysts, then we would need to define cognitive dissonance more carefully. We would need to do the equivalent of translating a general statement made by a political pundit into a specific prediction that needs to hold for the approach to work.

To really start to generate a more refined theory of cognitive dissonance, we would need to take experts in the theory and ask them to make predictions about where and when it will apply—and, ideally, how strongly the effect will change a person’s belief, idea, or value. The prediction need not be in an all-or-nothing form; it can be “20 percent of the time, this will occur, and the effect will be roughly equal to a 5- to 15-point bump on a 100-point opinion poll.” But to develop a theory that we can use to navigate the world around us—not just to create stories about what has already happened—we need to start making true predictions that have the possibility of exposing weaknesses. And possibly even embarrassing ourselves.
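For illustration only, here is a minimal sketch of what scoring such a preregistered, quantitative prediction could look like. Every number and name in it (the 20 percent rate, the 5- to 15-point range, the "observed" results) is a hypothetical placeholder, not data from any real study of cognitive dissonance.

```python
# A hypothetical preregistered prediction, written down before data collection.
preregistered = {
    "expected_rate": 0.20,   # belief change predicted in roughly 20% of participants
    "rate_tolerance": 0.05,  # how far off the rate can be and still count as a hit
    "bump_range": (5, 15),   # predicted shift on a 100-point opinion scale
}

def score_prediction(prereg, observed_rate, observed_mean_bump):
    """Check each part of the preregistered prediction against observed results."""
    rate_hit = abs(observed_rate - prereg["expected_rate"]) <= prereg["rate_tolerance"]
    low, high = prereg["bump_range"]
    bump_hit = low <= observed_mean_bump <= high
    return {"rate_correct": rate_hit, "bump_correct": bump_hit}

# Hypothetical outcome: belief change in 31% of participants, average shift of 4 points.
print(score_prediction(preregistered, observed_rate=0.31, observed_mean_bump=4.0))
# {'rate_correct': False, 'bump_correct': False} -- the prediction, as written, missed.
```

Writing the prediction in this form before seeing the data is exactly what makes a miss informative, rather than something a flexible after-the-fact story can explain away.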

In other words, to find out how reliable and useful our theories are, psychologists need to regularly employ a key reform advocated in the Credibility Revolution: preregistration. We need to make the gutsy decision to write down exactly what we think will happen before any data is collected or analysis is performed, risking embarrassment so that we can really know what works.