In many decisions in life, our choices carry differing amounts of risk. For example, deciding between a favorite, delightfully cheesy chimichanga dish at a local Mexican restaurant and the special of the day could mean deciding between an assured level of enjoyment and an uncertain, unknown one. The special could surpass the safe, go-to meal of deep-fried cheesiness, but it could also end in regret when the sauce on the new meal turns out to be hotter than a volcano, leaving you unable to taste your well-deserved margarita.
Fortunately, many of the risky decisions we make from day to day are relatively unimportant. Some, however, can have important consequences. For example, the decisions that intelligence
agents make have important national security implications and could turn the tide of war, possibly saving or ending lives. Intelligence agents have a lot of experience making risky decisions; it is, after all, what they do day in and day out. That experience, however, may bias the decisions they make.
In a recent study, Dr. Valerie Reyna, a preeminent cognitive psychologist, and her colleagues found that intelligence agents were more likely than college students or other adults to be biased by the wording, or framing, of risky-choice problems.
Risky choice problems were made famous (at least in psychology, economics, and decision-making circles) in Daniel Kahneman and Amos Tversky’s work, in which they showed that people make irrational choices. Side note: This work led to a Nobel Prize—not too shabby! Here’s an example of a typical risky choice problem (taken verbatim from Reyna’s study):
Imagine that the United States is preparing for the outbreak of an unusual disease, which is expected to kill 600 people. Please indicate which option you prefer. (For each problem, people see either the GAIN FRAME options or the LOSS FRAME options, not both.)
CHOICES (GAIN FRAME):
a) 200 people saved for sure
b) 1/3 probability 600 people will be saved and 2/3 probability that no one will be saved
CHOICES (LOSS FRAME):
a) 400 people die for sure
b) 2/3 probability 600 people die and 1/3 probability that no one dies
First off, according to Expected Utility Theory, options a) and b) are equivalent: the expected value of each option is the sum of each outcome's value multiplied by its probability. For example, take option b) from the gain frame: 600 people saved with probability 1/3, plus 0 people saved with probability 2/3, gives an expected value of 200 people saved, the same as option a). Secondly, the gain-frame and loss-frame scenarios are equivalent (if 200 of the 600 people are saved, then 400 die).
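That weighted-sum arithmetic is easy to check for yourself. Here is a small Python sketch (the `expected_value` helper is purely illustrative, not anything from the study; `Fraction` keeps the 1/3 and 2/3 probabilities exact):

```python
from fractions import Fraction

def expected_value(outcomes):
    """Expected value: the sum of each outcome's value times its probability.

    outcomes: a list of (value, probability) pairs.
    """
    return sum(value * prob for value, prob in outcomes)

third, two_thirds = Fraction(1, 3), Fraction(2, 3)

# Gain frame: expected number of lives saved
sure_gain = expected_value([(200, 1)])                        # 200 saved for sure
risky_gain = expected_value([(600, third), (0, two_thirds)])  # all-or-nothing gamble

# Loss frame: expected number of deaths
sure_loss = expected_value([(400, 1)])                        # 400 die for sure
risky_loss = expected_value([(600, two_thirds), (0, third)])  # all-or-nothing gamble

print(sure_gain, risky_gain)  # 200 200 -> equal expected lives saved
print(sure_loss, risky_loss)  # 400 400 -> equal expected deaths
```

In each frame the sure option and the gamble come out to the same expected number of lives, which is exactly why a purely rational chooser should be indifferent between them.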
If people were entirely rational, they would choose all options equally often (knowing the options are equal, they would pick one at random). People do not choose all options equally, because they are not entirely rational. When the question is framed as gains, or lives saved, people more often choose the sure option (a), meaning they are risk-averse: they would rather save some people for sure than risk saving nobody. When the question is framed as losses, or deaths, people more often choose the risky option (b), meaning they are risk-seeking: they would rather risk everyone dying if it means there is a chance no one dies. Basically, the wording can induce people to be either risk-averse or risk-seeking even when the outcomes are equivalent. This is the power of language.
The interesting thing about Reyna and her colleagues' study is its prediction that intelligence agents are even more likely than other people to be biased by the wording (framing) of the choices. The researchers made this prediction because with expertise comes a tendency to rely on gist-based representations instead of verbatim ones, meaning experts are more likely to think of things in summarized form rather than work through the exact numbers step by step. Intelligence agents would therefore be less likely to do the expected-utility calculations and compare the exact quantities that result. They would be more likely to boil the choices down to "save some" versus "save all or none" in the gain frame (resulting in a choice of "save some") and to "kill some" versus "kill all or none" in the loss frame (resulting in a choice of "kill all or none").
When comparing intelligence agents’ responses to those of college students and to those of adults similar in age to the intelligence agents on 30 different risky choice problems, Reyna and her colleagues indeed found that the intelligence agents were more affected by the wording of the choices (i.e., they were more biased and irrational).
Reyna and her colleagues also asked participants to rate their confidence in their choices as they made them, and found that the intelligence agents were more confident than the other groups. This mirrors previous findings about experts and overconfidence (see the previous blog post about overconfidence).
That intelligence agents, experts in risky decision-making, may be more prone to decision-making biases, and more confident while making those very decisions, may be an unfortunate side effect of expertise. On the other hand, when real-life options are equivalent, people who make risky decisions for a living must develop some method of choosing between them, and choosing randomly would probably not be looked upon favorably by those affected by the choices. Certainly, making choices at random would make for a difficult career politically.
At the end of the day, though, before you abandon the opinions or choices of experts, know that alongside these adverse effects, expertise still confers important advantages, like making decisions more quickly and separating relevant from irrelevant information, advantages that might matter more in the long run than the effects of the biases.
Kahneman, D., & Tversky, A. (Eds.). (2000). Choices, values, and frames. New York, NY: Cambridge University Press.
Reyna, V. F., Chick, C. F., Corbin, J. C., & Hsia, A. N. (2014). Developmental reversals in risky decision making: Intelligence agents show larger decision biases than college students. Psychological Science, 25(1), 76-84.
*Reyna and her colleagues were also able to get rid of everyone’s biases and further explain the source of the biases with additional manipulations (see original article for details).