Illusion is the first of all pleasures.
~ Oscar Wilde
People make all kinds of errors and mistakes when judging and deciding. One task of psychological science is to understand these errors at a theoretical level and – if possible – to find ways to reduce them. The field of behavioral decision making, as well as the interdisciplinary field of behavioral economics, evolved from the seminal work of Daniel Kahneman and Amos Tversky (KT). KT introduced the idea that people reason with the help of a small number of heuristics, or rules of thumb, rather than by comprehensively analyzing the problems at hand. KT noted that this often works well, while also producing systematic and thus predictable biases. By revealing these biases, they hoped to understand human judgment and decision-making more deeply (Tversky & Kahneman, 1974). This hope found expression in the visual-illusion metaphor. KT (1984) proposed that, like visual illusions, judgment and decision errors are involuntary and difficult to correct. With time, the visual-illusion metaphor has hardened into an accepted doctrine whose presumed truth owes more to frequent repetition than to careful analysis and justification. In other words, the doctrine rests on the judgmental heuristic that repetition implies truth.
Dan Ariely has recently emerged as the public face of research on irrational judgment, decision-making, and economic behavior [I already commented on this in my post fine cantiere]. In TED talks and online lectures, Dan uses the visual-illusion metaphor explicitly and rhetorically to soften the audience up for his message that humans are predictably irrational decision-makers. Like others, he fails to explore why or how the metaphor might work. He begins with demonstrations of visual illusions such as Roger Shepard’s table-top illusion or the Leaning Tower of Pisa illusion. Trusting that the audience is impressed by the power and the coolness of these displays, Dan, like many others in this business, draws an a fortiori inference. If the visual system, which has been honed by evolution for millions of years, can fail so dramatically, how many more decision errors must occur in areas that are comparatively novel, such as finance? Arguably, people make more mistakes when investing than when looking at a potted plant, but how do errors in the latter case help us understand errors in the former?
In one TED talk, Dan offers a sample of three decision errors. First, the default effect occurs when people end up “choosing” different options when allowed not to choose at all, i.e., when the absence of any active selection returns the default. Impressively, countries in which being a potential organ donor is the default (one must opt out to decline) have far larger donor pools than countries in which not being a donor is the default (one must opt in to donate) (Johnson & Goldstein, 2003). Second, Dan reports that physicians are more likely to pull a patient back from scheduled surgery when they discover that they forgot to test the efficacy of one drug than when they notice that they overlooked two drugs. In the latter case, the physicians would need to make a second decision about the order in which to test the two drugs. They might consider this a drag. This effect is troubling because if two drugs remain to be tried, the chance that at least one will work is greater than if only one drug remains. Finally, Dan offers the asymmetric dominance effect first described by Huber, Payne, and Puto (1982). He asks us to imagine a choice between a weekend in Rome and a weekend in Paris, all expenses paid. The rate of preferring one option (say, Rome) goes up when a third option is introduced that would be identical to it were it not for one added negative feature (you have to pay for your morning coffee). Since no one prefers the Rome-minus-coffee option over the Rome-with-coffee option, the former is dominated and should be ignored. The rate at which people prefer a paid trip to Rome should depend only on how it compares with a paid trip to Paris.
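The probabilistic point in the drug example can be made concrete. A minimal sketch, assuming (purely for illustration) that each remaining drug works independently and with the same hypothetical probability p:

```python
# Why abandoning the choice between two untested drugs is costly:
# with two candidate drugs, the chance that at least one is effective
# exceeds the chance with a single drug.
# Assumption (for illustration only): each drug works independently
# with the same probability p.

def p_at_least_one_works(p: float, n_drugs: int) -> float:
    """Probability that at least one of n independent drugs is effective."""
    return 1 - (1 - p) ** n_drugs

p = 0.4  # hypothetical per-drug efficacy probability
one_drug = p_at_least_one_works(p, 1)   # 0.40
two_drugs = p_at_least_one_works(p, 2)  # 1 - 0.6**2 = 0.64
print(f"one drug: {one_drug:.2f}, two drugs: {two_drugs:.2f}")
```

For any p strictly between 0 and 1, the two-drug probability is strictly higher, which is why pulling the patient back is harder to justify when two drugs remain untested than when one does.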
Before considering how illusory these effects are, take another look at the visuals. You “see” them. That is what vision – and hence its illusions – is all about. A single person and a single trial are enough to demonstrate the table-top illusion or the Pisa illusion. In contrast, the default effect, the choice-cost effect (the drugs), and the asymmetric dominance effect emerge only when two scenarios are created and compared – by the experimenters. Even if these effects are demonstrated as a preference reversal within a single individual (which they typically are not), a comparison of at least two judgments is necessary. If these judgments differ, one may speak of irrationality in the sense of incoherence (i.e., the person contradicts herself), but it cannot be said that any one of these judgments, taken on its own, is wrong. Visual illusions, however, are all about inaccuracy. We see things that are not there, or we see them with a distortion that can be revealed with the help of a tape measure.
The visual illusions presented to support the metaphor depend on direct comparisons. There are two table tops and two pictures of Pisa. Take one away and the illusion goes away. Understanding the role of comparison does not eliminate the illusion. In the decision effects, the role of comparison is more complex. The default and choice-cost effects are obtained among respondents who do not make comparisons; they respond to only one of two possible scenarios. The comparisons are made by the researchers. If there is an illusion, it may reside in the investigators who believe that individuals must have stable preferences. The asymmetric dominance effect does involve a comparison, but arguably the wrong one (with a dominated alternative). Here, however, a sane respondent will ignore the dominated option once she comprehends the whole pattern.
Ariely fails to mention that visual illusions reveal the ingenuity of the visual system. Helmholtz (1866) famously argued that the visual system constructs representations of reality by making unconscious inferences. It has to do so when the available input is insufficient, ambiguous, or indeterminate. The perception of depth, for example, must be created from the two-dimensional stimulus input recorded on the retina. Perception requires experience and expectation. Because perception is a hypothesis or a bet of what reality is like, it is vulnerable to manipulation. The operation of an intelligent inference can only be seen when a seemingly unintelligent result is provoked (Gregory, 1997).
The visual-illusion metaphor may not be so inapt after all. Many judgments and decisions we arrive at with the use of heuristics are good enough, as KT noted long ago and as others, notably Gerd Gigerenzer and his collaborators, have shown empirically (e.g., Todd & Gigerenzer, 2007); that is why the systematic errors that remain reveal the heuristics. Going with the default saves energy and capitalizes on the expertise of others (and hence it is exploitable). The choice-cost effect minimizes, well, costs – at least in the short term. The asymmetric dominance effect shows that most judgments (like perceptions) require some context for comparison (which means that the wrong comparisons can occur). The difference is that vision scientists know they cannot take away the errors without destroying the adaptive system that produces them, whereas decision scientists illusorily believe they can dump the bath water and save the baby. Now that is a case of overconfidence, another trademark decision fallacy.
Regarding Pisa: What did the right tower say to the left tower? Barcollo ma non mollo. I may be leaning, but I am not falling.
Gregory, R. (1997). Knowledge in perception and illusion. Philosophical Transactions of the Royal Society of London B, 352, 1121-1128.
Helmholtz, H. von (1866). Concerning the perceptions in general. In: Treatise on physiological optics (Vol. III, 3rd edn, translated by J. P. C. Southall 1925, Opt. Soc. Am. Section 26, reprinted New York: Dover, 1962).
Huber, J., Payne, J. W., & Puto, C. (1982). Adding asymmetrically dominated alternatives: violations of regularity and the similarity hypothesis. Journal of Consumer Research, 9, 90-98.
Johnson, E., & Goldstein, D. (2003). Do defaults save lives? Science, 302, 1338-1339.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341-350.
Todd, P. M., & Gigerenzer, G. (2007). Environments that make us smart: Ecological rationality. Current Directions in Psychological Science, 16, 167-171.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.