"Nothing in life is as important as you think it is when you are thinking about it."
~ Daniel Kahneman

Daniel ("Danny") Kahneman (Nobel Prize in Economics, 2002) has written an amazing book on the psychology of judgment and decision making. The book, Thinking, Fast and Slow, goes far beyond the psychology of judgment and decision making. It is a book about perception, memory, emotion, intuition, effortful thinking (cogitation), well-being, identity, and much more. It is at once a trade book and a review-cum-manifesto directed at academics.

There have been many reviews of this book, some of them hagiographic. My own - positive - review will appear in the American Journal of Psychology later this year. In this post, instead of writing another review, I will try to bring Kahneman to you in quotes; but not without a cautionary note. Quotes are by necessity removed from their context; only phrases designed as sound bites or aphorisms have no context to lose, and Kahneman's were not so designed. Separated from their context, quotes invite the risk of misinterpretation. Moreover, I will intersperse the quoted phrases with my own questions and reactions, which raises another set of risks. I can only assure you that I have chosen phrases that are highly available in memory and that I take to be representative of Kahneman's thinking, so I hope you will let your thinking about psychology be anchored by them. Kahneman's quotes appear in quotation marks, with page numbers in parentheses.

What is the book about?

"Much of the discussion in this book is about biases of intuition. However, the focus on error does not denigrate human intelligence any more than the attention to diseases in medical texts denies good health." (4)

What is the relation between bias and error?

"Systematic errors are known as biases." (3)

Can't we just check our own thinking and correct these errors?

"We are often confident even when we are wrong, and an objective observer is more likely to detect our errors than we are." (4)

This sounds like errors are a matter of self-deception if we can see them in others but not in ourselves. And shouldn't it be more important to us to avoid our own errors than to correct those of others? Have scientists always taken the view you describe?

"Social scientists in the 1970s broadly accepted two ideas about human nature. First, people are generally rational, and their thinking is normally sound. Second, emotions such as fear, affection, and hatred explain most of the occasions on which people depart from rationality. Our article [Science 1974] challenged both assumptions." (8)

Is thinking heuristically, that is, by intuition, all bad?

"Heuristics are quite useful, but sometimes lead to severe and systematic errors." (10)

In a word, how do heuristics work?

"This is the essence of intuitive heuristics: when faced with a difficult decision, we often answer an easier one instead, usually without noticing the substitution." (12)

Why do we like intuitive thinking?

[System 1; i.e., intuitive thinking] "operates automatically and quickly, with little or no effort and sense of voluntary control." (20)

And the slow thinking, the kind done by System 2?

[System 2] "allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration." (21)

These subjective experiences might be illusions, though. Is System 2 also being a bit self-deceptive?

"Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book." (21)

But System 1 is a flawed hero, right? So what are its flaws?

[It] "does not (cannot) allow for information it does not have." (85) "Jumping to conclusions on the basis of limited evidence is so important to an understanding of intuitive thinking." (86)

And that is where System 2 comes in?

[There are] "circumstances in which System 2 takes over, overruling the freewheeling impulses of the associations of System 1." (21)

How does it take over?

"When System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment." (21)

System 1 knows when it needs help?

"System 2 is mobilized to increased effort when it detects an error about to be made. [. . .] System 2 takes over when things get difficult, and it normally has the last word." (25)

So System 2 calls itself in. It puts out fires. How well does it succeed?

"System 2 is the only one that can follow rules, compare objects on several attributes, and make deliberate choices between options." (36)

That sounds great. Any problems?

"System 1 [. . .] cannot be turned off." (25) "Error can be prevented only by the enhanced monitoring and effortful activity of System 2." (29)

System 2 sounds like a fighter who has to deal with many setbacks.

"Continuous vigilance . . . is impractical." (29)

What are its goals?

"System 2 is in charge of self-control." (26)

Is System 1 then "The Self?"

"System 1 and System 2 . . . are fictitious characters . . . and there is no part of the brain that either of the systems would call home." (29)

If the two systems are mere metaphors, how are we to think of the scientific findings?

"Disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of the studies are true. More important, you must accept that they are true about you." (57)

Ok, ok. But there has been some criticism of your work, no?

"The net effect [. . .] was an increase in the visibility of our work to the general public, and a small dent in the credibility of our approach among scholars in the field." (165)

When do you really know that you can trust a research finding?

"The ultimate test of an explanation is whether it would have made the event predictable in advance." (200)

That sounds like a strict criterion, but don't we have to worry about hindsight bias?

"You know you made a theoretical advance when you no longer reconstruct why you failed for so long to see the obvious." (279)

Ok, so I won't worry about hindsight bias. You describe how System 1 constructs good stories on the fly. What's wrong with that?

"The most coherent stories are not necessarily the most probable, but they are plausible." (159)

You call the focus on case histories ‘the inside view,' as opposed to ‘the outside view,' which involves statistical information from a reference class. Is thinking by way of specific cases just a lack of statistical reasoning?

"The preference for the inside view has moral overtones." (249)

It sounds like System 1 wants to be moral, but achieves exactly the opposite - inasmuch as irrational decisions are also harmful. Can intuitive thinking also lead to some good?

"When action is needed, optimism, even of the mildly delusional variety, may be a good thing." (256)

Oh, great! Is this good for capitalism?

"The optimistic risk taking of entrepreneurs surely contributes to the economic dynamism of a capitalistic society, even if most risk takers end up disappointed." (259)

The foolish overconfidence of some is good for society as a whole and for those entrepreneurs who get lucky in business, but not for the many who try but fail. You ask yourself

"can overconfident optimism be overcome by training?" (264)

and reply "I am not optimistic." (264)

For those who like a capitalist economy, it is good that you are not. Now tell us about prospect theory.

"Prospect theory turned out to be the most significant work we ever did." (271)

In particular?

"The concept of loss aversion is certainly the most significant contribution of psychology to behavioral economics." (300)

You may not be optimistic, but you are not all that modest either. What's so new in prospect theory?

"We retained utility theory as a logic of rational choice but abandoned the idea that people are perfectly rational choosers." (314)

You also abandoned probability and talk about decision weights instead. The latter are not strictly proportional to the former. Why is that bad?

[It] "leads to inconsistencies and other disasters." (314)

And therefore it is . . .

"a clear violation of the coherence doctrine." (356)

Why is that disastrous?

"It is costly to be risk averse for gains and risk seeking for losses." (335)

Is the combination of these preferences costly or any one in particular?

"Many unfortunate human situations unfold [. . .] where people who face bad options take desperate gambles, accepting a high probability of making things worse in exchange for a small hope of avoiding a large loss. The thought of accepting the large sure loss is too painful, and the hope of complete relief is too enticing, to make the sensible decision that it is time to cut one's losses." (318-319)

Don't people notice that the probability of complete relief is very small?

"The actual probability is inconsequential; only possibility matters." (323)

You are not sanguine about human rationality.

"The ideal of logical consistency [. . .] is not achievable by our limited mind." (335)

That bad?

"The definition of rationality as coherence is impossibly restrictive; it demands adherence to rules of logic that a finite mind is not able to implement. Reasonable people cannot be rational by that definition." (411)

It is fair to say that your work has refuted the theory that cognitive illusions cannot occur.

"A theory that is worthy of the name asserts that certain events are impossible - they will not happen if the theory is true." (374)

Your theory, though, is descriptive and not normative. What does it take to refute a theory that is built up from observational data? There are phenomena you say prospect theory cannot explain, like regret, but failure to explain is not the same as disconfirmation. In the last part of your book, you disconfirm the idea of a unitary self by distinguishing empirically between an experiencing and a remembering self. Why is this distinction important?

"What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self." (381)

Does this scare you?

"We cannot fully trust our preferences to reflect our interests, even if they are based on personal experience." (384-385)

So who are you?

"Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me." (390)

Thank you for this conversation, both of you.

You are reading

One Among Many

Homo Dichotomus

Can we get from probability to decision?
