In 2009, 228 people died in the crash of Air France 447 when the plane went into a high-altitude stall over the tropical Atlantic. The flight recorders recovered after the crash revealed a cause that expert pilots found inexplicable: When a momentary equipment failure (the freezing of a speed sensor) caused the autopilot to disengage, the plane’s less experienced copilot, Pierre-Cédric Bonin, abruptly pitched the nose of the plane upward, which, at high altitude, caused the plane to lose lift and then stall. The correct response would have been to make no change to the plane’s controls or, once the plane did stall, to pitch the nose downward, not upward. For the next four minutes, as the plane descended toward the sea, the cockpit recorder captured the crew urgently trying to comprehend what was happening and how to remedy it. Only in the final seconds did the copilot reveal that he had been holding the stick that controlled the plane’s pitch in the back position, rather than the forward position, the whole time. The pilot and the rest of the crew had apparently been unaware of this.

If we want to understand why tragedies like this one can occur, and what we can do to prevent them, a starting point is to understand how people make decisions under risk and how these decisions can go awry. Today most psychologists agree that our brains make choices using two cognitive systems. System 1 governs automated and instinctive thoughts (such as those that take place when we jump away the instant we see a snake), and System 2 governs more controlled thoughts (such as those that allow us to decide what the best cure might be if the snake bites us).[i] In most contexts, these two systems, working in concert, allow us to navigate our day-to-day lives with ease using simple intuitions and rules of thumb, freeing up mental resources for more taxing deliberative calculations if needed.

How does this explain why a trained pilot might make such a high-stakes error? A plausible story is this: An essential part of pilot training is to teach reflexive (System 1) responses to potentially dangerous situations. Among these, perhaps the most fundamental is the pull-up maneuver: When a pilot hears an alarm indicating that the plane is descending too fast upon landing (or approaching other terrain), he or she is trained not to think but to react—that is, to allow the decision to be made by System 1, not System 2. For commercial airliners, this means advancing the throttles to pick up speed, as if on takeoff, and only then pitching the nose of the aircraft upward. When Air France 447 was flying in the middle of the night with a (relatively) inexperienced copilot at the controls, the confusion that followed the loss of the airspeed indication and the automatic disengagement of the autopilot seems to have triggered this automatic response: The copilot pulled the nose up. He was making decisions using System 1 instincts rather than System 2 deliberations.

Of course, after this happened, there was still time for him and the two other crew members to use their System 2 abilities to diagnose the situation and enact a simple remedy—pitch the plane downward—but this never happened. The crew had no experience solving the problem of an airworthy plane that suddenly starts to descend with unreliable indicator gauges, so they had to construct an answer on the fly. But there were too many hypotheses to trace down and too little time to do it. The pilots overlooked the simplest of all explanations: The nose was pointed up.

When protective decisions go awry it is not because we lack the innate ability to make good decisions, but rather because these abilities are not well designed for dealing with rare threats for which we have little stored knowledge. 

Overcoming Our Innate Engineering: The Six Core Biases

Is there a remedy? At first blush, the prospects would seem bleak. The way we think, after all, is something that has evolved over millions of years, from a time when powers of deep deliberation were less important than reflexive reaction and instinctive anticipation. Our basic cognitive wiring is thus not something we can hope to change.

Still, there may be a way out. While preparedness errors may have many origins, research on disasters over the years suggests that most can be traced to the harmful effects of six systematic biases that reflect flaws in how we instinctively perceive risk (System 1 errors) and how we use these perceptions when making decisions (System 2 errors).

Here are the six reasons that individuals, communities, and institutions often underinvest in protection against low-probability, high-consequence events:

1. Myopia: a tendency to focus on overly short future time horizons when appraising immediate costs and the potential benefits of protective investments;

2. Amnesia: a tendency to forget too quickly the lessons of past disasters;

3. Optimism: a tendency to underestimate the likelihood that losses will occur from future hazards;

4. Inertia: a tendency to maintain the status quo or adopt a default option when there is uncertainty about the potential benefits of investing in alternative protective measures;

5. Simplification: a tendency to selectively attend to only a subset of the relevant factors to consider when making choices involving risk; and

6. Herding: a tendency to base choices on the observed actions of others.

We need to recognize that when we make decisions, these biases are part of our cognitive DNA. While we may not be able to alter our cognitive wiring, we may be able to improve preparedness by recognizing these specific biases and designing strategies that anticipate them.

Adapted from The Ostrich Paradox: Why We Underprepare for Disasters.
