I am troubled by the way some professionals take a dismissive, even contemptuous stance regarding experts.

Experts have skills that go far beyond anything the rest of us can do. They see things that are invisible to others. They make connections and inferences we’d never think of. They spot problems we’d miss until it was too late. Much of my career revolves around studying experts, trying to learn some of their secrets. 

Certainly, experts in a field are never perfect. They can be overconfident. They can be mistaken. So it is reasonable to take a skeptical view of experts, especially when the experts are self-proclaimed, like pundits on television news shows. A healthy skepticism invites inquiry into how good an expert is and what it takes to become an expert.

What disturbs me is an attitude that goes beyond healthy skepticism into knee-jerk contempt — that the experts in a field, any field, shouldn’t be taken seriously.

I first encountered signs of this contemptuous attitude when I attended conferences on judgment and decision making. Researchers in the Heuristics and Biases tradition gleefully reported experiments showing that even the experts fell prey to the biases. In 1971, Tversky and Kahneman reported that expert statisticians made poor choices when they followed their intuition about generalizing from small samples. McNeil et al. (1982) reported that experienced physicians were just as susceptible to framing effects about how to treat lung cancer as graduate students and ambulatory patients. Even the experts were inherently biased in their judgments. The lesson was: you can’t trust experts.

The Judgment and Decision Making field places special importance on the work of Paul Meehl (1954), who reviewed a number of studies showing that linear statistical models matched or exceeded the clinical judgments of experts, suggesting that we’d be better off replacing the judgments of the experts with those of the statistical models. (What doesn’t get much attention is that the factors loaded into the linear statistical models came from the experts themselves; the primary benefit of the statistics was to increase consistency.)
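To make that parenthetical point concrete, here is a minimal sketch, in Python, of what such a linear model amounts to. The factor names and weights are invented for illustration only; the point is that the experts supply the factors, while the formula supplies the consistency, since identical cases always receive identical scores.

```python
# A hypothetical illustration of Meehl-style statistical prediction.
# The factors (and their weights) come from the experts; the model's
# contribution is consistency: the same case always gets the same score.

# Invented factors and weights, for illustration only.
EXPERT_FACTORS = {
    "test_score": 0.5,       # variable nominated by the clinicians
    "prior_episodes": -0.3,  # variable nominated by the clinicians
    "age": 0.1,              # variable nominated by the clinicians
}

def linear_prediction(case):
    """Combine the expert-chosen factors with fixed weights."""
    return sum(weight * case[factor] for factor, weight in EXPERT_FACTORS.items())

# Identical cases always receive identical scores -- unlike unaided
# clinical judgment, which can vary with fatigue, mood, or recent cases.
patient = {"test_score": 1.2, "prior_episodes": 2.0, "age": 0.4}
print(linear_prediction(patient))  # about 0.04, every time
```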

An article by Kahneman and Klein (2009) put the matter bluntly: “The basic stance of Heuristics and Biases researchers, as they consider experts, is one of skepticism. They are trained to look for opportunities to compare expert performance with performance by formal models or rules and to expect that experts will do poorly in such comparisons.” (p. 518)

So I have been noticing this contemptuous attitude about experts for many years, but then a few months ago something happened that truly alarmed me.

A colleague of mine, Joseph Borders, was approached by a manager in a very large petrochemical company about setting up a cognitive skills training program for the panel operators who control massive units within a manufacturing plant. These panel operators work under tremendous stress. If they unnecessarily shut down a plant, the costs of missed production can run into millions of dollars. But if they fail to shut down a malfunctioning plant, they can trigger an explosion that has even greater consequences in terms of dollars and lives. Joey and I could see why the plant manager would want to build expertise in the panel operators.

However, the project never came off. Months later, the manager sheepishly explained to us that the plan to build the expertise of panel operators had been blocked by a higher-up who argued that the plant didn’t need its panel operators to make better decisions because the operators were hopelessly biased. Instead, he intended to take the decision making out of their hands and rely on some sort of Artificial Intelligence.

Obviously, I was stunned by this explanation. Not only was the executive’s faith in Artificial Intelligence misplaced (the tacit knowledge needed to spot subtle cues can take years to develop), but the executive’s distrust of the panel operators seemed like a very dangerous attitude. And if petrochemical plant executives are now acting on their fears of biases in their panel operators, that suggests how far the campaign to discredit experts has come.

Where does this fear of experts come from? Largely from the Heuristics and Biases community, and the studies showing that experts demonstrate the same kinds of biases as novices. These findings have damaged the reputation of experts.

Of course, the situation may not be as grim as the skeptics proclaim. First, the effect of judgment biases may be overstated. Several studies have found that judgment and decision biases become weaker or disappear when people are given naturalistic tasks rather than artificial ones. Second, the biases stem from our use of heuristics, and heuristics are very useful. The Heuristics and Biases community has performed little or no research on the benefits of using heuristics; these benefits must far outweigh the drawbacks. Third, the people who are most prone to use heuristics and commit biases, and who violate the precepts of Bayesian statistics, do very well in life. Berg and Gigerenzer (2010) reported that such people earned more money and held more accurate beliefs compared to people who adhered to rational choice strategies.

So where does this leave us? There are a number of seemingly compelling reasons why people want to dismiss expertise. I don’t think these reasons stand up well to scrutiny, but that doesn’t matter if they aren’t scrutinized. It doesn’t matter if the only message that comes through is that we have to take decisions out of the hands of experts.

Therefore, I think we need to be more energetic in conveying a different message: that expertise matters. We need to conduct more research and collect more evidence demonstrating what experts are able to accomplish. An example is the work of Jim Staszewski (2008), a professor at Carnegie Mellon University. The U.S. Army had spent $38 million developing improved mine detectors, but when these were tested they provided no advantage over the previous model; both had detection rates of about 20%. Staszewski and his colleagues located two Army engineers who had mastered the new equipment. When tested, these experts achieved dramatic results: detection rates of over 90%. The research team then constructed a course to teach new Army engineers how to use the new detectors effectively. That’s what experts can buy you.

Kahneman and Klein (2009) identified the conditions needed for people to gain intuitive expertise: a reasonably well-structured (as opposed to chaotic) environment, and the opportunity for meaningful feedback on judgments and decisions. We concluded that, in Kahneman’s words, “a psychology of professional judgment that ignores intuitive skills is seriously blinkered.” (p. 525)

Phil Tetlock illustrates the type of transition that can turn the tide. Tetlock (2005) reported the results of a study of the forecasting accuracy of leading experts and pundits, given clear prediction targets (e.g., “Should we expect in the next ten years defense spending as a percentage of government expenditure to rise, fall, or stay the same?”). The results were dismal — not much better than would be achieved by a chimp throwing darts. Tetlock concluded that “Humanity barely bests the chimp.” (p. 51). Naturally, the expertise skeptics were delighted.

However, ten years later Tetlock was part of a research team led by Barbara Mellers that attempted to develop forecasting expertise. And they succeeded, as described in Tetlock and Gardner’s (2015) book Superforecasting. Tetlock showed that amateurs, not part of any government agency, were able to outperform the professional forecasters and win a forecasting championship. These superforecasters weren’t just lucky. They sustained their high levels of forecasting accuracy over several years. Sure, 30% of the superforecasters dropped out of the top ranks in the sample, but 70% stayed at the top. Their performance stemmed from research, analysis, self-criticism, and gathering the perspectives of others. They worked hard to develop and maintain their level of expertise and they succeeded magnificently.

In the first Tetlock project, experts weren’t much better than chimps. In the second project, they were champs. Tetlock’s appreciation of experts changed as he worked with them and watched them in action. His transition should inspire others to shake off their biases about experts and take expertise more seriously.
