How Can We Identify the Experts?

Seven criteria for deciding who is really credible.

Posted Sep 01, 2018

We want pragmatic guidelines for deciding which, if any, purported experts to listen to when making a difficult and important decision. How can we know who is really credible?

Bottom line: We cannot know for sure. There are no iron-clad criteria. 

However, there are soft criteria, indicators we can pay attention to. I have identified seven so far, drawing on papers such as Crispen and Hoffman (2016) and Shanteau (2015), and on suggestions by Danny Kahneman and Robert Hoffman. Even though none of these criteria are foolproof, all of them seem useful and relevant:

(a) Successful performance—measurable track record of making good decisions in the past. (But with a large enough sample, some people will do very well just by luck, such as stock-pickers who have called the market direction accurately for the past 10 years; see the simulation sketched after this list.)

(b) Peer respect. (But peer ratings can be contaminated by a person’s confident bearing or fluent articulation of reasons for choices.) 

(c) Career—number of years performing the task. (But some 10-year veterans have one year of experience repeated 10 times and, even worse, some vocations do not provide any opportunity for meaningful feedback.) 

(d) Quality of tacit knowledge such as mental models. (But some experts may be less articulate because tacit knowledge is by definition hard to articulate.) 

(e) Reliability. (Reliability is necessary but not sufficient. A watch that is consistently one hour slow will be highly reliable but completely inaccurate.)

(f) Credentials—licensing or certification of achieving professional standards. (But credentials just signify a minimal level of competence, not the achievement of expertise.)

(g) Reflection. When I ask "What was the last mistake you made?" most credible experts immediately describe a recent blunder that has been eating at them. In contrast, journeymen posing as experts typically say they can't think of any; they seem sincere, but of course they may be faking. And some actual experts, when asked about recent mistakes, may for all kinds of reasons choose not to share any, even ones they have been ruminating about. So this criterion of reflection and candor is no more foolproof than the others.
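To see how easily luck can masquerade as a track record (the caveat in criterion a), here is a minimal simulation, purely illustrative and not from any study cited here: 1,024 hypothetical stock-pickers each call the market's direction by coin flip, once a year for 10 years, and on average one of them compiles a perfect record.

```python
import random

random.seed(42)  # for a reproducible illustration

n_pickers = 1024  # hypothetical population of stock-pickers
n_years = 10      # each makes one up/down call per year

# Count pickers whose pure-luck calls come out right all 10 years.
perfect_records = sum(
    all(random.random() < 0.5 for _ in range(n_years))
    for _ in range(n_pickers)
)

print(f"Perfect 10-year records by luck alone: {perfect_records}")
# Expected count: n_pickers * 0.5**n_years = 1024 * (1/1024) = 1.0
print(f"Expected by chance: {n_pickers * 0.5 ** n_years:.1f}")
```

So a flawless 10-year record, taken on its own, cannot distinguish a genuine expert from the luckiest member of a large crowd.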

Therefore, even if we cannot know for sure, we can still make judgments about who to consider an expert. Look at criterion (d) above. Tacit knowledge includes perceptual skills. Consider the commentators at Olympic events like diving. They see things we don't see until they show us in slow motion. I would call them experts.

Pattern recognition is another aspect of tacit knowledge. The firefighters I study use pattern recognition to size up situations that seem bewildering to me, and subsequent events confirm their judgments. I consider them experts.

Anticipation is another aspect of tacit knowledge. At a fire, I am looking at the size and intensity of the flames and speculating about what additional equipment will be needed. But the experienced commanders are thinking about where to stage this equipment: where to place each truck so that it won't get in the way of other trucks or run over hoses. They are way ahead of me. I consider them experts.

Mental models are another type of tacit knowledge. The petrochemical plant operators I have just been studying can describe the units in the plant, the ways they are connected, and how they work, but also how they don't work: how they are likely to break down, how a subtle event such as the failure of a sensor will affect performance, how it can be detected, and how to work around it. They are aware of causes and causal interactions I cannot even guess at. I consider them experts.

Go back to anticipatory thinking. During the Cuban Missile Crisis in 1962, some members of John F. Kennedy's team wanted to launch a surprise attack against Cuba. Others, the real experts, pointed out the likelihood that the USSR would retaliate with a strike against West Berlin, which would probably trigger a nuclear war. They weren't offering predictions; they were using their mental models to trace out geopolitical implications. I consider this second group to be experts. They saw things and implications that others did not.

My point here is that criterion (d) offers us a great deal of leverage in assessing who is an expert. It gives us a relative measure: seeing things that others do not, and going on record with observations that turn out to be accurate. That's what experts do, and it is what marks them as experts. Klein and Hoffman (1993) provide additional discussion of how experts can see the invisible, the tacit knowledge forming the core of criterion (d).

This relative advantage, experts over the rest of us, is a different perspective than adopting an absolute criterion of how close are the purported experts to the actual answer. When people demonstrate that they see important things that I haven’t noticed, that’s when I decide to rely on their expertise. I don’t expect that they will be perfect. But I appreciate that they are way better than I am, and better than the others in the room.

Danny Kahneman (personal communication) has suggested an addition to criterion (d): saying original things that aren't silly. This is another way that experts reveal themselves through their comments and observations.

A minor point: I do not see a sharp threshold or stage at which someone becomes an expert. I see expertise as a continuum, a relative advantage over others.

Are there experts in every field of human enterprise? I don't think so. There are fields that are highly proceduralized, where people are considered experts if they know which page of the procedures manual to turn to. I don't consider them experts, and I don't believe that there are experts in every field. In fact, given high turnover, I am seeing lots of fields that have no experts.

What about astrologers? In pre-scientific eras, astrologers developed tacit knowledge, were highly articulate, and were regarded as experts. In a pre-scientific world, people couldn't apply most of the criteria I listed at the beginning of this essay: we can rule out (a) successful performance, (d) quality of tacit knowledge, (e) reliability, and (f) professional certification. That leaves (b) peer respect and (c) career. So, yes, in a pre-scientific world, a highly articulate astrologer passes muster, just as a stock-picker or TV political pundit does even today.

Finally, let's examine criterion (e), reliability. This criterion is necessary but not sufficient. It is necessary because an expert's credibility depends on generating the same recommendations given the same inputs.

Some people would want to expand the notion of reliability to cover reliability between experts, but Jim Shanteau (2015), one of the leading experts on expertise, has pointed out that there is value in having different experts express varying perspectives. We often use experts not as prognosticators but as consultants, and we can benefit from their divergent viewpoints.

So we are really looking for within-expert reliability. Shanteau has shown that within-expert reliability varies by domain. For weather forecasters making short-term forecasts, within-expert reliability is .98. For other domains it is lower but still sizeable, and, I believe, still much higher than for novices: r = 0.62 for grain inspectors and r = 0.40 for clinical psychologists.
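To make concrete what these within-expert figures measure, here is a sketch with made-up ratings (the numbers are hypothetical, not Shanteau's data): one expert judges the same ten cases on two occasions, and we correlate the two sets of judgments.

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical ratings: one inspector grades the same ten grain samples
# (1-10 quality scale) on two separate occasions.
time_1 = [7, 4, 9, 6, 8, 3, 5, 7, 2, 6]
time_2 = [6, 5, 9, 6, 7, 4, 5, 8, 3, 6]

# Within-expert reliability: how well the expert's judgments agree
# with their own earlier judgments of the same inputs.
r = correlation(time_1, time_2)
print(f"Within-expert reliability: r = {r:.2f}")
```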

However, I don't think we can just look at statistical reliability. I think we also have to examine process reliability. Consider test-retest reliability. You administer a 10-item test at Time-1, and then administer the same test at Time-2. If I score 5/10 at Time-1 and 5/10 at Time-2, you would likely count that as indicating test-retest reliability. I got the same score both times.

But let's say I got the first 5 items right at Time-1 and the last 5 items right at Time-2. Even though I got the same 5/10 score both times, my performance was completely different. My result shows complete unreliability, not reliability. That's why I am arguing that we need a second criterion, a process criterion, in addition to the statistical one.
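Here is that same point as a sketch, with hypothetical item-level responses: the two test administrations produce identical totals, yet a completely different set of items is answered correctly each time.

```python
# Hypothetical item-level results (1 = right, 0 = wrong) on a 10-item test.
time_1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # first 5 items right at Time-1
time_2 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]  # last 5 items right at Time-2

score_1, score_2 = sum(time_1), sum(time_2)
agreement = sum(a == b for a, b in zip(time_1, time_2)) / len(time_1)

print(f"Scores: {score_1}/10 and {score_2}/10")  # identical totals
print(f"Item-level agreement: {agreement:.0%}")   # 0% overlap

# Score-level (statistical) reliability looks perfect, but the item-level
# (process) view shows the two performances had nothing in common.
```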

Also, we need to be careful with a reliability criterion because we don’t want to encourage people to strive for reliability that is too high—which would encourage rigidity rather than the continuing exploration that is central to becoming an expert.

So, seven criteria. To be considered an expert, a person should meet at least one of them, and it is probably a good idea to expect the person to meet at least two or three. But we shouldn't just tick off the number of criteria met, because quality also matters, especially with criterion (d), the quality of a person's ideas. And we should also be on guard against cues that might fool us, such as the person's confident bearing.

We might try to become more skillful at identifying experts, making judgments and getting feedback, and reflecting on what we should have been noticing and what we should have been discounting. In that way, perhaps we can develop expertise at identifying experts. 

References

Crispen, P., & Hoffman, R. R. (2016). How many experts? IEEE Intelligent Systems, November/December, 56-62.

Klein, G. A., & Hoffman, R. R. (1993). Seeing the invisible: Perceptual/cognitive aspects of expertise. In M. Rabinowitz (Ed.), Cognitive science foundations of instruction (pp. 203-226). Mahwah, NJ: Lawrence Erlbaum Associates.

Shanteau, J. (2015). Why task domains (still) matter for understanding expertise. Journal of Applied Research in Memory and Cognition, 4, 169-175.