
Verified by Psychology Today

Why Experience Isn’t Always the Best Teacher

Beliefs developed over time can prompt errors in crime labs and other arenas.

Key points

  • Forensic scientists, security agents, and doctors often make visual judgments of whether a certain outcome is present.
  • Prevalence effects occur when experience teaches us that a given outcome is rare, which makes us poorer at noticing when it happens.
  • Prevalence also affects judgments of forensic evidence in ways that can produce miscarriages of justice.
  • Crime labs can use blind proficiency testing to periodically “retrain” forensic analysts and thus reduce mistakes.

Many jobs require people to make frequent and important visual judgments. Take airport security officers, for example. On any given day, they inspect thousands of suitcases through an X-ray machine, looking for weapons or other dangerous contraband. While the vast majority of luggage does not contain such items, failing to notice a weapon can have devastating consequences.

Source: Wikimedia Commons / Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)

Surely these trained professionals are diligent, and research suggests that they are quite skilled at detecting weapons in controlled settings. But in 2017, the Department of Homeland Security conducted undercover tests in which actors tried to sneak weapons into real-world airports, and TSA agents failed to detect these weapons over 70 percent of the time. Why might these mistakes happen?

What Are Prevalence Effects?

Performance on a task generally improves with experience. In cognitive psychology, however, the phenomenon of prevalence effects shows how experience can backfire and create errors like these. In essence, if we learn through experience that a given outcome is rare, we unconsciously become complacent when looking for that outcome, which makes us less likely to notice it when it does happen.

As our hypothetical airport security officer learns over time that suitcases seldom contain weapons, they come to approach any given suitcase with the expectation that it will not contain a weapon, which then impairs their ability to detect a weapon when one is actually there.

Prevalence Effects in Forensic Science

My colleagues and I recently found that this phenomenon can affect forensic evidence judgments. In our first study, each participant viewed 100 pairs of fingerprints one at a time. For each pair, they decided whether the two fingerprints came from the same person (“matched”) and then learned whether their answer was correct before viewing the next pair. But here’s the twist:

While some participants saw equal numbers of matching and non-matching pairs, others saw 90 pairs that actually matched (and only 10 that didn’t), and another group saw only 10 pairs that matched (and 90 that didn’t). Participants in the latter two groups would learn over time that the pairs usually did (or usually didn’t) match.

Source: simon jhuan/Shutterstock

As we expected, the ratio of matching to non-matching pairs affected the types of mistakes that participants made. People who saw mostly matching pairs grew more likely to misjudge non-matches as matches (errors that wrongly implicate innocent criminal suspects). Conversely, people who saw mostly non-matching pairs grew more likely to misjudge matches as non-matches (errors that might allow criminal offenders to remain free to re-offend).

Other studies have found similar results among people screening for fake IDs, radiologists searching for cancerous tumors, and, sure enough, airport security officers inspecting luggage for weapons. In each case, stimuli that appeared less frequently were more likely to go undetected.

Can We Avoid Prevalence Effects?

Unfortunately, prevalence effects are also difficult to correct. In our second study, published just this week, we first replicated the above findings with a sample of forensic science trainees. We also tested whether requiring individuals to compare the fingerprints in a more structured and nuanced way would alleviate the problem, but it did not. Previous studies had tried other approaches to correcting prevalence effects, such as forewarning people about the effect, forcing them to work more slowly, or giving them the opportunity to correct their answers, and these were likewise ineffective.

If we can’t correct the effect, perhaps we can correct the prevalence; that is, we can avoid the effect by changing how often the outcomes occur in the first place. In some domains, this is impossible; for example, doctors cannot control how many of their patients’ X-rays show cancerous tumors. But in other domains, like the undercover airport tests described above, we can artificially and covertly increase the frequency of certain outcomes, like weapons.

In forensic laboratories, agencies such as the National Academy of Sciences and the President’s Council of Advisors on Science and Technology have similarly advocated for the use of blind proficiency testing, wherein scientists are periodically and unknowingly asked to test samples that do not actually come from a crime scene but are instead covert tests of their ability.

While many labs continue to resist blind proficiency testing for various reasons, others have published firsthand accounts suggesting that the practice is feasible and reaps substantial benefits. Our research suggests that blind proficiency testing can also combat errors due to prevalence effects; for example, if examiners in a given lab judge most evidence as “matching,” supervisors can introduce more “non-matches” as blind proficiency tests to balance out the ratio. Baggage screening studies had already found that prevalence effects decreased when participants were periodically “retrained”; that is, when they were shown bursts of suitcases that mostly contained weapons and given feedback on their performance.

They say that “experience is the best teacher,” and that is often true. But research on the prevalence effect shows how experience can be a double-edged sword. In domains such as forensic science, even infrequent errors in visual tasks can have tremendous consequences, yet we also have some control over how exactly those tasks are structured. Thus, rather than relying on experience per se, we may be able to create an even better teacher by optimizing the type of experience that professionals receive.