
Mis-Diagnosing the Causes of Diagnostic Errors in Healthcare

We can’t fix a problem if we don’t understand it.

Key points

  • Diagnostic errors contribute to an estimated 10 percent of patient deaths per year in U.S. hospitals.
  • The standard psychological explanations for diagnostic errors are not particularly useful.
  • These standard explanations describe reasons for diagnostic failures that are only clear with hindsight.
  • One source of diagnostic errors that does seem important is fixation errors.

In 2015, the Institute of Medicine (IOM) issued a 444-page report, “Improving Diagnosis in Health Care,” to bring attention to an important problem: the frequency and severity of mistaken diagnoses in health care settings. This report estimated that 5 percent of adults in the U.S. seeking outpatient care experience some sort of diagnostic error and that diagnostic errors contribute to roughly 10 percent of patient deaths. “Most people will experience at least one diagnostic error in their lifetime, sometimes with devastating consequences,” the report states (p. 1).

What are the causes of diagnostic errors? The IOM report draws on the work of leading healthcare researchers to identify some obvious culprits: high workload, time pressure, fatigue, and information overload.

This essay is about the psychological causes, not the situational factors listed above. Why would a physician, in the absence of time pressure, heavy workload, fatigue, or information overload, still make a diagnostic error?

The IOM report synthesized the scientific literature and drew on a large group of highly respected consultants to offer a dozen causes of diagnostic errors. At first glance, these explanations all seem important, but closer examination shows otherwise. Ultimately, we argue, the explanations are not particularly helpful. If we can't be clear about the causes of diagnostic errors, we have little hope of reducing them.

Here are the reasons cited in the IOM report, along with rebuttals:

  • Premature closure. But physicians need to reach some closure in order to treat the patient. "Premature closure" is a hindsight judgment: you only know closure was premature if you later discover the diagnosis was wrong, and physicians cannot detect premature closure in real time.
  • Failed visual pattern recognition. Again, you only know it was a failure if you look back with hindsight, knowing that the physician had erred.
  • Lack of knowledge. Same issue. Physicians cannot be expected to know everything, so the lack of knowledge problem only pops up after you discover the mistake.
  • Failure to consider competing diagnoses. There is some merit to this item. Physicians are trained to consider alternatives, the "differential diagnosis" approach. Some conditions, such as chest pain, do have a standard differential, but others do not; you only know you left something out if the original diagnosis was wrong. At any rate, this advice adds little, because differential diagnosis is already well established in the medical community.
  • Overweighing a hypothesis. How much weight should a hypothesis be given? The answer is only clear in retrospect, after a mistake has been made.
  • Incomplete history. Same issue. You only know the history was incomplete if the subsequent, accurate diagnosis reveals the missing aspect that the history failed to unearth.
  • Failure to distinguish key symptoms from misleading ones. Same problem with hindsight. You only know the key symptoms in retrospect.
  • Ignoring important cues. And again, “important” only becomes clear in hindsight.
  • Hypothesis doesn’t fit the facts, and faulty synthesis. These are only determined in hindsight.
  • Faulty context awareness. Obviously, if the physician made a mistake.
  • Misjudging salience. Obviously, if the physician made a mistake.
  • Faulty perception. Obviously, with hindsight.

Therefore, we argue that these “explanations” don’t explain anything and don’t offer any useful guidance for physicians. Imagine that we told physicians: “Don’t reach closure before you identify the correct diagnosis; only make accurate pattern judgments; rely only on the relevant knowledge for a case; only consider the correct alternative; weigh the hypotheses appropriately; take histories that capture the critical information; don’t get confused by misleading symptoms; don’t ignore important cues; only entertain accurate hypotheses; don’t misjudge salience; and make only accurate perceptions.”

How could any of this advice be useful?

The IOM report did identify one factor that does seem important: being captured by an initial mistaken diagnosis and failing to revise it. But even there, the report, reflecting the available literature, went off in a direction that seems unproductive: it highlights the role of cognitive bias in diagnostic error, particularly confirmation bias.

However, an earlier essay on this site, "The Curious Case of Confirmation Bias," described how decision researchers have backed away from the idea of confirmation bias because there are conditions under which a confirmation strategy is valuable. Further, efforts to reduce confirmation bias have not met with great success, which is fortunate given the strategy's value as a heuristic. That essay suggested that we instead treat fixation errors (e.g., De Keyser and Woods, 1990) as the problem, rather than confirmation bias.

What Is the Problem With Confirmation Bias?

Researchers warning against confirmation bias may discourage physicians from speculating at the outset, even though rapid speculation has great value in treating patients with life-threatening conditions. These advocates also discourage physicians from seeking confirming evidence, even though that tactic can be valuable. They concern themselves only with eliminating errors, ignoring the ability of skilled physicians to make rapid, accurate diagnoses, and the importance of that ability.

In contrast, fixation errors do not signify that physicians are defective thinkers, and they can be reduced. A previous essay, "Escaping from Fixation," offered suggestions for overcoming fixation errors. Klein and Jarosz (2011) found that people can break free of fixation (as opposed to being trapped by confirmation bias). A separate effort involved workshops for the petrochemical industry, presenting methods for overcoming fixation errors.

Conclusion

We have reviewed the advice given to physicians for reducing diagnostic errors, and we believe our account will be useful for the healthcare community. We have suggested that the concept of fixation errors will prove more productive than the concept of confirmation bias.

This essay is a collaboration between Gary Klein and Rollin J. (Terry) Fairbanks, M.D., Chief Quality and Safety Officer at MedStar Health and Professor of Emergency Medicine at Georgetown University.

References

Balogh, E., Miller, B.T., & Ball, J. (Eds.). (2015). Improving diagnosis in health care. National Academies of Sciences, Engineering, and Medicine. Washington, DC: The National Academies Press.

De Keyser, V., & Woods, D.D. (1990). Fixation errors: Failures to revise situation assessment in dynamic and risky systems. In A.G. Colombo and A. Saiz de Bustamante (Eds.) System reliability assessment (pp. 231-251). Drodrecht, The Netherlands: Kluwer Academic.

Klein G, & Jarosz A. (2011). A naturalistic study of insight. Journal of Cognitive Engineering and Decision Making, 5, 335-351.
