
Our Occasional Inability to Notice What’s in Front of Us

The invisible gorilla has reappeared.

Key points

  • Many writers present the classic 1999 invisible-gorilla effect as the norm, but most participants actually saw the gorilla.
  • A new study places this effect into a larger list of cases in which drivers and radiologists miss what’s in front of them.
  • It’s possible to raise the alarm about the consequences of such blindness without overstating the actual frequency of mistakes.

There are limits to humans’ visual attention. Some people, in some circumstances, can look at a scene and miss something right in front of them. This oversight can sometimes have serious consequences, such as when a driver doesn’t see a cyclist or a radiologist misses a tumor.

The award-winning study on this topic by Simons and Chabris (1999) led to a best-selling book called The Invisible Gorilla: How Our Intuitions Deceive Us (Chabris and Simons, 2009). The "invisible gorilla" refers to the 1999 result as a case of "inattentional blindness," in which some participants did not recall seeing a person in a gorilla suit walk right in front of them while they focused on counting basketball passes.

Exaggerated Reports

Although the book reported that “roughly half” of the participants did recall the gorilla, many writers and instructors have since exaggerated the effect. It’s common to read or hear that people “frequently” or “often” missed the gorilla, that they “tend[ed] not to notice,” or even that “most people” missed it.

In my book about biases (Stalder, 2018), I referenced this language and further noted that it was less than half, 43 percent, who missed the gorilla in a real-life viewing context. The original study had multiple conditions, only some of which represented real-life viewing. In the other conditions, the gorilla was "partially transparent" (to mimic an earlier procedure of superimposing different displays). When the gorilla was barely visible in this way, about 70 percent missed it. Only by combining the transparent and real-life conditions did the proportion who missed it reach roughly half. Even 43 percent was a striking result, but it was not the norm.

Numerous educational videos and reports share the real-life images rather than the barely visible transparent gorilla (to see the transparent condition, scroll to p. 1067 of the original article).

My book's general point regarded a common bias to overgeneralize (which underlies stereotyping). The exaggerated reporting of the gorilla effect was just one of multiple high-profile examples. My advice was to be wary of vague or overinclusive nonstatistical language like “frequently” or “most people” in reading about psychological effects.

An Updated Review: “Normal Blindness”

In recent years, public attention to the gorilla effect has died down. But a new study by Wolfe and colleagues (July 21, 2022) has brought the not-so-invisible gorilla back into the news (Carroll, 2022; Pearson, 2022). Wolfe and colleagues reviewed the gorilla effect and more recent results, proposed a single framework to cover all such effects, and coined a new name for our occasional oversight: “normal blindness.”

The authors wrote that "humans routinely miss important information that is 'right in front of our eyes,' from overlooking typos in a paper to failing to see a cyclist in an intersection." Although it's true that humans have visual limitations, terms like "normal" and "routinely" raise my earlier question about overgeneralizing.

Wolfe and colleagues first reviewed the classic invisible-gorilla study. They reported that "about 50 percent" missed the gorilla, again overlooking that the percentage was actually lower in real-life conditions. In another initial example, people looked for a T among a series of L's and missed the T 5 to 10 percent of the time, which hardly rises to a "routine" level.

More important than proofreading are cases of driver error or medical misdiagnosis. Wolfe and colleagues cited studies on driving accidents in which a cyclist was not seen in time. However, they did not provide the overall proportion of times that cyclists were missed (my limited look at the research suggests it’s a minority of the time). Some of those studies established that drivers who don’t expect to see a cyclist or are otherwise distracted are more likely not to see the cyclist. Makes sense.


Similarly, radiologists looking for one type of anomaly are more likely to miss another. Among multiple cited studies, Williams and colleagues (2021) reported that “when their attention was focused on searching for lung nodules, 66 percent of radiologists did not detect breast cancer and 30 percent did not detect lymphadenopathy.” These error rates dropped to 3 and 10 percent, respectively, when “searching for a broader range of abnormalities.”

Wolfe and colleagues also cited Wolfe's earlier work (2005), which established that the rarer the target item, the more often observers miss it. Overall, common items were missed less than 10 percent of the time, whereas the rarest items were missed about 35 percent of the time (with one condition exceeding 50 percent).


To their credit, Wolfe and colleagues (2022) acknowledged a recent study that suggested that part of why people don’t report having seen something right in front of them is a memory failure and not that they didn’t originally notice it (Robbins et al., 2019). Wolfe and colleagues also identified the random nature of such errors in concluding that “an item that one observer at one time misses will be seen at a different time or by a different observer.” Thus, a primary piece of advice was to seek “a second pair of eyes.”

In Sum

People have visual limitations. And when drivers’ or radiologists’ attentional mistakes result in fatalities, that is tragic. Identifying or reminding us about the conditions under which these oversights are more likely is thus an important part of Wolfe and colleagues’ work. It can save lives.

However, these goals can be achieved without overstating the typically-less-than-50-percent frequency of the errors (though I acknowledge subjectivity in terms like “frequently” and “routinely”). The classic invisible-gorilla study, particularly, is well known and still commonly cited in science classrooms. It seems important to highlight that this attentional mistake was not the norm while still emphasizing its dangers in certain contexts. The broader benefits of not overgeneralizing include reduced stereotyping, whether about particular groups or humans in general.


Linda Carroll, “Here's Why You Don't Always Notice Things That Are Right in Front of You,” Today, July 26, 2022,….

Christopher Chabris and Daniel Simons, The Invisible Gorilla: How Our Intuitions Deceive Us (New York: Broadway Paperbacks, 2009).

Dave Pearson, “Should Patients and Referrers Worry That Radiologists Have ‘Normal Blindness’ Just Like Everyone Else?”, Radiology Business, July 27, 2022,….

Chloe J. Robbins et al., "The 'Saw but Forgot' Error: A Role for Short-Term Memory Failures in Understanding Junction Crashes?", PLoS One 14 (2019).

Daniel J. Simons and Christopher F. Chabris, “Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events,” Perception 28 (1999): 1059–74.

Daniel R. Stalder, The Power of Context: How to Manage Our Bias and Improve Our Understanding of Others (Amherst, NY: Prometheus Books, 2018).

Lauren Williams et al., "The Invisible Breast Cancer: Experience Does Not Protect Against Inattentional Blindness to Clinically Relevant Findings in Radiology," Psychonomic Bulletin and Review 28 (2021): 503–11.

Jeremy M. Wolfe et al., “Normal Blindness: When We Look But Fail To See,” Trends in Cognitive Sciences (July 21, 2022),….

Jeremy M. Wolfe et al., “Rare Items Often Missed in Visual Searches,” Nature 435 (2005): 439–40.