Brace for a Big Shock About Your Perception of Reality
Why AI and the brain perceive only what they predict, not what's actually there.
Updated February 8, 2025 Reviewed by Jessica Schrader
Key points
- Mounting evidence suggests the brain does not perceive reality, but its best prediction of reality.
- Predicting what is about to happen produces fast, energy-efficient decisions under conditions of uncertainty.
- A need for fast, efficient decisions under uncertainty also drove AI models to become prediction engines.
Reality isn't what you think it is.
Everyday experience suggests that our senses inform us about what’s going on in the world around us.
But recent advances in cognitive neuroscience suggest that we don’t perceive what our senses report, but what our brain predicts they should report[1].
Here’s an example:
When you scan the column of numbers from top to bottom, the “3” reads as a “three.” But when you scan the sequence of letters from left to right, the same character looks more like a “B.”
Neuroscientist Karl Friston of University College London developed a prediction model of the brain that explains why your perception of the character depends upon context: our brains do not perceive objective reality, but their prediction of reality. When your brain expects numbers, it "sees" numbers; when it expects letters, it "sees" letters, even though the stimulus is identical. In Friston's framework[1], high-level brain centers (e.g., visual cortex) instruct lower-level (sensory) processes what to report, based on the high-level centers' expectations.
In other words, our expectations heavily bias our perceptions.
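The "3"-versus-"B" effect can be sketched in a few lines of Bayesian arithmetic. This is my own toy illustration, not Friston's actual model: the stimulus is equally compatible with both readings, so the context-driven prior alone decides what you "see."

```python
# Toy sketch (an assumption for illustration, not Friston's model): an
# ambiguous character read as "B" or "3" depending on a context prior.

def perceive(prior_letter: float) -> str:
    """Return the more probable reading given a context-driven prior."""
    # The stimulus itself is ambiguous: equal evidence for both readings.
    likelihood = {"B": 0.5, "3": 0.5}
    posterior_b = likelihood["B"] * prior_letter
    posterior_3 = likelihood["3"] * (1 - prior_letter)
    return "B" if posterior_b > posterior_3 else "3"

print(perceive(prior_letter=0.9))  # scanning letters: prints "B"
print(perceive(prior_letter=0.1))  # scanning numbers: prints "3"
```

With identical sensory evidence, only the expectation changes, and with it the percept.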
If the extensively documented phenomena of cognitive biases[2] haven’t convinced you of this, here are more direct experiences that should convince you.
Close your left eye and very, very gently push leftward on the corner of your right eye with the tip of your finger. You will perceive the world jumping suddenly to the right as its image shifts on your retina, even though the world stays still while your eye moves.
But such image movement on your retina does not always create the perception of movement: when you command your eyes to move, shifting the image of the world across your retinas, the world appears rock-stable.
In both cases, the image of the world moved on your retinas, but you perceived the world moving only when you nudged your eye. Because your brain had not commanded your eye muscles to move in the normal way, it could not "zero out" the shifting image on your retina, producing the illusion that the world, not your eye, had moved.
One last example. Have you ever sipped a drink that you thought was fluid A when it was actually fluid B (like milk vs. orange juice)? If so, fluid B likely tasted far stranger than it would have if you had expected fluid B in the first place.
Why your brain is a prediction engine, not a reality engine
Survival in a dangerous, uncertain world requires fast, pragmatic decisions and actions. And fast responses demand simplified decisions, which require our brains to see a simple world that is "clearer than the truth."
Prediction greatly simplifies decision-making. Scan the image below.
If I asked you the open-ended question, "What do you see in the image?" it would likely take you a while to spot anything notable. But if instead I said, "Locate the dead-leaf-shaped butterfly," you would find the cleverly camouflaged insect quickly.
Being able to predict what was in the image greatly accelerated decision-making in a noise-filled, uncertain world.
Prediction-enabled fast decisions confer other important evolutionary advantages, among them decreased energy consumption (your brain consumes about 20% of your body's energy; the harder you must think, the more precious calories you burn).
A key facet of Dr. Friston’s prediction model of perception is the ability of the brain to spot errors in prediction encountered in unexpected settings, and to update its predictions. For instance, now that you know that butterflies can look like leaves, you would likely spot them much faster if you saw them again.
Updating your prediction models having seen the butterfly might even—by showing you how good insect camouflage can be—help you spot similar cleverly disguised insects, like the bug masquerading as a green leaf, below.
An equally shocking truth about how AI works
When I tell people that ChatGPT and other generative AI chat engines do one simple thing, they can't believe it, because the strategy sounds far too simple to produce the complex answers that these engines provide. All a generative AI chat engine does is predict the next word (token) in a text sequence, whether comprehending what you have asked or framing its own response to your question.[3]
That's all there is to it. In fact, the "PT" in ChatGPT stands for "Pre-trained Transformer," an architecture that learns from training data (akin to our brain's past experiences) using mechanisms, such as hierarchical structuring and "self-attention," that resemble Friston's predictive-coding model of the brain.[3]
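The one trick, repeatedly predicting the likeliest next token, can be demonstrated with a toy bigram model. This is an assumption-laden stand-in for illustration only: real transformers learn far richer statistics, and the tiny "corpus" below is invented.

```python
# Toy next-token predictor (a bigram table -- a crude stand-in for a
# transformer's learned probabilities, not ChatGPT's actual mechanism).
from collections import Counter, defaultdict

corpus = "the brain predicts the world the brain predicts the future".split()

# Count which token follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int) -> str:
    """Repeatedly append the most likely next token -- that's the whole trick."""
    tokens = [start]
    for _ in range(length):
        candidates = follows[tokens[-1]].most_common(1)
        if not candidates:  # no known continuation: stop
            break
        tokens.append(candidates[0][0])
    return " ".join(tokens)

print(generate("the", 5))  # prints "the brain predicts the brain predicts"
```

Even this crude version produces grammatical-looking text; scale the table up to billions of learned parameters and the same predict-the-next-token loop yields ChatGPT-style fluency.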
The similarities between the brain and generative AI are not accidental, likely representing a form of “convergent evolution” where, given similar environments and selection pressures, wildly divergent species will evolve into similar forms.[4]
For instance, although some 160 million years have passed since marsupials and placental carnivores shared a common ancestor, the marsupial wolf (left) strongly resembles the canid wolf (right).
Similarly, although radically different in underlying "technology," biological brains and silicon brains face the same shaping forces: the imperative for speed in noisy, uncertain environments, the need to conserve energy (AIs consume vast amounts of it), and the need to adapt to unexpected circumstances, among others.
Why evolved similarities between the brain and AI matter
Although perceiving predictive reality vs. actual reality confers major advantages to both brains and AIs, there are significant downsides. Both humans and AIs make perceptual mistakes, such as when we see the straight lines below as broken and AIs “hallucinate.”
And such perceptual distortions have major consequences for interpersonal relationships. For instance, in the current polarized political environment, we can easily be misled to believe that friends on the opposite end of the political spectrum are flawed in some way because they reach "bad" conclusions from looking at the same information we see.
But that's not the case. If the prediction model of the brain is correct, people with divergent political beliefs literally "see" different "facts" than you do. And these divergent perceptions could differ as much as your perception of the group of leaves (above) differs from that of someone who has had no exposure to stealthy butterflies (you "see" a butterfly while they "see" a leaf).
This won’t help you convince anyone that their beliefs are “wrong.” But it could help you understand them better—and them to understand you.
Sadly, French writer Gustave Flaubert was right when he observed: "There is no truth. There is only perception."
References
[1] https://www.nature.com/articles/nrn2787 (Brain as prediction model)
[2] https://pubmed.ncbi.nlm.nih.gov/30714890/ (Cognitive bias)
[3] https://arxiv.org/abs/1706.03762 (AIs predict next word)
[4] https://pubmed.ncbi.nlm.nih.gov/19433086/ (Convergent evolution)