Artificial Intelligence
AI vs. the Imperfect Human Brain
For AI to advance further, it needs to emulate the flaws of the human mind.
Posted September 12, 2024 | Reviewed by Margaret Foley
Key points
- AI is designed by engineers to work flawlessly, but that isn’t how the human brain works.
- The human brain has many “weaknesses” that are actually strengths in disguise, such as mind-wandering.
- For AI to become truly intelligent, we need to differentiate true human weaknesses from strengths in disguise.
As a neuroscientist, I am not afraid of AI.
In fact, I would love to see it improve faster. One major obstacle to a significant leap is that AI is developed by engineers, and as such designed to work flawlessly. But this is not how the brain works, and for very good reasons.
Algorithms are deterministic: for input A, the output is always B. Computer memory is meant to be stable and reliable. If you store a list of words, you get that exact list back when you call for it later. When you save a photo, you want the computer’s memory to maintain the exact original content, pixel by pixel.
In the same vein, artificial reasoning and decision-making are based on fixed sets of rules and conventions. When you give a computer a task, you expect an outcome that will adhere to these rules. So it seems natural that software is geared toward accuracy, rigid boundaries, clear categories, and overall "common sense." The problem is that these formulas and algorithms miss the real prowess of human intelligence along the way.
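To make that contrast concrete, here is a minimal Python sketch of what "stable and reliable" means in software (the word list is arbitrary and purely illustrative):

```python
# Classical software memory: what you store is exactly what you get back.
stored = ["soda", "tooth", "taste"]

def recall(memory: list[str]) -> list[str]:
    """Deterministic recall: the same input always yields the same output."""
    return list(memory)  # an exact copy, with nothing added or reinterpreted

assert recall(stored) == stored            # true on every run
assert recall(stored) == recall(stored)    # for input A, the output is always B
```

The brain offers no such guarantee, and as we will see below, that is often a feature rather than a bug.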
Weakness of the Mind?
When I teach freshmen about the wonders of the brain, I like to start with illusions and other demonstrations that might be interpreted as weaknesses of the mind. Once the giggles subside, I turn to show them how each of these instances is, in fact, a manifestation of mental strength.
For example, take this research on perception. When we align three separate Pac-Man shapes like the corners of a triangle, as shown below, people "see" an actual white triangle. This is called the Kanizsa triangle, after the Italian psychologist Gaetano Kanizsa, who first published it.
In this illusion, we hallucinate lines that are simply not there.
In fact, even neurons in the visual cortex that detect lines in specific orientations respond as if there were actual lines connecting the Pac-Man shapes.
Is this a mistake that should be avoided in artificial systems? Or should AI deliberately try to imitate this illusion because it is an implicit reflection of our brains’ powerful inclination to rely on previous experience?
The brain chooses to see a triangle here because the probability of a white triangle lying over three circles is much higher than the probability of three rare shapes aligning so perfectly. An AI application that expects the contours of a triangle to be represented by actual pixels would see only three Pac-Man shapes here: "the thing-in-itself," as the philosopher Immanuel Kant would call it.
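The argument is essentially Bayesian, and a toy calculation makes it concrete. In the sketch below, the priors are made-up numbers chosen only to illustrate the logic; both hypotheses explain the pixels equally well, so the prior over world states decides:

```python
# A toy Bayesian reading of the Kanizsa display; the priors are invented
# for illustration, not taken from any vision model.
p_triangle_over_circles = 1e-3   # occlusion by a simple shape: common
p_three_aligned_pacmans = 1e-7   # perfect accidental alignment: very rare

# The likelihood of the image is the same under both hypotheses, so the
# posterior is driven entirely by the priors.
total = p_triangle_over_circles + p_three_aligned_pacmans
posterior_triangle = p_triangle_over_circles / total

print(f"P(triangle | image) = {posterior_triangle:.4f}")  # 0.9999
```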
The best AI can do for now is recognize this display as the famous Kanizsa demo and report that it contains an illusory triangle, without really seeing the illusory contours as we do. AI systems continue to refine their ability to learn from experience, extracting the statistics embedded in massive numbers of examples and incorporating context, which could one day make them more human in their illusions.
The Brain Can Expand and Bend Reality
Research on human memory provides numerous illusions like this, each of which can be interpreted either as a weakness or as a reflection of an implicit strength.
In one example, you give participants in your experiment the following list of words to remember: soda, tooth, taste, sour, sugar, candy, pie, chocolate, and honey. Then, you show them individual words and they have to say whether each appeared on the list or not.
You would find that their memories are pretty accurate, but most of them will also say "yes" to the word "sweet," although it was not on the list. Because it is related to the other words, they falsely "recall" it as having appeared on the list they were shown.
This is the DRM effect (named after Deese, Roediger, and McDermott), and it shows us that the brain can expand and bend reality for a higher purpose. In this specific instance, the co-activation of representations that were not part of the input (called spreading activation) serves the function of preparing us for what else might be relevant in the specific context.
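Spreading activation is easy to caricature in code. The following minimal sketch uses invented association weights and an arbitrary recognition threshold, but it reproduces the signature of the DRM effect: "sweet" ends up active enough to be "recognized" even though it was never presented.

```python
# Invented association weights between studied words and their associates.
associations = {
    "sugar": {"sweet": 0.8, "candy": 0.5},
    "candy": {"sweet": 0.7, "chocolate": 0.4},
    "honey": {"sweet": 0.6},
    "sour":  {"taste": 0.3},
}

studied = ["soda", "tooth", "taste", "sour", "sugar",
           "candy", "pie", "chocolate", "honey"]

activation = {word: 1.0 for word in studied}  # studied items start fully active

# One pass of spreading activation: each word excites its associates.
for word, neighbors in associations.items():
    for neighbor, weight in neighbors.items():
        activation[neighbor] = (activation.get(neighbor, 0.0)
                                + weight * activation.get(word, 0.0))

threshold = 1.5  # arbitrary cutoff for answering "yes, it was on the list"
false_memories = [w for w, a in activation.items()
                  if w not in studied and a > threshold]
print(false_memories)  # ['sweet'], lit up by sugar, candy, and honey
```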
As neuroscience advances, we realize that generating predictions is one of the brain’s primary operations. What seems like faulty memory is, in fact, the by-product of a superior mechanism meant to prepare us for the future. Yet if a programmer deliberately wrote code that returned words that were not part of the original list, she would be called into her boss’s office to be disciplined.
Making AI Truly Intelligent
Perhaps the brain’s most profound strength disguised as a malfunction is our minds’ propensity to wander. It is also the most pervasive and the most consuming: it turns out we wander away for about 50 percent of our day. For half of our waking hours, our mind is not where our body is.
This, of course, is not modeled in AI, and you would sound crazy suggesting that an algorithm should drift away randomly while it has a clear goal to accomplish. After all, even we tend to feel guilty when we are caught wandering. That is what society has instilled in us, but there must be a reason why such an efficient organ as the brain spends so much time and metabolic energy on wandering. Evolution would not let us do it if it were not essential for our being.
And, indeed, research from the last decade reveals more and more ways in which mind-wandering is helpful, even critical. Many of the mental operations necessary for our everyday actions are executed during our frequent mind-wandering.
We simulate future scenarios (such as an upcoming job interview or vacation) in great detail to anticipate and be ready. We plan and we make decisions based on the various possible outcomes we foresee. We think about the intentions of others through "theory-of-mind" processes. We come up with creative ideas and solutions. We’re able to do all of this thanks to the creative "incubation" that is carried out by our wandering mind. There is clearly a monumental and purposeful role for our absent mind.
If we use AI as a tool, of course, we want it to do exactly what we ask it to do. Just as we do not want our oven to stop cooking at random times, we would not want a control-tower algorithm tracking and orchestrating the trajectories of multiple inbound and outbound airplanes to suddenly divide its computational resources with anything else that might distract it. In such cases, we do not want humans to wander either, but rather to remain in the here and now.
But what about more typical and less extreme situations?
It will require further research before we know how to implement mind-wandering in AI in a constructive manner, even if that is hard to imagine right now. This research should include considerations such as the different ways in which the conscious and the unconscious mind perform computations: serial vs. parallel, limited vs. unlimited capacity, and more.
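One loose analogy from machine learning hints at what a constructive role might look like: search algorithms that occasionally abandon the current line of attack can escape local optima that purely goal-driven ones get stuck in. The sketch below is only that analogy, with a made-up objective and an invented "wander rate"; it is not a model of human mind-wandering.

```python
import math
import random

def hill_climb(score, start, steps=10_000, wander_rate=0.0):
    """Greedy search that occasionally 'wanders' to an unrelated state.

    A wander_rate of 0.0 gives a purely task-focused agent; a small
    nonzero rate sacrifices short-term progress but can escape local
    optima (a hypothetical analogy to incubation, nothing more).
    """
    x = best = start
    for _ in range(steps):
        if random.random() < wander_rate:
            x = random.uniform(-10, 10)        # drift somewhere unrelated
        else:
            step = random.choice([-0.1, 0.1])  # stay on task: one local move
            if score(x + step) > score(x):
                x += step
        if score(x) > score(best):
            best = x
    return best

def bumpy(x):
    # Many local peaks; the single global peak sits at x = 0.
    return math.cos(x) - 0.1 * abs(x)

random.seed(0)
print(bumpy(hill_climb(bumpy, start=8.0)))                    # stuck near 0.38
print(bumpy(hill_climb(bumpy, start=8.0, wander_rate=0.05)))  # usually close to 1.0
```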
If we want AI to be truly intelligent, like humans and even more so, we cannot replicate only the explicit functions that are most evident to us. Of course, there are things the brain does that would be outright counterproductive or even subversive in AI development, such as stereotypical thinking, numerous cognitive biases (such as those first characterized by the late Amos Tversky and Daniel Kahneman), or the often harmful fallibility of eyewitness testimony.
Evolution is not done, and there is always room for improvement. So this is not a call to blindly copy everything human but merely to delve deeper so that we can distinguish between weaknesses that are just that and weaknesses that are strengths in disguise.
And mind-wandering should be at the top of our list.
In our current attempts to create intelligence, we ignore what intelligent humans do with their minds for half of their waking time. How can we expect to generate real intelligence?