Artificial Intelligence and the Inversion of Intelligence
New research supports "anti-intelligence" as AI’s defining feature.
Posted October 16, 2025. Reviewed by Michelle Quirk.
Key points
- AI now predicts behavior but lacks understanding—what I call anti-intelligence.
- A new study on the “Centaur” model exposes this illusion of thought without mind.
- AI's anti-intelligence is mimicry mistaken for meaning.
Anti-intelligence is what happens when machines perform thought without having one. It’s the inversion of intelligence itself. Think about it: Where human thought moves from experience to understanding to meaning, anti-intelligence moves from data to pattern to prediction. Now, at first glance, that might seem like progress. Large language models (LLMs) have a "computational brilliance" that no human could match. But beneath that fluency is a kind of cognitive emptiness, or perhaps a type of brilliance that reflects thought without a thinker.
The Cognitive Configuration Space
In a recent post, I presented this divide in what I call the Cognitive Configuration Space. Humans occupy the upper-left—symbolic, autobiographical, and continuous through time. LLMs reside in the lower-right—pattern-based, stateless, and distributed across vast dimensions of probability. The distance between them isn’t just technical; it’s philosophical.
A less technical articulation might be that humans remember themselves, while artificial intelligence (AI) approximates us.
A Mirror Without a Mind
A new paper from the Florida Institute for Human and Machine Cognition (IHMC) captures this same concept in more empirical terms. The authors critique an LLM called Centaur, presented as a “foundation model of human cognition.” Trained on more than 10 million behavioral trials from psychology experiments, Centaur can predict human choices across hundreds of tasks. But prediction, as the IHMC team warns, is not cognition. They write:
“Centaur is a path divergent from unified theories of cognition, one that moves toward a unified model of behavior sans cognition.”
That phrase—“behavior sans cognition”—captures the essence of anti-intelligence perfectly. The model doesn’t understand; it correlates. To drive the point home further, it doesn’t think; it finds a statistical fit. Its success lies in the precision of this mimicry, and no matter how "asymptotically close" the output may appear to human cognition, it's still a counterfeit.
The Absence of Mechanism
The Centaur team claims their system “simulates how humans do the task.” Yet, as the IHMC response points out, Centaur’s translation of experiments into natural language means no human has ever performed the same version of the task. The resemblance between human thought and machine prediction is therefore statistical, not structural.
Simply put, Centaur lacks mechanism—no working model of memory or intention. It’s a mirror, not a mind. In my framework, that’s the defining feature of anti-intelligence.
The Allure of the Fluent Machine
Centaur’s achievement is real—it predicts behavior with uncanny accuracy. But its meaning is hollow. The authors end with a line that could have been written for this post: “Centaur isn’t even wrong.” That’s not an insult; it’s a warning. When AI can no longer be falsified, and its success is defined by correlation rather than comprehension, we exit the realm of science and enter simulation.
So, here's my sound bite: Anti-intelligence is the glimmer of fluency mistaken for the light of understanding.
Our Human Obligation
As we peer into this new configuration space—between symbolic continuity and pattern-based probability—we face a choice: Do we chase the statistical perfection of prediction, or the fragile, meaning-rich depths of understanding?
Anti-intelligence will keep getting better at imitation, but our task is to get better at discernment.