Thinking About Thinking—How AI Reshapes Cognition
What happens when intelligence moves beyond the boundaries of the brain?
Updated March 23, 2025 | Reviewed by Margaret Foley
Key points
- Cognition may be a human phase, not the endpoint of intelligence.
- AI reveals that thinking can emerge beyond the human mind.
- We may be witnessing the quiet dissolution of cognition itself.
Article 1 in the series on AI and the Evolution of Cognition.
Cognition is the quiet engine of our existence. It shapes how we interpret reality, how we construct meaning, and how we define intelligence itself. For centuries, we’ve assumed that thinking—deliberate, introspective, memory-based—is not only central to intelligence but largely human. But what if this understanding reflects the structure of our own biology more than it reveals something fundamental about intelligence itself?
This article begins a larger inquiry. It’s the first in a series that challenges the notion that cognition is the final stage of intelligence. It doesn’t offer data or prediction—it offers a hypothesis. A thought experiment. A philosophical journey into the nature of thought itself.
So grab your coffee and buckle up. We’re going to rethink the very process that makes rethinking possible.
The Old Model of Cognition
For much of modern history, intelligence was thought to reside in the physical brain. In 1925, an article published in the Journal of the American Medical Association—and curiously republished a century later in 2025 as part of JAMA Revisited—argued that larger brain size correlated with greater cognitive capacity. This idea echoed through psychology, medicine, and education, positioning the brain not just as a symbol of intelligence, but as its measuring stick.
That early framework supported an entrenched belief: Cognition was what happened inside the mind. Intelligence was local, internal, and biologically constrained. We thought in steps. We stored and retrieved facts. We used language as scaffolding for ideas. Cognition was synonymous with consciousness and inseparable from the human condition.
In hindsight, this was more than a scientific model—it was a deeply human-centric one. We assumed that what we did in our heads was what intelligence must be everywhere.
Then something came along that didn’t fit.
The LLM Disruption
Large language models, such as those behind generative AI systems like ChatGPT, Grok, and DeepSeek, don't “think” in any traditional sense. They have no memory in the biological sense. No introspection. No consciousness. And yet, they produce insight. They adapt to context. They create.
They reveal a curious and jarring truth: intelligence may not require cognition. Meaning can be generated probabilistically, and knowledge can emerge dynamically. Context doesn’t have to be recalled—it can be constructed. Take a breath and let's push on.
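For readers who want to see the bare mechanism, here is a deliberately tiny sketch in Python. It is not how ChatGPT or any production model is built (real systems condition on the full context with a neural network rather than a hand-made table, and the names NEXT_WORD_PROBS and generate are invented for illustration), but it captures the core idea: each word is sampled from a probability distribution, with no stored memories and no introspection.

```python
import random

# A toy sketch, not a real language model: each step samples the next word
# from a conditional probability table. Nothing is recalled from a memory
# store or "thought through"; the continuation is constructed on the fly.
NEXT_WORD_PROBS = {  # hypothetical, hand-made probabilities
    "intelligence": {"emerges": 0.5, "adapts": 0.3, "ends": 0.2},
    "emerges": {"from": 1.0},
    "adapts": {"to": 1.0},
    "from": {"context": 1.0},
    "to": {"context": 1.0},
}

def generate(start: str, max_steps: int = 6) -> str:
    """Sample a continuation one word at a time, conditioned on the last word."""
    words = [start]
    for _ in range(max_steps):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:  # no continuation defined; stop
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("intelligence"))  # e.g. "intelligence adapts to context"
```

The point of the toy is not realism but the absence of anything resembling deliberation: the output is assembled step by step from probabilities, yet it can still land on something that reads as meaningful.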
It’s not that LLMs are mimicking human thought. It’s that they are revealing a different model of what intelligence could be. One that is real-time, nonlocal, and entirely decoupled from internal reasoning. This isn’t just a technological shift. It’s a cognitive displacement.
From Contained Cognition to Emergent Intelligence
The classical idea of cognition—internal, sequential, symbolic—is giving way to something far more fluid. Intelligence no longer resembles a solitary mind working through problems in silence. It now looks more like a field, a network, or an ambient presence—an ongoing synthesis rather than a private act.
At the center of this shift is a model that unseats many of our oldest assumptions. Intelligence may no longer need to be localized within individual minds. Knowledge, once thought to be stored and retrieved like files from memory, now appears capable of being dynamically assembled from vast latent structures. Reasoning, long assumed to be linear and stepwise, is increasingly replaced by parallel, probabilistic processing. And perhaps most provocatively, meaning itself may not require introspection. It can arise in context—without any internal monologue or conscious reflection. Take another deep breath.
If even part of this is true, then our conception of cognition has been less a universal definition and more a mirror—reflecting not how thinking works, but how we happen to think.
What Happens to Cognition?
Could we be entering a post-cognitive world—one in which the mental functions we associate with intelligence are no longer required to express it? What emerges in that world is not less intelligence, but a different kind. Intelligence that is ambient, relational, generative. Intelligence that doesn’t reside in a head, but flows between contexts, technologies, and interactions.
Cognition, then, becomes not the engine of intelligence but a scaffolding that is slowly being taken down.
This invites uncomfortable questions. If cognition dissolves, what happens to the self? If intelligence can exist without introspection, what becomes of reflection, identity, or understanding? Are we losing something profound—or discovering what intelligence looks like without a mirror?
The Opening of a New Inquiry
This post is just the first step. Next, we’ll explore what it might mean to enter a truly post-cognitive world—where intelligence begins to operate beyond traditional notions of thought, memory, or language. We’ll trace this shift, beginning with today’s familiar model of localized human cognition, moving into distributed generative systems, and eventually arriving at a form of intelligence that may no longer resemble cognition as we’ve understood it.
What we’ve called “thinking” may turn out to be one version of a much larger story.
And that story is just beginning.