
Where Does Cognition Live?

The illusion of thought in machines that compute, not think.

Key points

  • LLMs emulate thought but lack the awareness and understanding that define true cognition.
  • Their brilliance lies in computation, making them powerful tools for amplifying human creativity.
  • Defining their role preserves human thought while maximizing the potential of human-machine collaboration.
Source: Art: DALL-E/OpenAI

I’ve always been fascinated by the tension between what seems real and what is real. It’s easy to get swept up in the allure of complexity—those moments where patterns emerge that feel like they hold deeper meaning. Whether it’s the swirling motion of a pendulum carving art into sand or a colony of ants building what looks like a planned city, I often find myself tumbling down the rabbit hole, asking, “Is there something more going on here?”

The same question haunts me when I think about Large Language Models (LLMs). These computational marvels generate text that can feel uncannily human—sometimes poetic, sometimes profound, often persuasive. And yet, I struggle with a lingering question: Is this just computation? Or are we seeing the beginnings of something profoundly transformative—a spark of cognition?

That struggle brought me here: to untangle the illusion of “emergence” from the reality of what these systems truly are. Because while their outputs may look intentional, even intelligent, the mechanics tell a different story. What I see is pseudo-emergence, a functional byproduct of scale and complexity, not cognition. And I must admit, I sometimes find this perspective a bit sad. Perhaps it’s because the illusion of intention feels so compelling—until you realize it’s just an illusion.

Pseudo-Emergence: Complexity Without Cognition

To unravel this, let’s start with nature. Consider the humble pendulum, sweeping through a bed of sand. Its motion, governed by simple physical forces—gravity, inertia, friction—traces mesmerizing patterns in the sand. We might call it beautiful, even artistic. But is it art? No. It’s mechanics. The pendulum doesn’t “intend” to create; it simply follows the laws of physics.
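
For readers who want to see just how bare those mechanics are, here is a minimal sketch in Python (my own illustration, not anything from the article): a sand pendulum is commonly approximated as two damped oscillations at right angles, a harmonograph. The frequencies and damping constant below are illustrative choices, not measured values; the point is that the pattern is fully determined by the equations.

```python
import math

# Toy harmonograph: a sand pendulum approximated as two damped sine
# waves at right angles. The frequencies and damping constant are
# illustrative choices, not measurements.

def trace(steps=2000, dt=0.01, fx=3.0, fy=2.0, damping=0.05):
    """Return the (x, y) path of a damped two-axis pendulum."""
    points = []
    for i in range(steps):
        t = i * dt
        decay = math.exp(-damping * t)       # friction shrinks each swing
        x = decay * math.sin(2 * math.pi * fx * t)
        y = decay * math.sin(2 * math.pi * fy * t + math.pi / 2)
        points.append((x, y))
    return points

# The path is a slowly collapsing Lissajous figure: "art" produced by
# nothing more than gravity, inertia, and friction.
for x, y in trace()[:5]:
    print(f"x={x:+.3f}  y={y:+.3f}")
```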

Ant colonies offer another example. Each ant operates on simple rules: follow pheromone trails, avoid collisions, and carry food back to the nest. Yet collectively, these actions produce astonishingly efficient systems for foraging and building. The colony’s behavior is emergent in the sense that it arises from the interactions of individual ants, but there’s no awareness, no cognition behind it. It’s a system governed by rules, not thought.
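
To make that concrete, here is a toy sketch in Python (again my own illustration, vastly simpler than a real colony): each simulated ant obeys only the local rules above, in a one-dimensional world with a nest at one end and food at the other. A usable trail emerges, and no planner appears anywhere in the loop.

```python
import random

# Toy colony: each ant follows only local rules -- drift toward stronger
# nearby pheromone, and lay pheromone while carrying food home.
# No ant holds a blueprint of the trail.

CELLS = 10                        # 1-D world: nest at cell 0, food at the end
pheromone = [0.0] * CELLS
ants = [{"pos": 0, "carrying": False} for _ in range(10)]

for _ in range(300):
    for ant in ants:
        if ant["carrying"]:
            ant["pos"] -= 1                    # rule: head back to the nest
            pheromone[ant["pos"]] += 1.0       # rule: mark the path
            if ant["pos"] == 0:
                ant["carrying"] = False
        else:
            left = max(ant["pos"] - 1, 0)
            right = min(ant["pos"] + 1, CELLS - 1)
            # rule: prefer the neighbor with more pheromone; the noise
            # keeps ants exploring before any trail exists
            go_right = (pheromone[right] + random.random()
                        > pheromone[left] + random.random())
            ant["pos"] = right if go_right else left
            if ant["pos"] == CELLS - 1:
                ant["carrying"] = True
    pheromone = [p * 0.95 for p in pheromone]  # rule: pheromone evaporates

# Pheromone now concentrates along the nest-to-food route: an organized
# trail produced by rule following, with no awareness anywhere in the loop.
print([round(p, 1) for p in pheromone])
```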

The Mirage of LLM Cognition

Now, let’s turn to LLMs. These systems produce outputs that often feel cognitive, even intentional. They write poems, solve puzzles, and engage in dialogue with startling coherence. Some may describe this as “emergent behavior,” as though reasoning and creativity have spontaneously bubbled up from their layers of neural networks.

But peel back the curtain, and what do you find? The same mechanistic principles as the pendulum and the ants:

  • LLMs follow statistical rules, predicting the next word in a sequence based on patterns in their training data (see the sketch after this list).
  • Their “emergent” abilities—reasoning, generalization, creativity—are not designed into them but arise as byproducts of scale and computational complexity.
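
To see how bare that mechanic really is, here is a toy sketch in Python (a bigram model of my own construction; real LLMs use neural networks over tokens and are orders of magnitude more capable, but the core move is the same in miniature): count which word tends to follow which, then generate by sampling those counts. Every step is a statistical guess, never a thought about what the words mean.

```python
import random
from collections import Counter, defaultdict

# Toy bigram model: predict the next word purely from statistics of
# the training text. A real LLM is far more sophisticated, but this is
# the same mechanic in miniature.

corpus = ("the pendulum traces patterns in the sand "
          "the ants build trails in the sand "
          "the model predicts words in the text").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    candidates = follows[word]
    if not candidates:               # dead end in the toy corpus: restart
        return "the"
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# Generate one prediction at a time: each step is a statistical guess
# about what comes next, not an idea about what it means.
text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))
```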

This is pseudo-emergence, not cognition. But even so, it’s nothing short of extraordinary. The ability to mimic the patterns of human thought at this scale is a profound achievement, revealing the depth and brilliance of computational systems.

The Myth of LLM Cognition

What makes this distinction so important is the mythologizing that often surrounds LLMs. Their outputs are so convincing that it’s tempting to assign them human-like qualities: intelligence, creativity, even consciousness. But this is a critical misunderstanding:

  • Cognition requires intentionality, awareness, and understanding. It’s not just about producing coherent outputs; it’s about grasping their meaning and purpose.
  • LLMs lack all of this. They compute; they don’t cogitate.

To use another analogy: just because a machine plays chess doesn’t mean it understands chess. It’s executing patterns, not contemplating strategy. Similarly, just because an LLM can write a poem doesn’t mean it understands poetry—it’s arranging words, not reflecting on beauty.

A Distinction with a Difference

Whether we call LLMs “cognitive” or their outputs “emergent” is more than a semantic choice; it shapes how we perceive and use these systems. Words matter. When we conflate computation with cognition, we risk diminishing the profound richness of human creativity—a uniquely human endeavor driven by intention, meaning, and the depth of lived experience. These are qualities no algorithm, however sophisticated, can replicate.

While LLMs are not cognitive agents, they can still serve as collaborators—partners in thought that extend and amplify human capabilities. Their brilliance lies not in understanding or independent thought, but in computation that dynamically complements our intellectual potential. By establishing this role, we can better appreciate how they enhance creativity without overshadowing the irreplaceable ingenuity of the human mind. This clarity ensures our partnership with these systems remains grounded in reality, leveraging their strengths to expand, rather than replace, human thinking.

Reclaiming Clarity

The pendulum doesn’t create art. The ants don’t plan their colony. And LLMs don’t think. Yet all of them produce results that seem, on the surface, to transcend their underlying mechanics. This is the beauty—and the limitation—of pseudo-emergence.

This distinction isn’t just philosophical; it’s practical and deeply human. By understanding what LLMs are and what they aren’t, we can better appreciate their role in the “cognitive age.” They compute, and they do it brilliantly. But the cognition? That’s still ours, and it’s worth protecting.
