
AI Isn’t Just a Tool—It’s a Test

AI is a test not of its intelligence, but of ours.

Key points

  • AI isn’t the test—it’s the mirror. What’s truly measured is our discernment and curiosity.
  • AI's allure lies in its convincing mimicry, challenging our capacity for critical thinking.
  • The danger isn’t AI’s rise but our retreat; staying present is how we steward our own intelligence.
Source: ChatGPT modified by NostaLab.

Two recent articles point to something subtle but significant unfolding in our relationship with artificial intelligence. In Rolling Stone, writer Miles Klee critiques the growing presence of AI with a cultural skepticism that’s hard to ignore. He paints it as theater—flashy, convenient, and uncomfortably hollow. In contrast, my own post in Psychology Today offers a different but related view: that AI, especially large language models (LLMs), presents what I call cognitive theater—an elegant performance of intelligence that feels real, even when it isn’t. Klee questions the cultural spectacle. I question the cognitive seduction. Both perspectives point to the same deeper truth, one that is as fascinating as it is concerning.

I see it almost every day. Smart, thoughtful people become wide-eyed and breathless when an AI tool mimics something clever, or poetic, or eerily human. There’s often a moment of awe, followed quickly by a kind of surrender.

This isn’t gullibility; it’s enchantment. And I understand it. I’ve felt it too. But part of my job now—part of all our jobs—is to gently pull people back from that edge. Not to diminish the wonder, but to restore the context. To remind ourselves that beneath the magic is machinery. Beneath the fluency, prediction. And that if we mistake performance for presence, we may forfeit something essential—our own capacity to think with intention.

The Performance of Thought

Today’s AI doesn’t think in any traditional sense. It doesn’t understand what it says or intend what it outputs. And yet, it speaks with remarkable fluency, mimicking the cadence, tone, and structure of our real thoughts. That’s not a bug—it’s the design. Large language models operate through statistical prediction. They draw on enormous datasets to generate text that fits the prompt, the moment, and often the emotion of the exchange.

But here’s the catch: the more convincing the performance, the more likely we are to suspend disbelief. We hear intelligence. We project understanding. And over time, the line between real and rendered cognition begins to blur.

The danger is not in what the AI knows—it "knows" nothing—but in what we assume it knows because it sounds like us.

When Convenience Replaces Cognition

In professional and personal settings alike, AI is stepping into roles traditionally defined by human judgment. In medicine, AI-assisted diagnostics and decision-support tools hold great promise—offering speed, scalability, and pattern recognition that can genuinely enhance care. But the challenge isn’t just technical accuracy; it’s cognitive trust. As these systems grow more confident in tone, we must be careful not to confuse confidence with correctness. A model trained on partial or biased data can still sound persuasive. That’s why an element of criticality needs to be part of our engagement—from the kitchen to the boardroom.

Across sectors—education, medicine, business—the potential of AI is real. And so is the value of cognitive off-loading. Used wisely, it can reduce noise, accelerate routine tasks, and give us more space to think creatively and act decisively. But there’s a line—subtle but critical—between off-loading and outsourcing ourselves. The risk isn’t overreach but under-engagement: letting the tool replace not just effort, but intention.

That’s where the danger lies—not in what AI can generate, but in what we quietly stop generating on our own.

The Risk Isn’t Replacement—It’s Retreat

For years, we’ve debated whether AI will replace human workers, thinkers, or creators. But the more subtle and immediate risk is that we might retreat from the very tasks that make us most human—not because we’re forced to, but because it’s easier. The friction that once demanded engagement is starting to dissolve. That’s not necessarily a problem. But it is a shift that deserves our attention.

This is not an anti-technology message. I’ve spent decades championing innovation and embracing the potential of digital transformation. But even the most transformative tools require thoughtful use. The real danger isn’t that AI will take over. It’s that we’ll slowly, quietly, stop showing up with the full force of our human discernment.

The risk is that we'll let fluency replace curiosity and, tragically, let performance stand in for presence.

Holding the Line

So what does it mean to hold the line? It means staying mentally engaged, even when the machine offers a shortcut. It means reviewing that AI-generated draft with a critical eye. It means remembering that insight doesn’t arrive fully formed—it often comes from the struggle to find clarity. That struggle, that cognitive friction, is still ours to own.

Holding the line doesn’t mean rejecting AI. It means partnering with it intelligently. It means using its fluency as a springboard—not a substitute. The best uses of AI don’t diminish us—they demand more of us. They call us to think more clearly, to question more deeply, and to refine what matters most in the age of synthetic thought.

Still Ours to Steward

AI doesn’t care. But it performs—brilliantly. And when we accept that performance without question, the test isn’t about the machine—it’s about us. This moment isn’t about whether AI can pass for intelligent. It’s about whether we can stay rooted in our own intelligence—our curiosity, our discernment, our responsibility.

The machine doesn’t ask to be trusted. We choose to trust it. It doesn’t decide—we do. The real risk isn’t what AI becomes, but what we become when we stop showing up. But if we stay engaged—asking better questions, challenging easy answers, thinking with intention—AI becomes more than a mirror. It becomes a lens that sharpens what’s already within us.

Because the truth is, this isn’t just a technological problem. It’s a human one too.
