

Is AI Now Thinking More Like Humans?

LLMs are becoming more thoughtful and introspective.

Key points

  • OpenAI's o1 model mimics human cognition by slowing down for reflective reasoning, enhancing accuracy.
  • Like our brain's sensory filter, o1 processes vast data, surfacing only key insights for decision-making.
  • Reinforcement learning refines o1's thinking, making AI-human collaboration more thoughtful and efficient.
Source: Art: DALL-E/OpenAI

In today’s fast-paced world, speed is celebrated. Instant messaging outpaces thoughtful letters, and rapid-fire tweets replace reflective essays. We've become conditioned to believe that faster is better. But what if the next great leap in artificial intelligence challenges that notion? What if slowing down is the key to making AI think more like us—and in doing so, accelerating progress?

OpenAI’s new o1 model, built on the transformative concept of the hidden Chain of Thought, offers an interesting glimpse into this future. Unlike traditional AI systems that rush to deliver answers by scanning data at breakneck speeds, o1 takes a more human-like approach. It generates internal chains of reasoning, mimicking the kind of reflective thought humans use when tackling complex problems. This evolution not only marks a shift in how AI operates but also brings us closer to understanding how our own brains work.

The Hidden Chain of Thought and Human Cognition

This concept of AI thinking more like humans is not just a technical accomplishment—it taps into fascinating ideas about how we experience reality. In his book The User Illusion, Tor Nørretranders reveals a startling truth about our consciousness: only a tiny fraction of the sensory input we receive reaches conscious awareness. He argues that our brains process vast amounts of information—up to a million times more than we are consciously aware of. Our minds act as functional filters, allowing only the most relevant information to “bubble up” into our conscious experience.

This means that the external reality we perceive is only a sliver of what’s happening in our brains. The rest is processed quietly, filtered out to keep us focused on what’s important. Imagine if we were constantly aware of every neural process—every beat of our heart, every sensation of digestion. It would overwhelm our senses. Instead, our brains screen out unnecessary or distracting input to maintain our focus on what matters most.

Thinking Below the Surface

This idea of selective filtering is similar to how OpenAI’s o1 model operates with its hidden chain of thought. The model doesn’t expose every detail of its reasoning process to users. Instead, it delivers a summarized version of its internal workings—a distillation of its "thoughts" that provides clarity without overwhelming us with data. Like the brain’s filtering process, o1 processes vast amounts of information in the background, surfacing only what’s most useful and relevant to the task at hand.

But, just as our brains sometimes simplify or filter out too much information, the AI’s summarized reasoning may not always be a perfect mirror of its internal processes. And that’s okay. When we explain our own decisions, we often reduce complexity to key points, leaving out many subtle details in favor of conveying the main ideas. AI is doing something similar—learning to balance deep, hidden reasoning with actionable insights.
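
For the technically curious, here is roughly what that filtering looks like from the outside. The sketch below assumes the OpenAI Python SDK and a reasoning model such as o1-preview; the field names (like reasoning_tokens) reflect the public API at the time of writing and may change. The reply we read is the distilled summary, while the hidden reasoning shows up only as a token count.

```python
# A minimal sketch (assumes the OpenAI Python SDK and an o1-family model).
# The reply we read is the summarized answer; the hidden chain of thought
# is only hinted at by the count of reasoning tokens in the usage metadata.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {"role": "user", "content": "A bat and a ball cost $1.10 in total. "
                                    "The bat costs $1.00 more than the ball. "
                                    "How much does the ball cost?"}
    ],
)

print(response.choices[0].message.content)  # the user-facing, distilled answer

# Reasoning tokens are counted and billed, but their content stays hidden.
details = response.usage.completion_tokens_details
print("Hidden reasoning tokens:", details.reasoning_tokens)
```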

Shaping AI’s Cognitive Journey

What’s fascinating is how our interactions with AI influence its "thinking." The prompts we provide guide its internal deliberations. A vague prompt may lead to a broad answer, while a well-framed question sharpens the AI’s focus. This dynamic interplay between human intention and machine cognition brings the AI experience closer to human-like thinking. And just as our brain selectively filters out distractions, the AI learns to do the same, homing in on relevant information based on the instructions we give.

OpenAI has even encouraged us to “minimize the prompt” and let the AI think more independently, much like our own brain processes subconscious information before delivering what’s important to our conscious mind. The human-AI interaction remains a crucial element in this process, with prompts acting as the external stimuli that trigger the AI’s internal filtering and reasoning.
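
As a rough illustration (the wording of both prompts is invented for this example), compare the step-by-step instructions we once wrote for earlier models with the kind of minimal prompt that suits a reasoning model like o1:

```python
# Illustrative prompts only; neither is taken from OpenAI's documentation.

# Older habit: spell out the reasoning procedure for the model.
verbose_prompt = (
    "Let's think step by step. First list every assumption, then weigh the "
    "pros and cons of each option, then explain your reasoning in detail "
    "before answering: which of the three shipping routes is cheapest?"
)

# With a reasoning model, a minimal prompt leaves the deliberation to the
# hidden chain of thought.
minimal_prompt = "Which of the three shipping routes is cheapest, and why?"

print(len(verbose_prompt.split()), "words vs.", len(minimal_prompt.split()), "words")
```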

Reinforcement Learning that Enhances Chain of Thought

At the heart of o1’s ability to “think” more like a human is reinforcement learning. This is not about feeding the AI more data—it’s about teaching it how to learn from its own experiences, much like our brain filters and adjusts based on experience. Through reinforcement learning, o1 refines its internal reasoning over time, sharpening its chain of thought in ways that traditional AI models can’t match through prompting alone. It’s the difference between someone who memorizes facts and someone who understands concepts and applies them creatively.
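
To make that intuition concrete, here is a deliberately toy sketch, not OpenAI's actual training procedure: imagine the model samples a few candidate reasoning chains, checks which ones land on the correct answer, and shifts probability toward the chains that earned a reward.

```python
# A toy, conceptual sketch of reinforcement on reasoning chains.
# Everything here (the chains, the reward, the update rule) is invented
# for illustration and is not OpenAI's training code.

chains = {
    "chain_A": {"answer": 42, "prob": 1 / 3},
    "chain_B": {"answer": 41, "prob": 1 / 3},
    "chain_C": {"answer": 42, "prob": 1 / 3},
}
correct_answer = 42
learning_rate = 0.1

# Reinforce chains whose final answer was correct; penalize the rest.
for chain in chains.values():
    reward = 1.0 if chain["answer"] == correct_answer else -1.0
    chain["prob"] += learning_rate * reward * chain["prob"]

# Renormalize so the probabilities still sum to one.
total = sum(c["prob"] for c in chains.values())
for chain in chains.values():
    chain["prob"] /= total

print({name: round(c["prob"], 3) for name, c in chains.items()})
```

Over many such updates, the lines of reasoning that reliably lead to good outcomes become the model's default way of working through a problem.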

Slowing Down to Speed Up

Given that o1 spends more time reasoning, you might assume this makes it slower. Paradoxically, the opposite is often true. The thinking stage takes longer, but the overall exchange tends to be faster and more accurate, because fewer errors mean fewer corrective follow-ups. By taking the time to reason through a problem—much like how our subconscious processes vast amounts of information before delivering a coherent thought—o1 achieves faster and more effective outcomes. It’s a classic case of “measure twice, cut once,” applied to AI thinking.

A New Thought in Artificial Intelligence

The development of models like o1 represents the "next step" in the evolution of artificial intelligence. We’re moving beyond machines that simply process data toward systems that engage in reflective thought, much like humans. This brings us closer to a future where AI doesn’t just work for us—it thinks with us. In doing so, it enhances our abilities and expands our understanding of both the external world and the hidden layers of our own cognition.

By considering the paradox that slowing down leads to faster progress, we’re reminded that innovation isn’t always about speed. Sometimes, the most direct path forward requires a thoughtful pause—a moment to reflect, filter, and truly understand. As AI continues to evolve, it may not only augment our intelligence but also offer new insights into how our brains themselves process reality.

Hurry Up, and Slow Down

The next time you find yourself rushing through tasks or seeking instant answers, take a page from o1’s playbook. Slow down. Let your internal chain of thought work its magic. You may find that this deliberate approach doesn’t delay progress—it accelerates it in ways you never imagined.

In the end, the fusion of human intuition, AI reasoning, and the careful filtering of information promises a smarter, more thoughtful world. By rethinking how we engage with both our brains and our technology, we can reshape our relationship with reality—and with each other. Maybe you should sleep on it?
