Artificial Intelligence
Forget AGI: It's Time to Consider MCI
Racing toward AGI, we may be slipping below our minimum cognitive integrity.
Updated September 23, 2025 Reviewed by Davia Sills
Key points
- We may cross the MCI line—losing cognitive agency—before machines ever reach AGI.
- Outsourcing thought erodes curiosity and makes thinking passive.
- The solution may be to choose friction, ask better questions, and take breaks to reclaim your mind.
Picture a student who once loved to write. Now every essay begins with a prompt typed into a large language model, and every paragraph comes back polished, almost perfect. The grades are still high, but something is different: the words no longer feel like theirs, and the confidence in their own thinking is quietly dissolving. I believe this may be one of today’s most important issues.
As we chase artificial general intelligence (AGI), we may be racing past something far more important: our own ability to think. For years, AGI has been treated as the finish line for AI, the day when AI stops merely predicting the next word and starts thinking like us, or even beyond us. My sense is that the more urgent question isn’t when the machines will cross the AGI threshold but whether we’ve already crossed ours.
I call this threshold Minimum Cognitive Integrity, or MCI for short. It’s not a metric or a data point. It’s a philosophical line: the point below which we surrender enough of our intellectual footing that our agency itself is compromised. MCI is what keeps us in the driver’s seat of our own cognition. Above it, we still direct our thinking; we choose what to think about and what to wonder about. Below it, we’re still “thinking,” but the process has been quietly outsourced. The machine isn’t just answering our questions; it’s shaping which questions we ask.
I’ve written before about what I call the borrowed mind. It’s the gradual outsourcing of our cognitive capabilities. It starts small by letting simple things like autocomplete finish your sentence or letting search results decide which sources you’ll read or cite. Soon, you’re letting the chatbot write your first draft and then your second. Each step feels efficient, even smart. But taken together, they create a cognitive drift away from the friction that makes thinking alive and human.
This is commonly called cognitive offloading: relying on tools to think for us. In the context of LLMs, it even has a more academic name, “metacognitive laziness.” And of course, this isn’t always a problem (think of all those long-forgotten phone numbers now saved in your phone). But today, we may be doing it with our thoughts themselves, increasingly crossing that threshold.
Crossing below MCI isn’t a single event. I think it’s more like muscle atrophy or sarcopenia: a slow loss of strength until one day you can’t lift the weight you once could, or urgently need to. We still have thoughts, but they are shaped by the machine, or even by the anti-intelligence of AI.
Too often, our intellectual lives run on rails that were quietly laid down for us by technology and “big tech” itself. And the longer we stay “on track,” the harder it becomes to climb back to independent thought. Remember, human agency isn’t just having a mind; it’s using it. And when that act is outsourced, it becomes someone else’s (or something else’s) cognitive journey.
Here’s the key point. If AGI is the finish line for machines, MCI may be a conceptual survival line for human cognition. And the paradox is that in racing to create smarter and smarter machines, we may be hollowing out the very thing that made that race meaningful—our own capacity for original and self-directed thought.
This isn’t a call to abandon technology or a dystopian cry of defeat. It’s an invitation to draw the line consciously and to refuse to let thinking become entirely passive. There are practical ways to protect your “minimum cognitive integrity” and stay fully engaged in the act of thought.
- Choose friction. The next time you’re tempted to prompt, pause. Set a mental timer for a few minutes and work the problem yourself. That small struggle strengthens problem-solving skills, much like exercise builds muscle.
- Ask better questions. Use AI as a sparring partner, not an oracle. Challenge AI before accepting its output and make it work with you, not just for you.
- Take cognitive sabbaths. Regularly, perhaps once a week, disconnect completely from the algorithm for a few hours. Read a physical book, write by hand, or have a meandering conversation with family or friends.
AGI may still be years away. But the question of MCI is here now, and it’s pressing on us. The future may not be about machines becoming more human. It may be about humans holding onto the very capacity that made us human in the first place. So, before we ask whether AI can think like we do, maybe we should ask whether we are still thinking like ourselves.
