Artificial Intelligence
Reaping the Benefits of AI Without the Brain Rot
AI doesn’t have to be an all-or-nothing affair.
Posted December 4, 2025 Reviewed by Margaret Foley
Key points
- The transition to an AI-driven economy is real, and there is no benefit in denying it.
- The pressure of needing to use AI while avoiding its harmful effects can cause cognitive dissonance.
- It’s possible to use AI to gain benefits while limiting harm, but any advice must be specific and practical.
There is an old historical vignette about a real-life 11th-century Danish king named Canute the Great. As the story goes, King Canute had his throne placed on the seashore and, in front of his subjects, commanded the tide to stop. To no one’s great shock, the tide refused to cooperate and came in anyway. The story is often misread as a tale about the arrogance of Canute and, by extension, of all leaders who believe their power and greatness to be absolute. But its real moral concerns Canute’s wisdom: his ability to recognize the futility of resisting the inevitable.
We, as a society, are presently facing a “King Canute and the tide” moment when it comes to the incredibly rapid advancement of AI. We can either humbly accept that a profound transformation is taking place, one that rivals the Industrial Revolution in magnitude (and arguably surpasses it), or we can stick our heads in the sand and join what appear, alarmingly, to be growing trends of AI denialism and resigned helplessness. If we don’t want to be swept away by the tide, we mustn’t waste time in denial. But to move past denial, we must first acknowledge why it occurs and how natural it is under the circumstances. Only then can we reasonably propose a more balanced mindset that is neither AI denialism nor an uncritical embrace of AI maximalism.
The Dilemma of Embracing vs. Shunning AI
Let’s start by examining why it can be so tempting to give in to AI denialism or helplessness. Living in the age of AI presents a strange and disorienting conundrum. On one hand, prominent figures in business and technology never miss an opportunity to remind us that we must either keep up by learning to use AI or get left behind. Sometimes the rhetoric goes beyond just getting “left behind” to outright getting fired if you don’t use AI. That is no small amount of pressure and stress, and under the circumstances, denial becomes understandable since it is essentially a way to avoid pain.
At the same time, there is a growing wave of claims in the media, often with research to back it up, about the potentially deleterious effects of AI use on cognition, creativity, and critical thinking, not to mention how it can negatively impact human relationships and interactions. This is likely contributing to some of the public backlash against AI that we are seeing.
These opposing pressures, losing your job if you don’t keep up with AI, but risking harm to your cognition, relationships, and even sense of self if you do use it, can produce paralyzing levels of cognitive dissonance. What is one to do? Shun AI and fall behind socio-economically? Or use AI and become a dull, unimaginative zombie unable to think for yourself? Neither seems a reasonable choice to have to make. But what if there were a third, more balanced way to approach the dilemma? After all, as we’ve already established, the tide is coming in whether we like it or not, and we want to be wise like King Canute.
Encouraging Ideas in Search of Actual Practices
You may have seen articles in the media reassuring you that using AI doesn’t have to lead to brain rot and that it can actually boost creativity and critical thinking. Often, there will even be research to back up these claims. Jan Bieser, senior researcher and speaker at the Gottlieb Duttweiler Institute, has been quoted as saying, “Looking forward, the most successful ideas likely won’t come from bright thinkers alone but from those best at mindfully steering intelligent machines while remaining firmly in the driver’s seat.”
This sounds terrific, but how exactly does one remain firmly “in the driver’s seat”? The key is that AI boosts creativity only if (“if” being the operative word) it is used in a way that makes this possible. The problem, as you may have noticed with many such articles and claims, is that they rarely offer concrete explanations of exactly how to use AI so that it boosts creativity rather than diminishing it. Even when explanations are offered, they are often too vague or generalized to be of much assistance. Advice to have good “metacognitive strategies” or to maintain a “personal touch” when using AI, while correct, isn’t necessarily helpful for those who aren’t familiar with these concepts or with how to practice them in concrete ways. Just what is metacognition, exactly? And how do you maintain a “personal touch”? Such overly general statements raise more questions than they answer.
To transform that general encouragement into actual practice, the next post will take a deeper dive into concrete ways to use popular AI tools such as ChatGPT so that they boost your creativity and critical-thinking skills. The goal is to help readers find that elusive balance: using AI so they don’t “get left behind,” without sacrificing their cognitive abilities in the process. And that is something the wise King Canute would surely have approved of.