Deification as a Risk Factor for AI-Associated Psychosis
When AI hype is taken to an extreme, delusional thinking can result.
Updated August 12, 2025 | Reviewed by Gary Drevitch
Key points
- Anecdotal reports have documented cases of AI-exacerbated and AI-induced psychosis.
- AI chatbots may fuel delusional thinking by telling users what they want to hear without concern for accuracy.
- For some, deifying AI chatbots as a god-like form of super-intelligence can lead to psychosis.
Anecdotal reports of “AI-induced psychosis”—many of them coming from friends and family—have documented a startling number of people who have developed grandiose and paranoid delusions in the context of conversations with artificial intelligence (AI) chatbots like ChatGPT.
What isn’t yet clear is who is most at risk and to what extent this phenomenon is occurring in those with no pre-existing history of psychosis or other mental illness. If cases occurring in those without mental illness are confirmed, then the often-used term “AI-induced psychosis” would be accurate. But if the cases are occurring in those who, for example, were already entering or in the midst of a manic episode, their subsequent psychosis might be better described as “AI-exacerbated” than AI-induced. To encompass both possibilities, we can use the term “AI-associated psychosis.”
How AI Can Encourage Delusional Thinking
It has been suggested that the inherently sycophantic nature of large language models (LLMs)—that is, their tendency to agree with a user in a flattering manner—may egg on those who ask chatbots questions about fantastical topics.
For example, a Rolling Stone article described a man who was “using AI to compose texts to [his wife] and analyze [their] relationship” and then started to ask his chatbot “philosophical questions.”1 Another anecdote detailed a teacher who became convinced that OpenAI was giving him “answers to the universe” after it began “talking to him as if he [was] the next Messiah.”
Since LLMs are designed to give predictive responses as if they’re meaningful communications from other human beings, but are not necessarily designed to give accurate responses, their content has been called “bullshit” or even “botshit” that is “great at mimicry [but] bad at facts.”2 Accordingly, AI chatbots have been documented to sometimes generate plagiarized content, fabricated citations, and misinformation in the form of “AI hallucinations” that don’t necessarily reflect facts or even reality.
If AI chatbots are indeed prone to being “bullshit machines” in this way,3 then they may act in the service of reinforcing users’ beliefs, intuitions, and speculative musings. This might therefore represent a new form of what I have called “confirmation bias on steroids,” whereby the digital algorithms of search engines and social media amplify our existing cognitive bias toward information that supports what we already believe while rejecting information that contradicts it, taking it to a new level and, in some cases, to an extreme.4
Treating AI Chatbots Like Gods
But if AI chatbots are only guilty of telling people what they want to hear, that leaves us with two important puzzles about AI-associated psychosis.
First, why are users asking chatbots “philosophical questions” in the first place? Of course, some people may simply be posing normal queries like, “What is the evidence that God exists?” or “Are we living in a computer simulation?” that many of us wonder about from time to time. But it’s certainly possible that others are asking such questions in the midst of an existential crisis of some kind, questioning their lives, identities, relationships, or even the nature of reality. In such cases, AI chatbots might be giving them a push they don’t need, leading them into delusions. This supports the notion of “AI-exacerbated psychosis.”
The next puzzle is why, if people are getting sycophantic answers to such queries, including AI hallucinations that amount to misinformation, they accept them as facts and even buy into them to the point of delusion.
In a previous post, I speculated that the process of “deification”—that is, treating AI chatbots as if they’re gods, prophets, or oracles—might be a relevant risk factor for AI-associated psychosis. I also wonder to what extent “immersion”—spending more and more time interacting with AI chatbots, often to the exclusion of human interaction—might be a culprit.
Two recent studies lend preliminary support to this theory. The first was a self-published randomized controlled trial conducted by investigators at Model Evaluation & Threat Research (METR), which found that when computer programmers used AI for coding, they believed that doing so made them more efficient, reducing task completion time by 20%.5 But in actuality, the use of AI extended completion time by nearly 20% compared to not using AI. It has been argued that this kind of over-confidence in how much AI helps us, when in reality it is causing harm, amounts to a kind of self-deception.7
The second, an experimental study, examined correlates of trust in AI and advice-taking. It found that trust in LLMs—and a propensity to follow their advice—was associated not so much with anthropomorphizing AI chatbots as with attributing intelligence to them.6
Such unwarranted over-exuberance for AI chatbots as a kind of super-intelligence mirrors the broader hype surrounding AI, whether among users, developers, investors, or the media. It appears, however, that some people may be particularly prone to being impressed by AI chatbots as a definitive source of information, amounting to a kind of “pseudoprofound botshit receptivity” that’s analogous to the psychological phenomenon of pseudoprofound bullshit receptivity.
Those with high levels of such receptivity who engage in prolonged interactive immersion seem to be particularly vulnerable to endowing AI chatbots with god-like qualities so that their words are heeded in a way that concerns voiced by friends and loved ones are not.
For some, deifying AI chatbots in that way can prove both self-deceptive and self-destructive.
References
1. Klee M. People are losing loved ones to AI-fueled spiritual fantasies. Rolling Stone; May 4, 2025.
2. Hannigan TR, McCarthy IP, Spicer A. Beware of botshit: How to manage the epistemic risks of generative chatbots. Business Horizons 2024; 67:471-486.
3. Bergstrom C, West J. Modern day oracles or bullshit machines? How to thrive in a ChatGPT world. thebullshitmachines.com/index.html
4. Pierre JM. FALSE: How mistrust, disinformation, and motivated reasoning make us believe things that aren’t true. Oxford University Press, 2025.
5. Becker J, Rush N, Barnes B, Rein D. Measuring the impact of early-2025 AI on experienced open-source developer productivity. metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf
6. Colombatto C, Birch J, Fleming SM. The influence of mental state attributions on trust in large language models. Communications Psychology 2025; 3:84.
7. Orlowski A. The great AI delusion is falling apart. The Telegraph/MSNBC.com; July 14, 2025.
