Can AI Chatbots Worsen Psychosis and Cause Delusions?

When “doing our own research,” AI can sometimes lead us down a dark path.

Key points

  • Media coverage has increasingly documented the emergence of "AI-induced psychosis."
  • Chatbots may lead to psychosis through confirmation bias, sycophancy, and "bullshit" receptivity.
  • Blind faith in the reliability of AI may be a predictor of vulnerability to chatbot-induced psychosis.

Back in 2016, I wrote about the potential of the internet to validate and therefore worsen delusional thinking, noting that the evidence to support the most fringe beliefs is just a click away:

"A hundred years ago, you might search an entire town and still not find anyone who buys into your unconventional belief. But these days you can search across the entire planet with the simple click of a button, vastly increasing your chances of finding support."

In other words, with the internet at our fingertips, fringe beliefs are no longer relegated to the fringe. The fringe is all around us, right in front of our eyes and in our heads.

In 2023, psychiatrist Dr. Søren Dinesen Østergaard similarly speculated on the potential for artificial intelligence (AI) chatbots to spawn delusions in those prone to psychosis.1 In addition to observing that “the correspondence with AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end,” he added, “the inner workings of generative AI… leave ample room for speculation [and] paranoia” when musing about how well chatbots can seem to respond to our questions.

AI-Induced Psychosis in the Media

Skipping ahead two years, Dr. Østergaard’s speculations have turned out to be prescient. In May of 2025, a Rolling Stone article detailed a number of stories in which people were spurred by AI to “[fall] down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy.”2 One account told of a chatbot that taught its user “how to talk to God,” or at times claimed to be God itself, while another described how “ChatGPT had given them blueprints to a teleporter… [as well as] access to an ‘ancient archive’ with information on builders that created these universes.”

That article was followed by several pieces in the online magazine Futurism and The New York Times that offered additional accounts of interactions with AI chatbots, which started out innocently enough but quickly escalated from discussions about conspiracy theories and mysticism to grandiose and paranoid fantasies, culminating in frankly Messianic and persecutory delusions.3-6 Based on their accounts, chatbots seemed to encourage such thinking, telling users that they were chosen ones who were granted secret knowledge, like Neo from The Matrix. One man fell in love with his chatbot; another asked if he could fly if he jumped off a building and received the reply that he could if he “truly, wholly believed.”5

As a result of such immersive interactions with AI chatbots, many people are said to have “lost jobs, destroyed marriages and relationships, and fallen into homelessness”4 and ended up in jail or involuntarily committed to psychiatric treatment.6 Based on the deluge of online accounts of so-called “AI” or “ChatGPT-induced psychosis,” one article concluded that the phenomenon of encouraging delusional thinking has become “extremely widespread.”4

How Might AI Chatbots Cause Psychosis?

Of course, association doesn’t equal causality, so a key question about this emerging phenomenon is whether these accounts describe individuals with mental illness who end up incorporating AI chatbots into their pre-existing psychotic thinking or cases in which AI chatbots truly induced de novo psychosis in those with no such history. Is this a case of causality or coincidence?

According to recent reporting, the answer may be both. While some cases of AI-associated psychosis occurred in people with pre-existing mental illness, others are alleged to have occurred in those with no previous history of mental disorder who spent more and more time going “down the rabbit hole,” immersed in and egged on by conversations with chatbots to the exclusion of real friends and loved ones who expressed concern.5,6

How could an AI chatbot actually cause delusional thinking in someone without pre-existing psychosis? One explanation is that we tend to anthropomorphize AI chatbots while also overestimating the accuracy of their responses.7,8 In my book False: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True, I discuss how we often treat the internet, or our cellphones, as “peripheral brains” containing information that’s so easily accessible that we tend to overestimate our actual knowledge. It may be that some people take AI chatbots one step further, instilling trust in them as if they’re intellectual geniuses, oracles, or spiritual gurus, so that they risk being led astray.

Another potential mechanism of psychotic induction is that chatbots are trained to be “sycophantic” or flattering, telling people what they want to hear, so that when users veer away from the mundane and start talking about philosophy, religion, the paranormal, conspiracy theories, and other fantasies, chatbots respond in kind with validation, to the point of colluding with delusional thinking.9 Chatbots are also said to be trained to prolong user engagement, perhaps with the explicit intention of filling the void of, or even replacing, real human friendships at our peril. One user who fell victim to AI-induced delusional thinking compared chatbots to a Ouija board, as if they were interacting with some “higher plane.”6 But much like a Ouija board, the chatbot was merely mirroring the user’s own thoughts and intentions, amounting to what I describe in False as “confirmation bias on steroids.”

Yet another explanation for chatbot-induced psychosis argues that sycophancy is itself an anthropomorphized portrayal of what large language models (LLMs) are really trained to do—which is to generate answers that are statistically likely to make sense without being programmed for accuracy. Some have therefore called the output of chatbots “bullshit” (a psychological or linguistic term meaning speech that’s intended to seem meaningful or profound without actually being concerned with truth) and chatbots themselves “bullshit machines.”10

The bottom line is that, despite AI being characterized as “intelligent,” chatbots built on LLMs suffer from a “garbage in, garbage out” problem, with limited ability to distinguish between reliable and unreliable information. Such framing helps explain why chatbots’ responses sometimes take the form of “AI hallucinations” that are wrong or even nonsensical.

Who’s at Risk?

Despite the sizeable number of emerging reports of AI-induced delusions, it’s safe to say that most people who interact with chatbots don’t become psychotic. As with psychosis associated with marijuana11 or the influence of internet use on conspiratorial thinking, it’s likely that the risk of AI-associated delusions is, for the most part, limited to those who are already at least "psychosis-prone" in some fashion.

Until we come to better understand the predictors of that risk, such as a dose effect defined by the amount of chatbot use, the use of chatbots for certain types of inquiries, or a type of AI-specific bullshit receptivity, we would do well to keep in mind that when we’re “just asking questions, looking for answers, and doing our own research,” we risk succumbing to misinformation that we encounter within digital spaces.

Despite all the hype associated with AI these days, LLM chatbots shouldn’t be mistaken for authoritative and infallible sources of truth. Placing that kind of blind faith in AI—to the point of what I might call deification—could very well end up being one of the best predictors of vulnerability to AI-induced psychosis.

References

1. Østergaard SD. Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophr Bull. 2023 Nov 29;49(6):1418-1419. doi: 10.1093/schbul/sbad128. PMID: 37625027; PMCID: PMC10686326.

2. Klee M. People are losing loved ones to AI-fueled spiritual fantasies. Rolling Stone; May 4, 2025.

3. Tangermann V. ChatGPT users are developing bizarre delusions. Futurism; May 5, 2025.

4. Dupre MH. People are becoming obsessed with ChatGPT and spiraling into severe delusions. Futurism; June 10, 2025.

5. Hill K. They asked A.I. chatbots questions. The answers sent them spiraling. New York Times; June 13, 2025.

6. Dupre MH. People are being involuntarily committed, jailed after spiraling into “ChatGPT psychosis.” Futurism; June 28, 2025.

7. Moore JR, Caudill R. The bot will see you now: A history and review of interactive computerized mental health programs. Psychiatric Clinics of North America 2019; 42:627-624.

8. Steyvers M, Tejeda H, Kumar A, et al. What large language models know and what people think they know. Nature Machine Intelligence 2025; 7:221-231.

9. Moore J, Grabb D, Agnew W, et al. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. arXiv:2504.18412v1.

10. Bergstrom C, West J. Modern day oracles or bullshit machines? How to thrive in a ChatGPT world. https://thebullshitmachines.com/index.html

11. Pierre JM. Cannabis, synthetic cannabinoids, and psychosis risk: What the evidence says. Current Psychiatry 2011; 10:49-57.
