
AI Hallucinations in Medicine and Mental Health

Whatever we call them, AI hallucinations are just a new form of misinformation.

Key points

  • AI hallucinations are wrong answers given by large language model (LLM) chatbots.
  • They frequently occur because LLMs are more concerned with plausibility than accuracy.
  • When seeking answers related to mental health, hallucinations tell us that AI isn't ready for primetime.
Source: Jan Vašek / Pixabay

The term “artificial intelligence (AI) hallucinations” refers to a phenomenon in which AI algorithms “perceive patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”1 Although such errors can occur in computers that “see” with a camera or “hear” with a microphone, and so bear some resemblance to perceptual hallucinations like the voice-hearing that occurs in people with psychosis, these days the term is most often applied to wrong answers generated by large language model (LLM) chatbots like ChatGPT or Gemini.

The Fallibility of AI Chatbots

Countless examples of LLM chatbots generating ridiculously wrong and potentially harmful responses to questions have made news headlines in recent years. Back in 2024, Mark Zuckerberg’s Meta AI was giving obviously wrong answers to simple math problems.2 Meanwhile, Google AI was offering users diet tips like eating rocks and trying glue as a pizza topping, along with a recommendation to clean washing machines with bleach and vinegar, which combine to form poisonous—and potentially lethal—chlorine gas.3

Earlier this year, the Chicago Sun-Times was outed for using AI to generate a summer reading list of 15 books, 10 of which turned out not to exist.4 Elon Musk’s X-based Grok started churning out posts supporting debunked conspiracy theories about “white genocide” in South Africa and Holocaust denial.5 And last month, Robert F. Kennedy Jr.’s official "Make America Healthy Again" report made unsubstantiated claims allegedly backed up by references to published research studies that were completely made up thanks to AI.6 This same problem—of supplying both erroneous information and fake references—has previously been noted to be pervasive when ChatGPT is used to write articles on medical topics.7

The potential for harm is obvious. Back in 2023, an AI chatbot offered dieting advice to someone struggling with a restrictive eating disorder.8 Other AI chatbots purporting to offer psychotherapy have resulted in patient suicides.9 And just last week, it was reported that a user who was struggling with addiction and using a “therapy chatbot” for support was told by the AI app to take a “small hit of meth[amphetamine] to get through [the] week.”10

Due to such concerning reports, the American Psychological Association has labeled the use of generic AI chatbots for mental health support “a dangerous trend” and has urged the Federal Trade Commission to regulate the technology and put safeguards in place for responsible development and transparent marketing.9

Why Do AI Hallucinations Occur?

To understand why such “hallucinations” happen, it must be recognized that generative LLMs aren’t really intelligent, smart, or truly creative in the way that we typically associate with human cognition or reasoning.11 They’re only algorithms programmed to predict the next word or sequence of words based on statistical probability, where the “goal is to generate plausible content, not to verify its truth.”12
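To make that concrete, the toy sketch below (a hypothetical illustration, not code from any actual chatbot) shows what “predicting the next word based on statistical probability” amounts to: each word is chosen because it often followed the previous one in the training text, and nothing in the procedure ever checks whether the resulting sentence is true.

```python
# Toy illustration of next-word prediction: a tiny word-pair model that picks
# each next word by how often it followed the previous one, with no notion of truth.
# (Hypothetical example; real LLMs use neural networks trained on vast corpora,
# but the core objective -- a plausible continuation -- is the same.)
import random
from collections import defaultdict, Counter

corpus = (
    "the chatbot gave a confident answer . "
    "the chatbot gave a wrong answer . "
    "the patient asked a question . "
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a "plausible" continuation -- plausibility, not accuracy, drives each choice.
word, sentence = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Run a few times, this sketch will happily produce “the chatbot gave a wrong answer” or “the chatbot gave a confident answer” with equal indifference; scaled up enormously, that indifference to truth is what makes hallucinations possible.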

If such responses seem human-like, it’s only because they represent output generated from a massive number of word strings (e.g., sentences) culled from internet databases, all of which were written by real people at some point. This is essentially the same process behind so-called “AI art,” which generates images from existing photographs and artwork created by real-life human beings, a practice that many have claimed amounts to plagiarism.13

AI hallucinations, therefore, reflect an inherent limitation of LLM chatbots that aren’t necessarily concerned with accurate responses. Since they scour huge amounts of data to predict their responses, they’re vulnerable to a “garbage in, garbage out” problem. And that's potentially dangerous when the questions being answered concern medicine and mental health.

Misinformation by Any Other Name

Recognizing that they're false statements, and taking into account the perpetuation of stigma surrounding psychosis, some have argued that we stop calling them AI “hallucinations” and instead refer to them as “fabrications,”14 “confabulations,”15,16 or even “bullshit” (defined as statements more concerned with the appearance of truth than with actual truth).17 Personally, I think we ought to sidestep all of that anthropomorphizing and just call AI hallucinations a new form of misinformation.

Although LLM programmers are trying to reduce AI hallucinations, it has been argued that this may be impossible. Indeed, recent news indicates that the problem may be getting worse rather than better, with some newer “reasoning systems” generating misinformation as much as 51-79% of the time depending on the task.18,19

In my book, FALSE: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True, I argue that false beliefs often stem from trust in misinformation. Within the topic of mental health, misinformation already abounds and has been made worse by social media. For example, a recent investigation by The Guardian found that more than half of the top 100 TikTok videos about mental health contained misinformation.20

Now that we’re in the midst of the AI technology boom, we can expect it to become more and more developed, relied upon, and trusted as a source of medical and mental health information going forward. We’re already at a point where answers curated by AI routinely appear at the top of our internet searches, seemingly saving us the trouble of searching for information ourselves or checking to verify its sources. And so, while AI will no doubt be marketed as a new, improved pathway to truth, it might just be one more form of misinformation to contend with.

Of course, AI does feed us reliable information too. It might yet transform medicine for the better and could even end up replacing human healthcare workers one day. But the current phenomenon of AI hallucinations gives us plenty of reason to be both skeptical and cautious about what the future holds. In the meantime, with LLM chatbots not necessarily programmed for accuracy, humans would be smarter to conclude that AI is not yet ready for primetime as a reliable source of health information.


References

1. IBM. What are AI hallucinations? September 1, 2023.

2. TOI Tech Desk. Meta AI on WhatsApp is getting this basic maths question wrong and why it may not be alone. Times of India; July 18, 2024.

3. Turner B. Google’s AI tells users to add glue to their pizza, eat rocks and make chlorine gas. Livescience.com.

4. Channick R. Chicago Sun-Times Sunday insert contains 10 AI-generated fake books in summer reading list. Chicago Tribune; May 18, 2025.

5. Oremus W. How Elon Musk’s “truth-seeking” chatbot lost its way. The Washington Post; May 24, 2025.

6. Tuquero L. How fake citations appeared in RFK Jr.’s MAHA report: Here are generative AI’s red flags in studies. Politifact.com; May 30, 2025.

7. Bhattacharyya M, Miller VM, Bhattacharyya D, Miller LE. High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus 2023; 15:e39238.

8. Wells K. An eating disorders chatbot offered dieting advice, raising fears about AI in health. NPR.org; June 9, 2023.

9. Abrams Z. Using generic AI chatbots for mental health support: A dangerous trend. American Psychological Association; March 12, 2025.

10. Tangermann V. Therapy chatbot tells recovering addict to have a little meth as a treat. Futurism.com; June 2, 2025.

11. Mitchell M. How do we know how smart AI systems are? Science 2023; 381:6654.

12. MIT Management. When AI gets it wrong: Addressing AI hallucinations and bias.

13. Marr B. Is generative AI stealing from artists? Forbes; August 8, 2023.

14. Emsley R. ChatGPT: These are not hallucinations—they’re fabrications and falsifications. Schizophrenia 2023; 9:52.

15. Li R, Kumar A, Chen JH. How chatbots and large language model artificial intelligence systems will reshape modern medicine. JAMA Internal Medicine 2023; 183:596-597.

16. Maleki NB, Padmanabhan B, Dutta K. AI hallucinations: A misnomer worth clarifying. 2024 IEEE Conference on Artificial Intelligence (CAI); 2024:133-138.

17. Hicks MT, Humphries J, Slater J. ChatGPT is bullshit. Ethics and Information Technology 2024; 26:38.

18. Murray C. Why AI hallucinations are worse than ever. Forbes; May 6, 2025.

19. Metz C, Weise K. A.I. is getting more powerful, but its hallucinations are getting worse. New York Times; May 5, 2025.

20. Hall R, Keenan R. More than half of top 100 mental health TikToks contain misinformation, study finds. The Guardian; May 31, 2025.
