Personal Perspectives

The Dangers of AI Mental Health Misinformation

A Personal Perspective: Should AI be trusted to diagnose and treat mental health issues?

Key points

  • Large language models can misfire, generating nonsensical verbiage.
  • AI logic can produce harmful misinformation in response to mental health queries.
  • AI-generated therapy can’t incorporate the intuitive, sensitive interactions of in-person treatment.
Source: Pexels/Google DeepMind

Whenever I go online, I can’t shake the feeling that the internet is no longer prioritizing the needs of its users. As I try to navigate cyberspace, there’s this almost tangible sense of someone putting their hands on the steering wheel, hoping I won’t notice as they direct me away from my goal and down more profitable avenues of engagement.

Advertiser-driven revenue models often reward hyperbolic or provocative content, incentivizing even reputable sources to skew their writing toward the inflammatory. Worse, the actual hardware we use to access the internet is designed to synthesize everything we watch and read into a homogenized slurry—positioning strangers, con men, and algorithms to look just as significant as urgent personal communication. Sadly, it’s never been easier for people with bad ideas to get inside our heads, with potentially disastrous consequences for vulnerable people seeking mental health information online.

All of this assumes the people trying to sell us on their bad ideas are actually, you know, people. And that brings us to the nadir of online mental health misinformation: the procedurally generated content spewed out by large language model (LLM) programs—what tech companies are marketing as “AI.”

As someone who suffers from obsessive-compulsive disorder, I’m fully aware that the discourse around so-called “artificial intelligence” can trigger apocalyptic anxiety, especially because the companies selling this stuff keep performatively fretting about their products’ world-ending potential. CNN reports that 42 percent of CEOs surveyed at a Yale CEO Summit say, “AI has the potential to destroy humanity five to ten years from now.”

How Large Language Models Work

I am not an expert, and I can’t provide more than a cursory explanation of this technology, but the gist of it (with some help from Timothy B. Lee and Sean Trott’s excellent primer in Ars Technica) is: LLMs encode each word as a long list of numbers called a “word vector,” which positions that word as a point on a virtual graph with hundreds or thousands of axes. Each axis represents one dimension of similarity with other words, and longer vectors allow the program to capture more complicated semantic relationships. From there, network layers called “transformers” use the surrounding text to locate each word in the context of its sentence—for example, recognizing that a proper name (“Joe”) and a pronoun (“his”) often refer to the same person when used in close proximity (“Joe parked his car”). What all of this means is that LLMs “treat words, rather than entire sentences or passages, as the basic unit of analysis.”
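
For readers who want to see the word-vector idea made concrete, here is a minimal sketch in Python. The words, the three-dimensional vectors, and all the numbers are invented purely for illustration—real models use hundreds or thousands of dimensions and learn their values from enormous amounts of text—but it shows how “similarity” between words can be treated as simple geometry.

import math

# Toy "word vectors": each word is a list of numbers (coordinates on a
# virtual graph). These three-dimensional values are made up for
# illustration; real LLMs learn vectors with hundreds or thousands of axes.
word_vectors = {
    "cat":    [0.9, 0.1, 0.3],
    "kitten": [0.8, 0.2, 0.4],
    "car":    [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """Measure how closely two word vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words with related meanings end up with similar vectors...
print(cosine_similarity(word_vectors["cat"], word_vectors["kitten"]))  # close to 1
# ...while unrelated words end up farther apart.
print(cosine_similarity(word_vectors["cat"], word_vectors["car"]))     # much lower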

I know it’s dangerous to reassure OCD sufferers that their fears are impossible; external reassurance encourages us to rely on other people to manage our anxiety, instead of learning to confront and overcome it on our own. So I’m only going to say this once: You do not need to worry about the AI apocalypse. LLMs are never going to spawn the Terminator, any more than videoconferencing through a virtual reality helmet manifested the Matrix. It’s a fascinating technology with a variety of applications, but it has more in common with the auto-complete function on your cell phone than with anything out of science fiction. This becomes apparent when you examine the programs’ output. From what I’ve seen, it appears that the average LLM article is about 40 percent actual content, 50 percent rephrasing that content over and over (like Lucy trying to pad out the word count on her book report in the Charlie Brown musical), and 10 percent utter nonsense.

Potential AI Glitches

LLM output is worrisomely prone to what tech companies poetically refer to as “hallucinations,” where the LLM emphasizes the wrong word associations and produces self-evidently incorrect nonsense—like when Google’s LLM infamously encouraged its users to put glue on their pizza. I take issue with calling these incidents “hallucinations,” a term that feeds the sci-fi mythology of LLMs instead of calling the phenomenon out for what it really is: a glitch, a bug, a bizarre result from a broken computer program.

But while it’s easy to laugh when multinational corporations generate adhesive pizza recipes, it’s much less funny to imagine such errors in response to a query about mental illness. An LLM tasked with impersonating a therapist might provide an OCD sufferer with endless reassurance, encourage them to buy hand sanitizer in bulk, or advise them to self-medicate with a two-liter bottle of Diet Coke every four hours. The dramatic, but entirely fabricated, threat of an android uprising has obscured the actual, tangible harm that LLMs are doing right now—spreading misinformation generally, and about mental health especially. Bots are encouraging people, especially the young, to engage in harmful behaviors.

When I was in college, I spent a full year suffering from increasingly severe intrusive thoughts related to violence, sexuality, and religion. I did not have an OCD diagnosis, and when I was brave enough to describe my thoughts to my well-meaning but unqualified college counselor, she had no clue what to make of them. With no other explanation, I was convinced that these unbidden and out-of-control thoughts were a sign I was degenerating into total psychopathy.

During my sophomore year, in a moment of creative desperation, I turned to Google: “Why can’t I stop thinking about the worst possible things?” I was duly directed to Wikipedia’s article about pure-O OCD. An online article is never a substitute for a professional diagnosis, but I didn’t need it to be—I just needed the right language to describe my symptoms, and a bit of direction to seek out the appropriate kind of help. The internet of 2007 could provide that.

I shudder to think what could have happened if I’d asked an LLM.

Maybe an LLM could have helped me. Perhaps it would have located the word vectors of my query in proximity to the words “intrusive,” “thoughts,” and “OCD.” It took me three months of intensive treatment with OCD specialists just to get a handle on my symptoms, but maybe an LLM therapist would have done just as good a job. It’s possible an LLM could have coaxed me through the deep shame that stopped me from voicing my symptoms, helped me articulate the insidious complexity of my thoughts, and coached me through the grueling exposure and response prevention (ERP) therapy I needed to recover. Unlikely. Maybe, because of some quirk of its neural network, my LLM would have soberly informed me with absolute certainty that I was experiencing a psychotic break and should turn myself in to the police.

The Risks of the "AI Therapy" Industry

This is why I am horrified by the sudden eruption of the self-declared “AI therapy” industry. An “AI therapist” is neither intelligent nor a therapist—it is an Excel spreadsheet auto-arranging letters in the order that therapists usually use them.

I honestly can’t say if the internet has been a net positive or negative for our collective mental health. After all, without Google and Wikipedia, I would never have sought out the diagnosis I needed to escape my OCD spiral. But I am absolutely confident in stating that the internet of the 2020s is a dangerous place for vulnerable people, and that LLMs are part of the problem.

These machines are capable of causing tremendous harm, not because they are “intelligent,” but because they are artificial and can be both comically defective and dispassionately amoral—not unlike many of those who seek to profit from them.

Beware—today’s internet doesn’t care if it hurts you.

Copyright Fletcher Wortmann 2024

References

Alex Clark and Melissa Mahtani. “Google AI chatbot responds with a threatening message: ‘Human … Please die.’” CBS News, November 20, 2024. https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-huma…

Jackie Davalos and Dina Bass. “Microsoft Probes Reports Bot Issued Bizarre, Harmful Responses.” Bloomberg, February 28, 2024. https://www.bloomberg.com/news/articles/2024-02-28/microsoft-probes-rep…

Priscilla DeGregory and Peter Senzamici. “AI chatbot tells teen his parents are ‘ruining your life’ and ‘causing you to cut yourself’ in chilling app: lawsuit.” New York Post, December 10, 2024. https://nypost.com/2024/12/10/us-news/ai-chatbots-pushed-autistic-teen-…

Matt Egan. “Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years.” CNN, June 14, 2023. https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-wa…

Jack Kelly. “Google’s AI Recommended Adding Glue To Pizza And Other Misinformation—What Caused The Viral Blunders?” Forbes, May 31, 2024. https://www.forbes.com/sites/jackkelly/2024/05/31/google-ai-glue-to-piz…

Timothy B. Lee and Sean Trott. “A jargon-free explanation of how AI large language models work.” Ars Technica, July 31, 2023. https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-ho…

Lauren Silva. “4 AI Therapy Options Reviewed: Do They Work?” Forbes, December 6, 2023. https://www.forbes.com/health/mind/ai-therapy/
