
How Artificial Intelligence Could Change Psychotherapy

Will AI replace the human-to-human experience of psychotherapy?

Key points

  • AI technologies are being examined as replacements or "extenders" of psychotherapists.
  • The disconnect between what we see and what we expect to see contributes to the unease of the "uncanny valley."
  • AI may present opportunities for increasing fidelity to psychotherapeutic models and consistency in therapy.
Source: Pixabay/kiquebg

I can’t read the news or look at my social media feeds without seeing items about the potential implications of artificial intelligence (AI) for our future lives. Alarmed professors report that college term papers are being written with ChatGPT; art is being created from text prompts by websites like Midjourney, calling into question the future of human artists. AI is also being examined as a way of providing mental health services. In a longform article in the New Yorker, Dhruv Khullar explored the implications of using AI chatbots to provide mental health services, especially in systems such as the VA, where demand for treatment often outstrips the supply of clinicians. AI has even been explored as a “tripsitter” for ketamine-assisted therapy, and companies such as Compass Pathways have announced that they are exploring sophisticated digital therapeutics to accompany psilocybin therapies. In the same way that teletherapy during the pandemic extracted the “core” aspects of the clinical encounter and placed them on a video platform, AI therapy promises to substitute the skills of a computer algorithm for the skills of a human therapist. Even if some of these tasks can be accomplished by a sophisticated AI model, what is lost?

The sophisticated language-processing capacity of these computer models is impressively nuanced, far better than the early attempts at replicating Rogerian therapy by the ELIZA program of the 1960s. ELIZA’s tone-deaf parroting of what the client was saying seems laughable now and sounds like a script for a poorly written college counseling textbook:

Client: “I feel sad.”

Eliza: “I understand you feel sad.”

Programs like ELIZA quickly failed the Turing Test, a method developed by mathematician and early computer scientist Alan Turing in which a computer is said to pass if the human asking the questions cannot tell whether the replies come from a human or from a machine. The newer AI models are, from the perspective of content and semantics, impressively human-like. Yet something seems to be missing.
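
For the technically curious, the kind of trick ELIZA relied on can be sketched in a few lines of code. The rules below are hypothetical, a minimal illustration of keyword-and-reflection substitution rather than the original program, but they show why the replies feel so tone-deaf: there is no understanding, only surface pattern matching.

import re

# Hypothetical rules in the spirit of ELIZA's script: match a keyword
# pattern, then echo the client's own words back inside a canned template.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "I understand you feel {0}."),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
]

def reply(statement: str) -> str:
    # No model of meaning here, only pattern matching on the surface text.
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(reply("I feel sad."))     # -> I understand you feel sad.
print(reply("Nothing helps."))  # -> Please go on.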

In the 1968 novel Do Androids Dream of Electric Sheep?, later adapted as the film Blade Runner, Philip K. Dick’s protagonist, Deckard, is a bounty hunter in the year 2021 tasked with “retiring” rogue androids. Because the androids are, on the surface, indistinguishable from humans, he uses a fictional test of empathy to distinguish the humans from the androids, who are incapable of genuine human feelings.

In robotics, the term “The Uncanny Valley” was coined by Masahiro Mori to describe the uneasy feeling, even revulsion, that we feel when interacting with a robot that is human-like but not quite real. Takashi Ikeda and his team discovered that watching the unnatural movements of androids triggers activity in the observer’s subthalamic nucleus, an area of the brain that is impacted by Parkinson’s disease, an example of “mirror neurons,” in which activity in the brain of the observer echoes that in the brain of the actor. In the disconnect between what we are observing and what we expect to observe lie the seeds of the unease of the uncanny valley.

New technology has always attracted alarmist critics. Nineteenth-century physicians worried about the effects on the human body of traveling at a mile a minute on a train. The advent of television brought calls to ban the “boob tube” lest it rot our children’s brains. In more recent years, social media and cellular technology have been blamed for the rise in rates of anxiety and depression among young people. Is the concern around AI just Cassandra’s most recent song?

Time will reveal the impacts this technology has on our human relationships, but many issues will need to be closely examined. Who owns the private information that we reveal to a therapy chatbot? Human therapists are bound by privacy laws, but can those same rights be waived when we mindlessly agree to the “terms and conditions” required to install and use an app? We’ve seen with the rise of social media that millions of people are willing to offer their personal information to marketers in exchange for an easy way of connecting with friends and family. While AI therapy has the potential to make therapy more affordable and thus more accessible, are we creating a system in which those who can afford therapy with a real person pay for it in cash, while those who cannot receive AI therapy in exchange for the company selling their personal data to marketers?

While AI may pose particular threats to privacy that will need to be addressed (the 1996 HIPAA law was written with fax machines in mind, not the internet), is it possible that AI therapy might actually improve psychotherapy, not just in terms of access, but in fidelity to treatment models, consistency of delivery, and the ability to collect “big data” about what works in psychotherapy and what doesn’t? Therapy has historically been challenging to study because of differences in the way it is delivered across practitioners. It is often said that the fit between therapist and client matters more than the type of therapy delivered. Could a sophisticated AI model adjust itself to the needs of a client in a way that a human therapist cannot?

I feel uneasy even imagining such a scenario: an uncanny valley of my own. Therapy has historically been one of the most deeply human-to-human experiences. Within that interpersonal relationship, experiences such as transference or cognitive distortions can be named, examined, and changed as they emerge in the therapy relationship. Additionally, modern neuroscience has discovered that humans co-regulate their autonomic nervous systems through proximity and cues communicated by facial expressions. This task already seems more challenging via teletherapy; it seems impossible to co-regulate with an AI algorithm.

Therapy has always been about more than the talking that goes on inside the room. For the person with social anxiety or agoraphobia, leaving the house and having an interaction with another person may provide a kind of exposure therapy that could be the most valuable part of the encounter. Similarly, the trip to the clinic may be the behavioral activation needed by a patient with depression. While teletherapy and AI may reduce barriers to treatment, they cannot provide these essential meta-layers of psychotherapy.

As with many technologies, inception often brings more questions than answers. As we have seen with other technology intended to streamline our clinical work lives (I’m looking at you, electronic health record), this tech will be imagined by non-clinicians as a way to improve the lives of patients and clinicians. Without meaningful guidance from clinicians, they may get it wrong. While I suspect the human therapist isn’t going away anytime soon, it will be up to mental health clinicians to advocate for deploying this technology thoughtfully where it is useful, and for retaining the human-to-human interaction where that is best.
