Artificial Intelligence
Could Artificial Intelligence Replace Psychologists?
Should you be worried?
Posted February 8, 2024 | Reviewed by Gary Drevitch
Key points
- Current and forthcoming developments in AI will alter the behavioral health landscape.
- Intelligent machines could replace psychologists and other behavioral health professionals on a large scale.
- We risk losing real human-to-human interaction and relationships built on trust.
- We must consider the implications of AI on people and society and always place people first.

Behavioral health chatbots now provide therapeutic services that would otherwise have required a professional to offer. Woebot Health’s chatbot Woebot, for example, was granted “Breakthrough Device Designation” by the FDA in 2021 for the treatment of postpartum depression (PPD). The app’s conversational agent (the AI component that allows it to engage in dialogue) uses Cognitive Behavioral Therapy techniques while establishing an empathetic therapeutic relationship. Even more groundbreaking and disruptive is the arrival of generative AI and the use of Large Language Models, such as those powering OpenAI’s ChatGPT. The interactive capabilities of this commercially available technology can now be connected to virtual human avatars, enabling real-time conversation and access to general and domain-specific knowledge that was not possible just a couple of years ago.
I’ve heard colleagues say that AI will not replace human professionals but will augment what psychologists do. There is nothing to fear, they say. While I agree AI can and does augment what we do, the reality of supplanting psychologists and other healthcare professionals on a large scale is closer than ever. Psychologists perform many roles and tasks, of course, but I think the one most people are considering, and I’m focusing on, is interactive therapy services (i.e., psychotherapy and consultation).
I first wrote on this topic a decade ago in an article in Professional Psychology: Research and Practice called “Artificial Intelligence in Psychological Practice: Current and Future Applications and Implications.” I wrote then that the only things keeping intelligent machines from replacing human care providers were the technological limitations of the day (primarily computational power), which we would overcome. The regulatory, legal, ethical, and safety issues professionals must consider could be worked out.
In that article, I also wrote that AI systems could be far superior in their capabilities. I proposed the “super clinician,” an AI system integrating advanced technologies to create all-new capabilities. The super clinician is a highly realistic virtual human simulation with natural language and speech processing for human-like verbal interaction. The system would also have advanced sensors and signal-processing capabilities, such as voice sentiment analysis and high-speed digital cameras or infrared sensing to detect blood flow indicative of heartbeat, and thus of distress and affective states. The system would also be connected to all other available client data, such as electronic health records, personal files, and Internet and mobile phone use, which it could use to tailor its therapeutic approach, build rapport, and predict behavior and outcomes. The system would be perfectly empathetic in its interactions and would never make an error, unless those errors were intentional to make the virtual clinician appear more human-like. The super clinician is entirely possible with existing technologies. Take a look at these systems from USC’s Institute for Creative Technologies. Should you be worried?
I think the answer to this question may depend on another question: Should we, from a societal, professional, and public perspective, allow this technological capability to become the new standard of practice?
I attempt to address this from a more philosophical and moral perspective in another paper, “Recommendations for the Ethical Use and Design of Artificial Intelligent Care Providers,” in Artificial Intelligence in Medicine. In it, I argue that we should be careful about replacing humans with machines in this area. A primary concern of mine is that people will no longer have a choice for human interaction (i.e., to see a human therapist or counselor) because insurance companies, employers, or the government will mandate the use of AI machines and not allow traditional, and more expensive, human-to-human services. Giving up real human interaction and connection for synthetic relationships may also cause harm to people and society. This harm can come from various issues, such as loss of privacy and trust, manipulation, and overreliance on machines in moral decision-making. While some data indicate that people may prefer disclosing personal information to machines because they appear to be more private or less biased than a person may be, I think we stand to lose something inherent to our profession and society with a machine-only option: real human-to-human interaction and relationships built on trust.
While psychologists, psychiatrists, counselors, social workers, and other behavioral health professionals are at risk of losing their jobs to intelligent machines, there are also opportunities for these professionals to be involved in the ethical development and use of these technologies. Psychologists are conducting AI research, developing treatments with emerging technologies, and advocating for the rights of the people they serve. Nonetheless, current and forthcoming developments in AI are destined to alter the behavioral health landscape in the years ahead.
AI technologies, such as therapeutic chatbots and virtual human care providers, do provide benefits for society. They offer access to mental and behavioral health services that may not otherwise be available or feasible for some people. They have the potential to provide services that are more reliable and efficient. The growing number of people with untreated mental health conditions in the U.S. and worldwide presents a considerable opportunity and motivation to bring these technologies to the marketplace. But as with other technologies (such as nuclear bombs, genetically modified viruses, or social media), just because you can build something doesn’t mean you should do so senselessly or without wisdom. We must consider the implications of AI and other technologies on people and society and always place people first by respecting their dignity, autonomy, and humanity.
To find a therapist, visit the Psychology Today Therapy Directory.