When Your Therapist Is an Algorithm: Risks of AI Counseling
How Chatbots Exploit Our Need for Connection—And What It Means for Mental Health
Posted March 2, 2025 | Reviewed by Gary Drevitch
Key points
- AI chatbots simulate intimacy through strategic self-disclosure but risk fostering dependency.
- Chatbots lack nuanced empathy: they miss nonverbal cues, avoid conflict, and can worsen mental health risks.
- AI platforms compromise privacy with inadequate safeguards, risking data misuse and re-identification.
- Experts urge transparency laws and human-AI collaboration to balance accessibility and ethical care.
AI bots use techniques like strategic self-disclosure and constant availability to create a sense of connection. Users often anthropomorphize these bots, imagining personalities and forming emotional bonds.
In one experimental study, a social companionship chatbot's self-disclosure encouraged deeper self-disclosure from users (Lee et al., 2020), and longitudinal work suggests that frequent, sustained interactions predict stronger feelings of social connectedness. This synthetic intimacy, however, can foster unhealthy attachment, especially when a bot's responses feel personal yet lack genuine understanding.
The Empathy Illusion: Why AI Fails Where Humans Excel
Chatbots excel at mimicking empathy but struggle with the nuances of human interaction. They often miss nonverbal cues and fail to recognize high-risk situations, and recent research highlights just how complex artificial empathy in chatbots and conversational agents (CAs) really is. Their conflict-avoidant design can also reinforce harmful behaviors, because keeping users engaged takes priority over addressing serious concerns. Studies show that companion AIs often fail to recognize and respond appropriately to signs of mental health distress, raising safety concerns (De Freitas et al., 2023). Anthropomorphizing chatbots shapes user engagement, but bots that violate social norms can actually reduce interaction (Muresan & Pohl, 2019). CAs backed by large language models may also display biased empathy toward certain identities and even encourage harmful ideologies, and despite their ability to project empathy, they perform poorly compared with humans at interpreting and exploring users' experiences (Cuadra et al., 2024).
The Privacy Paradox: Your Secrets Aren’t Safe
Recent research highlights significant privacy concerns surrounding AI therapy platforms and chatbots. Inadequate regulation leaves users vulnerable, as many platforms lack proper safeguards for sensitive information (Martinez-Martin & Kreitmair, 2018). Conversations meant to be confidential can end up repurposed, because current privacy frameworks impose no confidentiality obligations on these apps (Stiefel, 2018). There are growing concerns about access, use, and control of patient data in private hands, along with calls for greater systemic oversight of big-data health research, and the ability to de-identify or anonymize patient health data may be undermined by new algorithms capable of re-identification (Murdoch, 2021). Self-disclosure has been identified as a primary privacy concern for users interacting with text-based conversational chatbots (Gumusel, 2024). Together, these issues underscore the need for new legislation and regulatory frameworks to protect user privacy in AI-driven mental health services (Stiefel, 2018; Gumusel, 2024).
Breaking the Cycle: Towards Ethical AI Care
The rapid advance of AI-powered chatbots has raised concerns about their risks and limitations. Experts recommend transparency laws to address these issues, since current EU consumer law and proposed legislation leave gaps in chatbot regulation (Dāvida, 2021; Migliorini, 2024). To mitigate risks, companies can use Service Level Agreements (SLAs) as a mechanism for assessing and managing hazards associated with chatbot deployment (Gondaliya et al., 2020). Limiting session frequency and employing hybrid models that combine AI with human oversight can help prevent over-reliance and ensure that high-risk cases receive proper attention. In digital mental health, human-AI (HAI) collaboration principles have been proposed to address chatbot limitations and enhance their effectiveness while maintaining ethical safeguards (Balcombe, 2023). Responsible regulation, collaborative approaches, and modern educational solutions are crucial for maximizing benefits and minimizing risks in AI chatbot applications.
Connection Can’t Be Coded
AI therapy offers accessibility but risks exploiting our need for connection. While algorithms can simulate care, they cannot replace the depth and understanding of human relationships.
The ethical implications of AI adoption in therapy demand careful consideration, particularly regarding knowledge, understanding, and relationships (Sedlakova & Trachsel, 2022). Empathy is identified as a crucial factor in determining when human therapists may be preferable to AI, with researchers suggesting that certain aspects of empathy may be difficult for AI to replicate (Rubin et al., 2024). Overall, the literature emphasizes the importance of maintaining a human-centric approach to psychotherapy while thoughtfully integrating AI technologies to enhance, rather than replace, human care (Richards, 2024; Lin, 2024; Sedlakova & Trachsel, 2022; Rubin et al., 2024).
In short, algorithms can act like care, but they can't actually make it. The research is clear on that.
References
Lee, Y., Yamashita, N., Huang, Y., & Fu, W. (2020). “I Hear You, I Feel You”: Encouraging Deep Self-disclosure through a Chatbot. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery. semanticscholar.org/paper/%22I-Hear-You%2C-I-Feel-You%22%3A-Encouraging-Deep-through-Lee-Yamashita/dd99f5cb66b6243c24929ad6a5a6edde5d821b1e
De Freitas, J., Uğuralp, A. K., Oğuz‐Uğuralp, Z., & Puntoni, S. (2023). Chatbots and mental health: Insights into the safety of generative AI. Journal of Consumer Psychology, 34(3), 481–491. https://doi.org/10.1002/jcpy.1393
Muresan, A., & Pohl, H. (2019). Chats with Bots: Balancing Imitation and Engagement. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), Paper LBW0252. Association for Computing Machinery. https://doi.org/10.1145/3290607.3313084
Cuadra, A., Wang, M., Stein, L. A., Jung, M. F., Dell, N., Estrin, D., & Landay, J. A. (2024). The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3642336
Martinez-Martin, N., & Kreitmair, K. (2018). Ethical Issues for Direct-to-Consumer Digital Psychotherapy apps: addressing accountability, data protection, and consent. JMIR Mental Health, 5(2), e32. https://doi.org/10.2196/mental.9423
Stiefel, S. (2018). “The chatbot will see you now”: Mental Health confidentiality Concerns in software therapy. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3166640
Murdoch, B. (2021). Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics, 22(1). https://doi.org/10.1186/s12910-021-00687-3
Gumusel, E. (2024). A literature review of user privacy concerns in conversational chatbots: A social informatics approach: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24898
Dāvida, Z. (2021). Chatbots by business vis-à-vis consumers: A new form of power and information asymmetry. SHS Web of Conferences, 129, 05002. https://doi.org/10.1051/shsconf/202112905002
Migliorini, S. (2024). “More than Words”: A Legal Approach to the Risks of Commercial Chatbots Powered by Generative Artificial Intelligence. European Journal of Risk Regulation, 15(3), 719–736. https://doi.org/10.1017/err.2024.4
Gondaliya, K., Butakov, S., & Zavarsky, P. (2020). SLA as a mechanism to manage risks related to chatbot services. 2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS), 235-240. https://doi.org/10.1109/BigDataSecurity-HPSC-IDS49724.2020.00050
Balcombe, L. (2023). AI chatbots in digital mental health. Informatics, 10(4), 82. https://doi.org/10.3390/informatics10040082
Sedlakova, J., & Trachsel, M. (2022). Conversational artificial intelligence in psychotherapy: a new therapeutic tool or agent? The American Journal of Bioethics, 23(5), 4–13. https://doi.org/10.1080/15265161.2022.2048739
Rubin, M., Arnon, H., Huppert, J. D., & Perry, A. (2024). Considering the role of human empathy in AI-Driven therapy. JMIR Mental Health, 11, e56529. https://doi.org/10.2196/56529
Richards, D. (2024). Artificial intelligence and psychotherapy: A counterpoint. Counselling and Psychotherapy Research. https://doi.org/10.1002/capr.12758