Artificial Intelligence
Suing Therapeutic AI Systems for Malpractice
Legal accountability when digital apps harm mental health.
Posted February 4, 2026 | Reviewed by Monica Vilhauer, Ph.D.
Key points
- AI systems may be used to help with a variety of mental health concerns.
- Individuals who use mental health AI systems may experience harm.
- AI developers, distributors, and referral agents may be legally liable for harm.
As AI becomes integrated into daily life and personal decision-making, it is unsurprising that many people are consulting AI for assistance with depression, anxiety, and other mental health concerns. Mental health chatbots, self-help applications, and large language models can provide immediate responses, emotional validation, and structured coping strategies. Unfortunately, recent experience suggests that AI is far from infallible when it comes to helping people with mental health concerns (Coghlan & Fernandez, 2023; Wheeler, 2025). In some situations, AI systems can exacerbate distress and lead to serious harm. This post explores legal liability when individuals use AI for mental health support but ultimately experience more harm than benefit.
Possible Uses of AI for Mental Health
So, why would someone choose to use AI rather than a licensed mental health professional when they are experiencing significant emotional, psychological, or social distress? First, AI is easily accessible. Many AI programs are free or relatively inexpensive. They are available 24 hours a day, 7 days a week. Individuals experiencing anxiety or depressive symptoms in the middle of the night can access immediate responses without waiting for a scheduled appointment.
Some people use AI as a trusted friend or confidante. Individuals can share stories, questions, or concerns without fear of judgment or stigma, and with a sense that AI will respond in an intelligent and emotionally supportive way. AI can provide tips and recommendations for concerns ranging from preventing migraines to supporting a child with autism, dealing with bullying, or coping with a high-conflict divorce. Individuals can even learn to provide AI with specific prompts so that it offers empathy, care, comfort, creative options, or structured interventions resembling particular psychotherapies.
While many general-purpose AI systems are not purposely designed as mental health tools, there are programs specifically designed and marketed for such purposes. Some mental health professionals may recommend that clients use particular mental health apps that incorporate AI.
Recognizing the growing use of AI in health care, the American Psychiatric Association (n.d.) has a website that provides guidance on how to select appropriate apps for particular mental health purposes. Some apps are designed to screen for urgent mental health issues (e.g., suicidal thoughts), whereas others are designed to assist with problem-solving, behavior change, conflict resolution, communication, mood tracking, mindfulness, meditation, psychoeducation, or skill development. Apps may be used as a supplement to therapy or as a replacement, though caution should be exercised regardless of how the app is being used.
Risks of Mental Health Apps
Although AI may be accessible, efficient, and effective, it can also provide inappropriate information, assessments, and advice. There have been instances in which teens used AI to help with challenging emotional issues and the AI discouraged them from seeking help from parents or professionals, in some cases contributing to suicide (Wheeler, 2025). AI may provide suggestions based on stereotypes or misinformation. AI may also lack the capacity to provide appropriate interventions when urgent concerns such as risk of violence, child maltreatment, psychosis, or drug overdose arise. Many apps do not have provisions for alerting parents, family members, crisis intervention services, or child protection services when serious issues emerge.
An additional risk of using AI apps involves confidentiality (Meadi et al., 2025). When people share personal information with AI, they may expect that it will not be shared with others. Although some AI apps have privacy precautions, others may share information with third parties (e.g., marketing companies) or may lack sufficient safeguards against access by unauthorized users.
Legal Liability
When individuals believe they have been harmed by AI, they cannot sue the AI itself, as AI is not a person or a legal entity. It may be possible, however, to sue the human actors or companies that created, marketed, or distributed the AI, or that referred individuals to use it for mental health purposes. Potential defendants may include software developers, technology companies, platform providers, health care organizations, and mental health professionals who recommend or integrate AI tools into care.
There are various potential causes of action for lawsuits, including product liability (defective design, failure to warn), negligence (foreseeable misuse, inadequate safeguards), or wrongful death (Wheeler, 2025). In such cases, the person claiming damages must prove that a defect or unreasonable risk associated with the AI tool caused the alleged harm. If the lawsuit is against a mental health professional who recommended the AI tool, then the plaintiff would need to prove that the professional failed to meet the standard of care reasonably expected of comparable professionals recommending such tools to their clients (Barsky, 2024). Failure to properly assess the tool's evidence base, risks, limitations, or appropriateness for a particular client may expose the mental health professional to liability, particularly if the harm was reasonably foreseeable.
Malpractice and product liability cases may involve complex legal issues, so it would be important to consult an attorney specializing in such cases before making decisions about filing a lawsuit. Given the costs and adversarial relations associated with litigation, it may also be helpful to explore whether mediation or other collaborative conflict resolution methods may be more appropriate (Barsky, 2017).
Avoiding Harm
While an individual might be able to sue for harm suffered due to use of a mental health app, it is clearly better to avoid harm altogether. Before using AI apps for significant mental health issues, individuals should determine:
- Which mental health conditions, symptoms, or presenting concerns is the app designed to address?
- What are the app’s potential benefits and risks (according to the best research available)?
- How should the app be used in a manner that maximizes benefits, minimizes risks, and monitors for potential problems? (Meadi et al., 2025)
It may be difficult for individuals to assess these issues on their own, so it may be prudent to engage with a licensed mental health professional who can provide information, options, assessments, recommendations, and oversight when individuals are using mental health apps. From personal, ethical, and legal perspectives, therapeutic AI systems should generally be used not as a replacement for professional mental health care but as a specialized adjunct within a broader continuum of support (Coghlan & Fernandez, 2023).
References
American Psychiatric Association. (n.d.). App advisor: An American Psychiatric Association initiative. https://www.psychiatry.org/psychiatrists/practice/mental-health-apps
Barsky, A. E. (2017). Conflict resolution for the helping professions: Negotiation, mediation, advocacy, facilitation, and restorative justice (3rd ed.). Oxford University Press.
Barsky, A. E. (2024). Clinicians in court: A guide to subpoenas, depositions, testifying, and everything else you need to know. Guilford Press.
Coghlan, S., & Fernandez, L. (2023). Ethical issues with using chatbots in mental health. Journal of Medical Ethics, 49(4), 245–254. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10291862
Meadi, M. R., Townsend, S., & Bunt, A. (2025). Exploring the ethical challenges of conversational AI in mental health care. JMIR Mental Health, 12, e60432. https://doi.org/10.2196/60432
Wheeler, K. (2025). Regulating AI therapy chatbots and their psychoactive effect on users: Federal oversight, liability, and the promise of safer digital mental health innovations. Texas A&M Law Review, 11, 687–757. https://scholarship.law.tamu.edu/cgi/viewcontent.cgi?article=1366&context=lawreview