Artificial Intelligence
Do We Trust AI to Help Make Decisions for Mental Health?
New research suggests learning about AI helps us trust AI a little more.
Posted November 4, 2024 | Reviewed by Jessica Schrader
Key points
- Providing information about AI systems involved in patient care is important for trust and informed consent.
- Trust increased by 5% and distrust decreased by 4% when people received information about AI clinical systems.
- A key factor in trusting AI is "explainability," or the transparency of AI reasoning for its recommendations.
New research shows that learning about AI in mental health decision-making is essential for people to trust AI systems, but, unfortunately, this information has only a modest impact on actually increasing trust. AI-based clinical decision support systems (CDSS) are being developed to help psychiatrists and other mental health clinicians, offering new tools to enhance diagnostic accuracy, risk stratification, and treatment planning. However, implementing AI in mental health and psychiatry raises crucial questions about patient trust and acceptance of this technology. Do patients trust these systems, and how can AI be incorporated without undermining patient confidence?
A recent study published in European Psychiatry explored patient trust in machine learning (ML)-based clinical decision support systems within psychiatric services. The study examined how much trust patients place in AI-driven tools and whether basic information about these systems might improve that trust.
AI in Mental Health Care Decision-Making Will Impact Patient Trust
AI-based clinical decision support systems use machine learning algorithms to analyze electronic health records, clinical notes, and patient-provided data to make evidence-based recommendations. In psychiatry, these systems can help predict risks like hospitalization, suggest diagnoses, and recommend treatment plans. While AI can reduce human error and offer data-driven recommendations, patient trust and safety remain paramount, especially in psychiatry, where the therapeutic relationship is integral.
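To make this concrete, here is a minimal, hypothetical sketch of the kind of model such a system might use: a classifier trained on structured record features to estimate hospitalization risk. The feature names, data, and model choice are illustrative assumptions, not details from the study.

```python
# Minimal sketch of a hypothetical risk-prediction model for a CDSS.
# Features, data, and model choice are illustrative, not from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Toy stand-ins for features an EHR pipeline might extract:
# [prior_admissions, missed_appointments, symptom_severity_score]
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.9, 0.5, 1.2]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new record and surface a probability, not a verdict;
# the clinician retains final oversight over any recommendation.
new_patient = np.array([[1.2, 0.3, 0.8]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated hospitalization risk: {risk:.0%}")
```

The key design point is the last step: the system outputs a probability for a clinician to weigh, rather than an automated decision.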
Fear or distrust of AI, whether about how data is used or about the role of AI in clinical decisions, could weaken the therapeutic relationship. People should be informed in advance if AI is involved in their care and should have control over how their data is used.
Providing Information About AI Only Slightly Improves Trust and Reduces Distrust Toward AI
The study included 992 participants receiving psychiatric care, divided into three groups: the first received an electronic pamphlet with four slides explaining AI-supported decision-making, the second received general information about psychiatric decision-making, and the third received no information. Afterward, each group completed a survey assessing trust and distrust toward AI-based clinical decision support systems in psychiatric services. Questions covered safety concerns, error risk, clinician dependency on AI, and whether participants felt they should have the option to opt out.
Participants who received information about machine learning reported slightly higher trust than those who did not. On average, trust increased by 5% and distrust decreased by 4% among people who received information about AI systems.
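As a back-of-the-envelope illustration of what such a figure means (with simulated ratings, not the study's data), the reported effect amounts to a small shift in mean trust between the informed and uninformed arms:

```python
# Hypothetical illustration of the group comparison behind such figures.
# The ratings below are simulated; only the arithmetic mirrors the design.
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated 1-5 trust ratings: an uninformed arm, and an informed arm
# in which some participants rate their trust one point higher.
control = rng.integers(1, 6, size=300)
informed = np.clip(control + rng.choice([0, 0, 1], size=300), 1, 5)

change = (informed.mean() - control.mean()) / control.mean()
print(f"Relative change in mean trust: {change:+.1%}")
```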
Overall, people were more accepting of AI when human clinicians had final oversight over recommendations. The study also highlighted that a key factor in trusting AI is "explainability," or the transparency of AI reasoning for its recommendations. Providing explainability will be challenging, however, due to the “black box” nature of many AI systems.
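One way to picture what explainability could look like in practice is a model whose per-feature contributions can be read off directly, as in the hypothetical linear sketch below; the feature names are invented for illustration, and nothing here is taken from the study.

```python
# Hypothetical sketch of explainability for a linear model: each feature's
# contribution to the risk score can be shown to the clinician directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["prior_admissions", "missed_appointments", "symptom_severity"]

rng = np.random.default_rng(seed=2)
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.9, 0.5, 1.2]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Per-feature log-odds contributions for one patient record, largest first.
patient = np.array([1.2, 0.3, 0.8])
contributions = model.coef_[0] * patient
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.2f} toward the recommendation")
```

Tools such as SHAP extend this idea to non-linear models, but the core requirement is the same: the system must be able to show which inputs drove its recommendation, something deep "black box" models do not offer without additional tooling.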
Trust Varies Across Demographics and Conditions
Interestingly, the impact of information on trust varied across demographics. Women tended to report higher levels of trust in AI after receiving information, whereas men, who generally reported higher baseline familiarity with AI and machine learning, showed little change in trust post-intervention. Participants with mood or anxiety disorders showed a greater increase in trust than those with psychotic disorders; the latter group may have higher baseline levels of distrust toward psychiatric services in general.
Future Directions
Trust is vital for the successful implementation and integration of AI in mental health care. While AI holds promise, its integration into psychiatry raises unique challenges and ethical questions around transparency and informed consent. The study highlights the need to inform people about AI early and to give them autonomy over data usage and participation. People will want to know whether and how AI tools are being used in their clinical care and may prefer opt-out options.
As mental health care becomes more data-driven, it is crucial to keep trust at the center of the therapeutic relationship and to ensure that the patient/client-clinician relationship remains collaborative, informed, and ethical.
For policymakers and health care providers, these findings underscore that clear communication and explainability of AI-based clinical decision support systems are critical to their successful integration in mental health care.
Marlynn Wei, MD PLLC © Copyright 2024. All rights reserved.
References
Perfalk E, Bernstorff M, Danielsen AA, Østergaard SD. Patient trust in the use of machine learning-based clinical decision support systems in psychiatric services: A randomized survey experiment. Eur Psychiatry. 2024 Oct 25;67(1):e72. doi: 10.1192/j.eurpsy.2024.1790. PMID: 39450771.