
AI Detects Cognitive Distortions in Text Messages

AI has potential for assisting mental health providers in clinical settings.

Source: StockSnap/Pixabay

Artificial intelligence (AI) is now able to detect cognitive distortions in text messages. A new study published in Psychiatric Services, an American Psychiatric Association peer-reviewed journal, shows that AI natural language processing (NLP) can detect cognitive distortions in texts as effectively as human clinicians.

“Recent advancements in mobile phone–based mental health interventions, combined with advancements in computational methods of language analysis, have created new possibilities for developing technology-assisted interventions,” wrote the researchers from the University of Washington School of Medicine.

The American Psychological Association defines a cognitive distortion as faulty or inaccurate thinking, beliefs, or perceptions, which occur in all people to some degree. There are numerous ways a person’s thoughts may be inaccurate or skewed. Common cognitive distortions include all-or-nothing (polarized) thinking, always being right, jumping to conclusions, overgeneralization, magnification (catastrophizing), minimization, labeling and mislabeling, personalization, fortune telling, mental filtering, disqualifying the positive, emotional reasoning, “should” statements, control fallacies, the fallacy of fairness, the fallacy of change, the halo effect, blaming others, self-serving bias, biased implicit attitudes, and many others.

For this study, the researchers focused on five common cognitive distortions: catastrophizing, mental filtering, jumping to conclusions, overgeneralizing, and “should” statements. Catastrophizing, or magnification, is assuming the worst-case scenario. Mental filtering focuses on the negative aspects of a situation and filters out all of the positive ones. Jumping to conclusions is reaching an unwarranted conclusion from minimal data. Overgeneralizing applies one event to all other events. “Should” statements occur when people think they ought to be doing, saying, or thinking something other than what they currently are.

“This is the first study to apply natural language processing to text messages between people with serious mental illness and their clinicians, with the goal of identifying cognitive distortions,” the researchers wrote. While the distortions studied occur in the general population, they are often more evident and severe in those who have been diagnosed with a psychiatric condition.

More than 7,350 text messages exchanged between 39 patients with serious mental health conditions (bipolar disorder, schizophrenia, schizoaffective disorder, or major depressive disorder) and their mental health providers were collected in a randomized controlled trial over a 12-week period. Human clinicians from mental health agencies then labeled each message according to those five categories.

The scientists created three AI natural language processing classification models to compare against the human annotators: a bidirectional encoder representations from transformers (BERT) model, logistic regression (LR) with term frequency–inverse document frequency (TF-IDF) features, and a support vector machine (SVM) with input features generated by Sentence-BERT without fine-tuning.
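To give a sense of what the simplest of those baselines looks like in practice, here is a minimal sketch of a TF-IDF plus logistic regression text classifier built with scikit-learn. The training messages and the binary "catastrophizing" labels are invented for this illustration; they are not the study's data, and the model settings are not the study's configuration.

```python
# Minimal sketch of the LR + TF-IDF baseline approach: vectorize each
# message into TF-IDF features, then fit a logistic regression classifier.
# Toy data only -- not the study's corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

train_texts = [
    "If I fail this test my whole life is ruined",
    "Missing the bus means the entire day is a disaster",
    "I had a pretty good morning today",
    "The meeting went fine and we made progress",
]
train_labels = [1, 1, 0, 0]  # 1 = catastrophizing, 0 = no distortion

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigrams + bigrams
    ("lr", LogisticRegression(max_iter=1000)),
])
clf.fit(train_texts, train_labels)

# Classify a new, unseen message.
pred = clf.predict(["One mistake and everything is ruined"])
```

In the study, each distortion category would have its own labeled examples drawn from thousands of annotated messages; a toy set this small is only meant to show the pipeline's shape.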

“The performance of BERT was comparable to that of clinical raters for any distortion, mental filtering, jumping to conclusions, and catastrophizing, indicating that the BERT framework (with pretraining) is a good fit for this task,” the researchers reported.

The BERT model outperformed the other NLP models across all of the cognitive distortion labels. According to the researchers, BERT's performance was as good as that of the human clinicians.
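Comparing a model's output against clinician labels, as the researchers did when reporting that BERT matched the human raters, typically comes down to a standard metric computation such as the F1 score. The label arrays below are toy values chosen for this sketch, not the study's results.

```python
# Sketch of scoring a model against clinician annotations with F1,
# a common choice when positive examples (distorted messages) are rare.
from sklearn.metrics import f1_score

clinician_labels = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = distortion present
model_predictions = [1, 0, 1, 0, 0, 0, 1, 0]  # toy model output

# F1 is the harmonic mean of precision and recall: here the model finds
# 3 of the 4 distorted messages with no false alarms.
score = f1_score(clinician_labels, model_predictions)
print(round(score, 3))  # 0.857
```

A per-category version of this comparison (one score for catastrophizing, one for mental filtering, and so on) is what lets researchers say a model matches clinical raters on some distortions but not others.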

“This study demonstrated that NLP models can identify cognitive distortions in text messages between people with serious mental illness and clinicians at a level comparable to that of clinically trained raters,” the researchers concluded.

Copyright © 2022 Cami Rosso All rights reserved.
