- Current depression screenings tend to be subjective.
- AI can be a useful tool to screen for depression by recognizing biomarkers in the human voice.
- AI might be more accurate than human screenings, although it raises concerns about privacy and ethics.
“Prevention is key.” Wise words often spoken by medical practitioners. There is little argument that the most beneficial outcomes of treatment often result from “catching” an illness in its early stages. For instance, if we feel constant pain in our knee when running, it’s best to see a doctor early as continuing to run may do further damage, possibly leading to surgery.
Mental health conditions are no different but may be trickier to “catch.” Depression can creep up; we may begin to feel fatigued, less motivated, or irritable. Often, we try to power through, blaming other factors such as stress, the weather, or other medical issues until the effects are significant enough that we do require professional assistance. By the time we get to this point, depression may be more difficult to treat. Perhaps we’ve been struggling for weeks, sometimes even years.
The human brain excels at patterns of behavior and consistency. But we can also develop maladaptive patterns, and breaking these consistencies after years of reinforcement poses quite a challenge. What if there were a different way to catch very early signs and symptoms of depression using only the human voice?
Current methods of screening for depression are often subjective, consisting of questionnaires, self-reports, or behavioral observation. Even certain empirically validated psychological batteries carry a subjective bias. This opens the door to “yea-saying” or “nay-saying” in responses (e.g., individuals may overplay or underplay their symptoms). Further, individuals may not be consciously aware of the severity of their symptoms. When asked, “How’s your appetite?” a client may report eating three meals a day, which is considered “normal,” but either doesn’t report or isn’t aware that they are eating significantly less than before. Skilled clinicians are trained not only to ask appropriate follow-up questions but also to assess behavioral cues, including body positioning, eye contact, mannerisms, and voice.
Biomarkers of Speech
A client’s speech is an important part of the “mental status exam” completed during psychological assessment. Clinicians observe qualities of speech and voice such as tonality, volume, cadence, fluency, rhythm, rate, and tone. These markers are important descriptors when assessing levels of depression. But because a clinician must screen a significant amount of information in a short amount of time, subtle or covert cues can be missed. As such, companies like Kintsugi have developed AI voice biomarkers they claim can detect depression with 80 percent accuracy, compared with roughly 50 percent accuracy for human clinicians. More impressive still, they claim all of this can be done with only a few seconds of recorded voice.
Using Artificial Intelligence
The process is simple. A client submits a voice clip a few seconds long. The focus is not on the words said but on how they are said.
According to David Liu, CEO of Sonde Health, “By processing this audio, we can break down a few seconds of voice recording into a signal with thousands of unique characteristics,” a process called audio signal processing. This data then allows scientists to map which vocal features, sounds, and structures, or simply “biomarkers,” correlate with certain illnesses or diseases. The team at Sonde Health uses six biomarkers that assess tiny changes in pitch, inflection, and vocal dynamics; scores on these changes correlate with depression severity. Clinicians can then use this data to begin formulating treatment plans sooner or to make referrals to other services.
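To make the idea of audio signal processing concrete, here is a minimal sketch of one such vocal feature: estimating pitch frame by frame and measuring its variability. This is an illustrative toy, not Sonde Health’s or Kintsugi’s actual pipeline; the function names are hypothetical, the “biomarker” shown (pitch variability) is only one simple example, and a synthetic tone stands in for a real voice clip.

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=50.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / fmax)                 # shortest plausible pitch period (in samples)
    hi = min(int(sr / fmin), len(corr) - 1)  # longest plausible pitch period
    lag = lo + np.argmax(corr[lo:hi])   # lag of the strongest repetition
    return sr / lag                     # period in samples -> frequency in Hz

def pitch_track(signal, sr, frame_ms=40):
    """Split the signal into short frames and return a pitch estimate per frame."""
    n = int(sr * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    return np.array([estimate_pitch(f, sr) for f in frames])

# Demo on a synthetic 150 Hz tone standing in for a few seconds of voice.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
voice = np.sin(2 * np.pi * 150.0 * t)

pitches = pitch_track(voice, sr)
print(round(pitches.mean(), 1))  # close to 150 Hz for this pure tone
print(round(pitches.std(), 2))   # pitch variability: one simple "biomarker"
```

A real system would extract many such features (jitter, shimmer, spectral shape, timing) and feed them to a trained model; this sketch only shows the signal-processing starting point.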
AI and Post-Partum Depression
One interesting area of this AI pursuit is the possible detection of post-partum depression. Currently, it is estimated that roughly 50 percent of women struggle with the “baby blues,” but another 20-30 percent develop a more severe form of depression (Illinois Department of Public Health) that may require medication. For some, it may even mean pursuing higher levels of care, such as hospitalization, if symptoms affect functioning.
Spora Health has been using AI to assist with screenings focusing on health equity. In their all-virtual program, when a patient calls and begins speaking with a clinician, Kintsugi’s AI starts listening and analyzing the voice. After about 20 seconds of listening, the AI software can generate a patient’s PHQ-9 and GAD-7 scores, the screening assessments clinicians use to gauge levels of depression and anxiety. This information is used to create the most appropriate treatment plan, provide referrals if needed, discuss medication if appropriate, or sometimes simply keep a “closer eye” on a patient.
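For context on what such a system outputs, the PHQ-9 itself is a simple instrument: nine items, each rated 0–3 (“not at all” to “nearly every day”), summed to a 0–27 total that maps to a standard severity band. The sketch below implements that published scoring rule; the function name is mine, and how any particular AI product derives the item scores from voice is not shown here.

```python
def phq9_severity(item_scores):
    """Score a PHQ-9: nine items rated 0-3, summed and mapped to a severity band."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores, each 0-3")
    total = sum(item_scores)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

# Example: moderate symptoms across most items.
print(phq9_severity([1, 2, 1, 2, 1, 1, 2, 1, 1]))  # (12, 'moderate')
```

The GAD-7 works the same way for anxiety (seven items, 0–21 total), which is why the two are convenient targets for an automated screener: the output is a small, well-defined score a clinician already knows how to read.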
As interesting and advanced as this technology is, some worry about accuracy and intrusion of privacy. Although Kintsugi claims its AI predicts with 80 percent accuracy, how would this translate to different cultures, languages, or personality differences? How would it handle differential diagnoses? And does retaining voice clips of patients cross the line into intrusion of privacy? Kintsugi promises complete patient privacy and HIPAA compliance, and its continued research and pursuit are notable. As AI continues to advance, Kintsugi’s software is something to keep an eye on, not only in the mental health space but for other medical conditions as well.