
Digital Health and the Rise of Mental Health Apps

New research warns that self-diagnosing apps are unreliable and may overtreat.


The number of mental health apps available to Internet users has exploded in recent years, with hundreds of downloadable programs pitched at those struggling with depression and anxiety, isolation and addiction. There are apps today to track our moods and our heart rate. Apps to monitor alcohol intake and promote well-being. And apps that purport to diagnose reliably while helping to relieve symptoms. Many of them are very popular—but are their recommendations sound?

Much of the research on digital health takes as a given that the apps have vast, untapped potential needing only cannier marketing and programming to realize fully. Certainly, they may help reach the isolated and the underserved in rural and low-income regions. And their personalized advice could carry enormous weight for those already heavily attached to their smartphones—an in-built placebo likely to increase the apps’ perceived value and authority.

In such an environment, to fret about the risk of medical error and unintended consequences, the overdiagnosis of ordinary behaviors, and the rise of depersonalized, algorithmic care could seem overly exacting. In the absence of around-the-clock treatment, with mental health services stretched to the limit, a free or low-cost app can seem just the ticket to interrupt negative thinking with a different, perhaps life-saving perspective.

Still, the American Psychiatric Association is sufficiently concerned about the kind of advice and diagnoses given that it set up a “Smartphone App Evaluation Task Force,” whose chair has warned, “Right now it almost feels like the Wild West of health care.” The apps may lead to excessive self-monitoring without professional guidance or contradiction, with self-diagnosis likely to eclipse supervised care. Meanwhile, the responsibility for following such advice falls squarely on the individual, whose stressors are generally presented in isolation, without offsetting social or environmental factors.

A just-published qualitative analysis of 61 mental-health apps lends even stronger weight to such concerns. In the study, appearing in the latest issue of Annals of Family Medicine, lead author Lisa Parker of the Sydney School of Pharmacy and colleagues from across Australia focused on mental health apps available online in the U.S., Canada, Australia, and the UK. Of central concern to them was how the apps defined mental health and what they signaled as contributing factors to mental illness.

“Mental health problems were framed as present in everyone,” the researchers determined, “but everyone was represented as employed, white, and in a family.” “Only a few apps implied that mental health symptoms might be a normal reaction to external stress.” Far more common was a push to encourage self-monitoring, with a broader impulse to put “normal life…under the purview of clinical care.”

Because of the risk of making highly consequential diagnoses, 30 of the apps (49 percent of those studied) “provided disclaimers absolving themselves of responsibility” for any associated harm. “We give no representation or warranties about the accuracy, completeness, or suitability for any purpose [of our advice],” one company writes in standard boilerplate.

Loss of privacy is another concern, with many such apps reserving the right to cull and sell “anonymized” data portraits of their users. As Adam Tanner explains in Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records (2017), with the rise of software able to cross-reference and recontextualize such data points within seconds, the notion that we can share intimate details with our apps with fail-safe anonymity is an illusion best exposed to protect patients from a range of risks. These include the potential for discrimination by employers, as well as vulnerability to targeted advertising for products said to treat the condition that the app has diagnosed, perhaps inaccurately.

“The apps we assessed tended to encourage frequent use and promoted personal responsibility for improvement,” the researchers found. “The idea that the normal ups and downs of everyday life need treatment could drive use of these apps by people with minor concerns,” with a high probability of generating overdiagnosis and overtreatment.

Advocates and researchers like to frame such apps as “a way of people getting access to treatment that’s flexible and fits in with their lifestyle and also deals with the issues around stigma.” There are, we’ve seen, good reasons for supporting such aims. But when their marketing and programming make mental health problems appear routine and ubiquitous, Dr. Parker and her colleagues explain, the apps implicitly “promote the medicalization of normal mental states.”

That would be a win-win for the app makers, increasing users’ dependency on their diagnoses and their susceptibility to recommended treatments. If the latest study’s findings are replicated elsewhere, as seems likely given the high risk of misdiagnosis, the gains for individual and public health are far less assured.


Parker, L., L. Bero, D. Gillies, M. Raven, B. Mintzes, J. Jureidini, and Q. Grundy. “Mental Health Messages in Prominent Mental Health Apps.” Ann Fam Med 16.4 (July-Aug. 2018), 338-42. doi: 10.1370/afm.2260 [Link]

Tanner, A. Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records. Boston: Beacon, 2017.
