Big Data Meets Behavioral Health

Dr. Big Brother, diagnosing you like it or not.

Posted Jun 13, 2018

Who owns the data that you generate as you engage online and electronically? What about the information that is created ABOUT you? Do you have any ability to see that information, much less delete or control it? By many accounts, this appears to be one of the most far-reaching social and legal battles of modern times. Now, this data is reaching into a place that has had the strongest confidentiality and privacy protections: your mental health.

When Facebook revealed that Cambridge Analytica, along with a host of other groups, had gained access to data on millions of Facebook users, we began to get a sense of the breadth of this issue. Using data generated from and about individual people, these firms were able to direct targeted advertisements and electronic engagements intended to affect the way people vote, purchase, and spend.

B.J. Fogg is a psychologist at Stanford University who created the Stanford Persuasive Technology Lab, where researchers develop techniques to “change humans” and to “change what people think, and what people do...” His work has influenced many in Silicon Valley, as well as modern efforts to use technology in more sophisticated ways to affect behavior, including health. These efforts have led to software and devices that are better at demanding and commanding people’s attention and time. When people stay glued to screens and devices and use social media software and platforms more frequently, the advertisements placed on those platforms get more exposure and generate more revenue.

Dopamine Labs is another group, one that uses “brain hacking” to increase users’ engagement by “giving your users our perfect bursts of dopamine…A burst of dopamine doesn’t just feel good: it’s proven to rewire user behavior and habits.” (That’s not actually how dopamine works, of course, but everyone loves to invoke dopamine to sound smart…)

Documents leaked from Facebook showed that the company was able to identify and respond when users, especially teenagers, felt insecure, worthless, stressed, anxious, or overwhelmed. In 2014, Facebook faced criticism when it conducted an experiment on users to test the effects it could create through “emotional contagion,” altering the emotional cues that users received.

Perhaps it’s not all bad, though? After all, Facebook is now using an artificial intelligence system to detect and identify users who may be suicidal. An individual flagged as making suicidal statements may be contacted by a Facebook moderator, or receive information about local resources or first responders (police or fire rescue). In some cases, Facebook security team members even contact first responders and 911 in the area where the user is located and inform them of the suicide risk they’ve detected. Those first responders may then do a “well-person check” and visit the identified individual. This is a common tool used by mental health professionals when they become aware of risk in a patient. Note that Facebook users cannot currently opt out of this process, and have not consented to having their mental health and risk for suicide monitored. As a licensed clinician, when I meet with a patient, I have to explain, in advance, that if I believe they are a danger to themselves or others, I am mandated to report them. Buried in Facebook’s new terms of service is a note that the company uses its data to “detect when someone needs help,” though the detailed information that Facebook links to in that disclosure does not make clear that emergency responders may be contacted on your behalf.
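
Facebook has not published how its detection system actually works. Purely as an illustration of the general approach, here is a minimal sketch of how a text post might be scored against a list of risk phrases and routed to a human reviewer; the phrase list, weights, and threshold below are hypothetical assumptions, not Facebook's system.

```python
# Illustrative sketch only -- NOT Facebook's actual system.
# A toy example of scoring posts against risk phrases and flagging
# high-scoring posts for review by a human moderator.

RISK_PHRASES = {          # hypothetical phrases and weights
    "want to die": 3,
    "kill myself": 3,
    "no reason to live": 2,
    "saying goodbye": 1,
}

REVIEW_THRESHOLD = 3      # hypothetical cutoff for human review


def risk_score(post_text: str) -> int:
    """Sum the weights of any risk phrases found in the post."""
    text = post_text.lower()
    return sum(weight for phrase, weight in RISK_PHRASES.items() if phrase in text)


def needs_human_review(post_text: str) -> bool:
    """Flag the post for a moderator if its score crosses the threshold."""
    return risk_score(post_text) >= REVIEW_THRESHOLD


if __name__ == "__main__":
    post = "I feel like there's no reason to live and I want to die."
    print(risk_score(post), needs_human_review(post))  # -> 5 True
```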

Mindstrong Health, another Palo Alto startup, has a platform that monitors people’s smartphone use for indications of depression. It was co-founded by Dr. Thomas Insel, a former director of the National Institute of Mental Health. The company is also participating in research looking at the electronic signs of trauma.
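
Mindstrong has likewise not disclosed its models. As a rough sketch of the general idea behind this kind of passive monitoring (sometimes called "digital phenotyping"), the toy code below reduces a day of phone-usage events to a few simple features, such as unlock counts and typing rhythm, that research could in principle relate to mood; every name and threshold here is an assumption made for illustration.

```python
# Illustrative sketch only -- not Mindstrong's method.
# Toy "digital phenotyping": summarize a day of phone-usage events
# into a few interpretable features.

from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class UsageEvent:
    hour: int                 # 0-23, local time of the event
    kind: str                 # e.g. "unlock" or "keystroke"
    interval_ms: float = 0.0  # for keystrokes: gap since the previous key


def daily_features(events: List[UsageEvent]) -> dict:
    """Reduce raw events to daily features a model could consume."""
    unlocks = [e for e in events if e.kind == "unlock"]
    keys = [e for e in events if e.kind == "keystroke"]
    return {
        "unlock_count": len(unlocks),
        "late_night_unlocks": sum(1 for e in unlocks if 1 <= e.hour <= 4),
        "mean_keystroke_gap_ms": mean(e.interval_ms for e in keys) if keys else None,
    }


if __name__ == "__main__":
    day = [UsageEvent(2, "unlock"), UsageEvent(2, "keystroke", 310.0),
           UsageEvent(14, "unlock"), UsageEvent(14, "keystroke", 180.0)]
    print(daily_features(day))
    # {'unlock_count': 2, 'late_night_unlocks': 1, 'mean_keystroke_gap_ms': 245.0}
```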

These are all issues involving user-generated data, where the actual person’s behaviors are being analyzed. But what about data about you and your mental health? That data is out there as well. It has been the subject of numerous data breaches at the VA, at managed care organizations, and at healthcare providers. Data about you, your mental health diagnoses, and the treatments you’ve received, including medications, is all out there for the grabbing.

Other groups are also collecting and sharing information about you and your behavioral health needs. There are several new software and electronic projects that give emergency responders and law enforcement platforms to create databases about citizens with behavioral health issues or needs, for responders to access when they interact with those individuals. RideAlong, one interesting Seattle-based project, has given thoughtful consideration to which law enforcement personnel can access or alter the information kept in its system, and has decided that information such as diagnoses shouldn’t be identified. But there are, as yet, no provisions whereby a person can actually know, or amend, any information about them that is stored in this system. I heard folks from this company present, and while they take confidentiality seriously, they also want to provide law enforcement a tool to share information — about you.

Collective Medical Technologies is an Oregon company with a similar platform, which creates, tracks, and shares a database of robust information about people who go to hospital emergency rooms. The database can only be accessed by healthcare providers and managed care providers, but it contains records of why individuals went to the ER, what treatment they received, and techniques that were useful in de-escalating them if they were agitated. A behavioral health provider such as myself (fair disclosure: I, and the organization I lead, are now a participating healthcare provider in this project) can even be notified if one of my patients shows up in the ER, so that I might be able to reach out and get engaged in their treatment. Again, because this data is generated about persons, not by the person, and is shared by entities subject to confidentiality laws, the individuals themselves do not have to consent, and may not even know that they or their data are in this system.

Data about your history of behavioral health diagnoses and treatment is used to guide decisions made about you by law enforcement, tech companies, managed care insurance companies, states and governments, and many, many other groups who have access to that information. That history might help you get more access to support, but it might also be used to limit your options and choices. Dr. Big Brother, diagnosing you like it or not.

As a behavioral health provider involved in policy at statewide and national levels, I’ve seen this recent rise in data mining and analysis reach into the world of behavioral health. I’m conflicted about it all, to be perfectly honest. I can definitely see many places where this information can save lives, prevent tragedies, and help people get help. But I’m always reminded of dystopian science fiction, where the state or artificial intelligence systems detect signs of unhappiness and make efforts to change or alter those people and their feelings. Do we have a right to be unhappy? Had famous artists and geniuses like Van Gogh, Robin Williams, and Anthony Bourdain — all individuals with known histories of behavioral health needs — been detected and received treatment, they might not have died, and they might have continued to produce amazing work. But what right would they have had to choose to be mentally ill and left alone?

Data about your behavioral, mental, and emotional health ARE being collected and analyzed. Big data sets like this WILL be used. This information isn’t being collected just so it can sit there. How it will be used, and with whose permission, is all part of a big, scary question that, right now, we aren't part of answering.