“Neurorights” and Why We Need Them
Scientists are working to protect our rights in light of new technologies.
Posted June 25, 2020
Most of us, especially those with an interest in technology, are fascinated by all that AI (artificial intelligence) and neurotechnologies are achieving. We use these technologies on a daily basis, more than we even realize.
For instance, I use Google’s AI-powered predictions pretty much every day to figure out the fastest way to get to work, avoiding sitting in traffic and getting increasingly mad at every single human around me. I bet you have used ride-shares at some point. Guess what? Those apps are also AI-powered. The same goes for your spam filters, or the online check deposit feature, to mention just a few things that AI is doing for you. AI is also making online content more accessible to people with disabilities, for instance through HTML alt text for visually impaired folks, or through apps that enable speech for those with impairments caused by injuries such as stroke or by long-term conditions like Parkinson’s disease.
Meanwhile, neurotechnologies aim at manipulating the brain and the mind through technological advances. The rise and development of neurotechnologies have been astounding. An example of neurotechnology is deep brain stimulation (DBS), a technique that consists of implanting electrodes in the brain. It is commonly used to treat conditions like epilepsy and Parkinson’s disease in patients who don’t respond well to medication. The benefits of DBS can be life-changing. Because of the exciting possibilities that these kinds of technologies offer, many companies have started investing more resources in developing and commercializing them.
It seems that AI and neurotechnologies are revolutionizing the world for the better. Or are they?
What Are Neurorights?
Rafael Yuste, a professor at the NeuroTechnology Center at Columbia University in New York who is involved with the BRAIN Initiative, realized that much concern surrounded the rapid advances in AI and their possible encroachment into neurotechnology, but that no one was addressing those concerns in any systematic way.
Together with Sara Goering, an associate professor at the University of Washington, and a group of concerned scientists, Yuste realized that AI and neurotechnologies were closely linked, and that there were no ethical regulations on how they could be jointly applied. Regulating each application on a case-by-case basis was impractical, so they started drafting the ethical foundations that would regulate the use and development of advanced neurotechnologies: what we now call “neurorights.”
The five ethical principles that comprise neurorights are:
Privacy and Consent. Neurotechnologies can acquire a lot of data from their users (did you know that how you use your smartphone can be used in research studies on behavior?), and these data need to be protected. An individual’s data should not be shared with third parties without their consent, and individuals should be able to opt out of sharing altogether, to avoid the kinds of messy situations we have seen in the past with other forms of AI, as in the Facebook–Cambridge Analytica data scandal.
Identity. This represents our ability to think for ourselves and feel like ourselves, regardless of how a neurotechnology is applied. For example, patients treated with DBS have reported increased depression, suicidal tendencies, anxiety, emotional hyperactivity, and hypomania. Though these symptoms are not experienced by every patient who undergoes DBS, they are concerning enough to raise the question: Are the implanted electrodes responsible for these changes in personality?
Free Will. Patients receive appropriate information about the side effects and possible risks that having electrodes implanted in the brain entails. However, these adverse emotional responses are generally not included in the information presented when the patient’s consent must be granted.
So who’s at fault if the patient experiences issues with their own free will and unexpected emotional responses? Will they even know this is due to the DBS? Who is held accountable for potentially terrible outcomes? Establishing neurorights will safeguard the agency of patients through responsible development of neurotechnologies.
Augmentation. Some neurotechnologies are being developed with the purpose of enhancing cognitive capabilities. Think of it as “cognitive doping.” We need to draw the line on when, and how, these “enhancing neurotechnologies” can be used appropriately, and prevent possible inequities between those who choose or can afford cognitive augmentation and those who cannot.
Bias. Neurotechnologies are developed by humans, and humans are biased. We need to ensure that neurotechnologies can be created free from bias.
As an example, imagine asking Google which gift would be best for your mom on her birthday, and all that Google suggested were cleaning products. We recognize this as being highly problematic, but this understanding needs to be built into the neurotechnologies as well.
More recently, it has come to light that some facial recognition technologies have racial biases built in. Neurorights aim to prevent this type of situation with neurotechnologies.
Why Do We Need Neurorights?
With the growing businesses around brain-machine interfaces, like the Black Mirror-esque efforts of Elon Musk to develop technology that, supposedly, can learn to read our minds, it seems necessary to push lawmakers and human rights advocates to be, for once, ahead of technology. We need to be able to foresee the potential disasters that AI and unregulated neurotechnologies could bring upon us.
Dr. Rafael Yuste says it loud and clear: “This is an urgent matter. This is not science fiction, and we urgently need some sort of regulation. There are many things being done right now that we know nothing about, and many others that we do know about and that are very worrisome. If we don’t do anything, we will end up in the typical situation where it is too late to solve anything. Many companies are now designing devices that read brain signals to control robotic equipment and that decode intentions and thoughts in order to use them to control technology.”
For instance, it has been just over two years since Facebook recruited a team to work on developing thought-to-type technology. Though the company assures us that this is not meant to be for “reading the thoughts of their users,” that is exactly what it has been making progress on since 2017. This is why Dr. Yuste urges governments to “keep abreast of the progress in science and technology, and be prepared to respond.”
What Can We Do?
Currently, the Chilean government is working with Yuste and the NeuroRights Initiative to amend its Constitution to cover neurorights. If everything goes according to plan, the Chilean Constitution will be the first in the world to specifically address the regulation of neurotechnologies in order to protect its citizens. Projects like Facebook’s thought-to-type technology would be illegal in Chile.
“The physical and mental integrity allows people to fully enjoy their individual identity, and the right to act in a self-determined manner. No authority or individual may, by itself or through any technological mechanism, increase, decrease or disturb that individual integrity. Only the law may establish the requirements to limit this right, and the requirements that consent must fulfill in these cases.”
Article 19 of the Chilean Constitution
It is difficult to convince authorities and societies of the dangers of neurotechnologies, as most think of them as science-fiction material. In Chile, societal support was instrumental to convince the government of the importance of neurorights. Yuste says that as citizens, we can advocate for neurorights and pressure governments to write laws to protect us.
The ultimate goal that Dr. Yuste is pursuing is to include neurorights in the Universal Declaration of Human Rights in order to establish clear guidelines on what should be protected when it comes to neurotechnologies and how they can impact our capabilities, individual identity (our bodily and mental integrity), agency (our ability to choose our actions), and ultimately our lives. In addition, Yuste is spearheading the creation of the NeuroRights Initiative at the NeuroTechnology Center, which will be dedicated to achieving these goals.
However, we are not totally doomed. As Rafael Yuste said during his speech in April at the Inter-Parliamentary Union Assembly in Qatar, “despite what might have sounded like a pessimistic view laid out thus far, I believe the double revolutions in Neurotechnology and AI could be a new Renaissance, since understanding the brain will let us understand who we really are from the inside, […] and this could lead to a new humanism. There will be major implications with positive outcomes for education, science, medicine, law, economy, society, and also international relations since the root of all conflict is often misunderstanding.”
I want to thank Professor Rafael Yuste for taking the time to tell me about neurorights, and his advocacy endeavors.
Original article published in NeuWrite San Diego.
Yuste, R., & Goering, S. (2016). On the Necessity of Ethical Guidelines for Novel Neurotechnologies. Cell.
Yuste, R., et al. (2017). Four ethical priorities for neurotechnologies and AI. Nature.
Harari, G. M., Lane, N. D., Wang, R., Crosier, B. S., Campbell, A. T., & Gosling, S. D. (2016). Using Smartphones to Collect Behavioral Data in Psychological Science: Opportunities, Practical Considerations, and Challenges. Perspectives on Psychological Science, 11(6), 838–854.
Klaming, L., & Haselager, P. (2013). Did My Brain Implant Make Me Do It? Questions Raised by DBS Regarding Psychological Continuity, Responsibility for Action and Mental Competence. Neuroethics, 6, 527.