
Brain-Computer Interfaces and Neuroscience

Can technology read your mind?

Dr. Chander. Source: Photo credit: Rosso

Can technology read your mind? Divya Chander, M.D., Ph.D., has a surprising answer to that question. Chander is a neuroscientist and physician on the faculty at both Stanford University and Singularity University. She was educated at Harvard, the University of California San Diego/Salk Institute, UCSF, and Stanford. Her postdoctoral work was in optogenetics at the pioneering Deisseroth lab at Stanford University.

In her presentation at Singularity University’s Exponential Medicine, Chander cited a research group at UC Berkeley that used machine learning, a subset of artificial intelligence (AI), together with functional magnetic resonance imaging (fMRI) to try to understand what the brain sees. fMRI measures brain activity indirectly, by detecting hemodynamic changes such as blood oxygenation, volume, and flow. Ironically, what is expediting the recent breakthroughs at the intersection of neuroscience, AI, and fMRI is not new technology. Instead, it is the combination of existing technologies in novel ways that has driven progress toward a brain-computer interface (BCI).

“We can now look into the brain and actually see what it is you are seeing,” said Chander. “Imagine where this technology is taking us … We can now read brains without opening them.”

The UC Berkeley team of Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu, and Jack L. Gallant published “Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies” in Current Biology in 2011. Using a two-stage encoding model, they were able to reconstruct natural movies seen by an observer. First, fMRI activity recorded while the subject watched movies was fed into a computer algorithm that learned to associate visual patterns with brain activity. The trained model was then applied to a large library of YouTube clips, and the clips whose predicted brain activity best matched the observed activity were combined to reconstruct what the subject had seen, with striking results. Remarkably, according to the research team, the findings suggest that a visual brain-computer interface “might be feasible.”
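The two-stage logic described above (fit an encoding model, then score candidate clips against observed brain activity) can be sketched in miniature. This is a toy illustration only: the random "features" and "voxel responses" below stand in for the motion-energy features and fMRI data of the actual study, and the numbers are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: stimulus features of movie clips and the voxel responses
# they evoke. In the real study, features come from filters applied to
# movie frames, and responses come from fMRI recordings.
n_train, n_candidates, n_feat, n_vox = 200, 10, 50, 30
W_true = rng.normal(size=(n_feat, n_vox))      # hidden feature-to-voxel map

X_train = rng.normal(size=(n_train, n_feat))
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))

# Stage 1 (encoding): ridge regression learns to predict voxel activity
# from stimulus features.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                    X_train.T @ Y_train)

# Stage 2 (identification): given new brain activity, score a library of
# candidate clips by how well each clip's predicted activity matches it.
X_lib = rng.normal(size=(n_candidates, n_feat))   # candidate clip features
Y_obs = X_lib @ W_true + 0.1 * rng.normal(size=(n_candidates, n_vox))
Y_pred = X_lib @ W                                # model prediction per clip

def identify(y_obs):
    """Index of the candidate clip whose predicted activity correlates best."""
    corr = [np.corrcoef(y_obs, y_p)[0, 1] for y_p in Y_pred]
    return int(np.argmax(corr))

n_correct = sum(identify(Y_obs[i]) == i for i in range(n_candidates))
```

In this low-noise toy setting the model identifies nearly every clip correctly; the point is the structure of the method, not the accuracy figure.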

Can technology read your dreams?

Chander also referenced a Japanese team of researchers who applied a similar approach, using machine learning, EEG, and fMRI to generate predictions of what a sleeper was dreaming. The algorithm predicted correctly 60 percent of the time [1].
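Decoding dream content of this kind amounts to pattern classification: learn which voxel patterns go with which content categories, then classify new activity. A minimal sketch, again with made-up data and a simple nearest-centroid classifier rather than the researchers' actual method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: each dream-content category (e.g. "car", "person", "food")
# has a characteristic voxel pattern; observed activity is that pattern
# plus noise.
n_categories, n_vox, n_trials = 3, 40, 60
prototypes = rng.normal(size=(n_categories, n_vox))

labels = rng.integers(0, n_categories, size=n_trials)
activity = prototypes[labels] + 1.5 * rng.normal(size=(n_trials, n_vox))

# Fit centroids on even-numbered trials, decode the odd-numbered ones.
train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)
centroids = np.stack([activity[train][labels[train] == c].mean(axis=0)
                      for c in range(n_categories)])

# Assign each held-out trial to the nearest centroid.
dists = ((activity[test][:, None, :] - centroids[None]) ** 2).sum(axis=2)
predicted = dists.argmin(axis=1)
accuracy = (predicted == labels[test]).mean()
```

Chance level here is 33 percent for three categories; the decoder lands well above that, which is the shape of the result the dream study reports (60 percent against a 50 percent chance level in its two-way task).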

Can technology determine consciousness?

Chander stated during her Exponential Medicine presentation that measuring the level of consciousness is feasible. She cited the work of neuroscientist Adrian Owen, who developed a technique that uses fMRI to communicate with patients in a vegetative state by measuring changes in blood flow to certain parts of the brain. “We can now use non-invasive imaging to assess states of consciousness,” said Chander.

Is a Brain-Computer Interface Feasible?

Entrepreneurs and market analysts are betting that a brain-machine interface is achievable. The global BCI market will reach an estimated $1.84 billion by 2023, according to Knowledge Sourcing Intelligence’s December 2017 industry report.

Innovative leaders are actively investing in neuroscience. For example, in 2016 entrepreneur Bryan Johnson invested $100 million of his own fortune in Kernel, a neuroscience startup that aims to merge human and artificial intelligence through a brain-computer interface [2]. Elon Musk followed in 2017, co-founding Neuralink Corporation with the similar goal of building a brain-computer interface [3].

Facebook has a team of 60 engineers seeking to create a brain-computer interface that will allow users to type with their minds, noninvasively [4]. According to TechCrunch, the Facebook team is partnering with several academic research organizations on this project: Johns Hopkins Medicine and Johns Hopkins University’s Applied Physics Laboratory, Washington University School of Medicine in St. Louis, UC Berkeley, and UC San Francisco.

“We are becoming synthetic. We are able to now integrate with electronics and stuff in silicon in order to advance human therapeutics … all because we cracked the neuro code.” – Divya Chander

Neuroscience and artificial intelligence have combined to produce remarkable breakthroughs in brain-computer interfaces. The combined technology can help people with neurological conditions and disabilities alike. Once fine-tuned, these insights can be applied not only in health care and medicine, but also across industries in multiple commercial functions. Humanity is at the beginning of a revolution in neuroscience.

Copyright © 2018 Cami Rosso All rights reserved.

References

1. Stromberg, Joseph. “Scientists Figure Out What You See While You’re Dreaming.” Smithsonian.com. April 4, 2013.

2. Bryan Johnson. https://bryanjohnson.co/. Accessed August 3, 2018.

3. Hull, Dana. “Elon Musk’s Neuralink Gets $27 Million to Build Brain Computers.” Bloomberg. August 25, 2017.

4. Constine, Josh. “Facebook is building brain-computer interfaces for typing and skin-hearing.” TechCrunch. Apr 19, 2017.
