Breakthrough Brain-Computer Interface Decodes Self-Talk
BCI decodes instructed imagined speech from brain activity with high accuracy.
Posted August 22, 2025 | Reviewed by Monica Vilhauer, Ph.D.
Brain-computer interfaces (BCIs) are cutting-edge assistive technologies that offer hope to people who have lost the ability to speak or move due to neurodegenerative diseases, neurological disorders, or traumatic brain injuries. A landmark new study led by Stanford Medicine neuroscientists demonstrates a BCI capable of decoding instructed inner speech on command with up to 74% accuracy.
“We discovered that inner speech is robustly represented and demonstrated a proof-of-concept real-time inner-speech BCI that can decode self-paced imagined sentences from a large vocabulary (125,000 words),” wrote the study’s senior author Frank Willett, PhD, Co-Director of the Neural Prosthetics Translational Laboratory and Assistant Professor of Neurosurgery at Stanford University. Willett collaborated with a team of more than 20 scientists at Stanford Medicine, Massachusetts General Hospital, Harvard Medical School, Emory University, Georgia Institute of Technology, University of California, Davis, and Brown University.
In addition to Willett, the study co-authors include Erin Kunz, Benyamin Abramovich Krasa, Foram Kamdar, Donald Avansino, Nick Hahn, Seonghyun Yoon, Akansha Singh, Samuel Nason-Tomaszewski, Nicholas Card, Justin Jude, Brandon Jacques, Payton Bechefsky, Carrina Iacobacci, Leigh Hochberg, Daniel Rubin, Ziv Williams, David Brandman, Sergey Stavisky, Nicholas AuYong, Chethan Pandarinath, Shaul Druckmann, and Jaimie Henderson.
The researchers report that, when decoding inner speech in real time from the 125,000-word vocabulary, the BCI achieved a word error rate as low as 26%.
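For readers unfamiliar with the metric, word error rate is the number of word substitutions, deletions, and insertions needed to turn the decoded sentence into the reference sentence, divided by the number of words in the reference. Here is a minimal Python sketch of that calculation; the function and example sentences are illustrative, not taken from the study.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate = (substitutions + deletions + insertions) / reference length,
    computed as a word-level Levenshtein (edit) distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # delete every reference word
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insert every hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match or substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four reference words gives a 25% word error rate.
print(word_error_rate("i want a drink", "i want a blink"))  # 0.25
```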
Brain-computer interfaces enable a person to control external devices with their thoughts, improving the quality of daily living through tasks such as operating wheelchairs, robotic limbs, computers, and smartphones.
Many existing BCI systems rely on recordings of neural activity while the patient attempts to speak. The researchers in this study instead sought to decode imagined speech, also known as inner speech, inner monologue, self-talk, silent speech, internal speech, speech imagery, inner voice, internal monologue, verbal thinking, covert self-talk, and internal dialogue.
Four study participants with tetraplegia who were enrolled in the BrainGate2 feasibility clinical trial had BrainGate Neural Interface System sensors implanted in speech-related areas of cortex, the motor cortex, to record brain activity.
The participants included two men and one woman diagnosed with amyotrophic lateral sclerosis (ALS), and one woman with tetraplegia and dysarthria following a stroke. Analog neural signals were digitized via the NeuroPlex E system by Blackrock Microsystems while the participants were asked either to attempt to speak or to imagine speaking.
The researchers discovered that the neural representations of imagined speech and attempted speech are highly correlated, yet can be distinguished along a neural dimension that represents motor intention.
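To see how two highly correlated neural patterns can still be separated along a single dimension, consider the toy numerical sketch below, which uses synthetic data. The channel count, signal magnitudes, and the difference-of-means method are illustrative assumptions, not the study’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels = 100, 64

# A fixed "motor intent" direction in neural state space (hypothetical).
intent_axis = rng.normal(size=n_channels)
intent_axis /= np.linalg.norm(intent_axis)

# Toy data: each trial evokes the same shared pattern in both conditions,
# but attempted speech adds a large shift along the intent axis while
# imagined speech adds only a small one.
shared = rng.normal(size=(n_trials, n_channels))
attempted = shared + 2.0 * intent_axis + 0.1 * rng.normal(size=(n_trials, n_channels))
imagined = shared + 0.5 * intent_axis + 0.1 * rng.normal(size=(n_trials, n_channels))

# The two representations are highly correlated overall...
r = np.corrcoef(attempted.ravel(), imagined.ravel())[0, 1]

# ...yet projecting onto one dimension (here, the difference of the
# condition means) cleanly separates attempted from imagined trials.
w = attempted.mean(axis=0) - imagined.mean(axis=0)
w /= np.linalg.norm(w)
print(f"correlation between conditions: {r:.2f}")
print(f"mean attempted projection: {(attempted @ w).mean():.2f}")
print(f"mean imagined  projection: {(imagined @ w).mean():.2f}")
```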
“We investigated the possibility of decoding private inner speech and found that some aspects of free-form inner speech could be decoded during sequence recall and counting tasks,” the researchers reported.
An interesting finding was that unintended decoding of imagined speech could be prevented by requiring the user to think of a keyword to unlock the brain-computer interface. With one participant, the keyword strategy worked with up to 98.75% accuracy in real-time experiments.
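Conceptually, the keyword acts as a software gate in front of the decoder’s output: nothing is displayed until the unlock phrase is detected. A minimal sketch of that gating logic follows; the function, the phrase stream, and the unlock phrase are all hypothetical, and the study’s implementation details differ.

```python
def gated_output(decoded_phrases, unlock_keyword="open sesame"):
    """Yield decoded text only after the user imagines the unlock keyword;
    imagining the keyword again locks the interface once more."""
    unlocked = False
    for phrase in decoded_phrases:
        if phrase == unlock_keyword:
            unlocked = not unlocked   # toggle the gate
            continue
        if unlocked:
            yield phrase              # pass through to the display
        # otherwise the decoded phrase is discarded, never shown

# Hypothetical stream of decoded inner speech:
stream = ["random thought", "open sesame", "i need water",
          "open sesame", "private musing"]
print(list(gated_output(stream)))  # ['i need water']
```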
Brain-computer interfaces use artificial intelligence (AI) to identify patterns in noisy recordings of brain activity and predict intended speech. The scientists in this study used a five-layer recurrent neural network (RNN) to convert brain activity during imagined speech into a time series of phoneme probabilities. An RNN is a type of deep learning model that produces sequential predictions from sequential inputs and is often used for speech recognition, natural language processing (NLP), image captioning, and sentiment analysis.
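As a rough illustration of that pipeline, the sketch below maps a sequence of neural feature vectors to per-timestep phoneme probabilities using five stacked recurrent layers in PyTorch. The GRU cell type, layer sizes, feature dimensions, and phoneme count are illustrative assumptions, not the study’s actual architecture.

```python
import torch
import torch.nn as nn

class InnerSpeechDecoder(nn.Module):
    """Minimal sketch: five stacked recurrent layers map a sequence of
    neural feature vectors to per-timestep phoneme probabilities."""
    def __init__(self, n_features=256, hidden=512, n_phonemes=41):
        super().__init__()
        # Five recurrent layers, echoing the article's description.
        self.rnn = nn.GRU(n_features, hidden, num_layers=5, batch_first=True)
        self.readout = nn.Linear(hidden, n_phonemes)  # phoneme logits

    def forward(self, neural_features):
        # neural_features: (batch, time, n_features)
        hidden_states, _ = self.rnn(neural_features)
        logits = self.readout(hidden_states)
        return torch.softmax(logits, dim=-1)  # (batch, time, n_phonemes)

# Example: one second of binned neural features from 256 channels.
x = torch.randn(1, 50, 256)
phoneme_probs = InnerSpeechDecoder()(x)
print(phoneme_probs.shape)  # torch.Size([1, 50, 41])
```

In systems of this kind, the phoneme probabilities are typically passed to a language model that searches for the most likely sentence within the allowed vocabulary, which is how a 125,000-word vocabulary becomes tractable.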
With this breakthrough, the scientists have demonstrated that brain-computer interfaces can decode instructed imagined speech from a large vocabulary, paving the way for faster, more robust assistive technology and offering new hope to people living with paralysis and other disabilities.
Copyright © 2025 Cami Rosso All rights reserved.
