New Synthetic AI Data May Improve Brain-Computer Interfaces
AI used to “imagine” brain activity to improve brain-computer interfaces (BCIs).
Posted November 29, 2021 | Reviewed by Kaja Perina
Artificial intelligence (AI) machine learning is used in brain-computer interfaces (BCIs) to help identify patterns and decode brain imaging data. A new study published in Nature Biomedical Engineering by researchers at the University of Southern California (USC) applies deepfake-style AI technology to improve the performance of brain-computer interfaces for people with speech impairments or mobility issues.
“It is the first time we've seen AI generate the recipe for thought or movement via the creation of synthetic spike trains,” said lead author Shixian Wen in a USC report. “This research is a critical step towards making BCIs more suitable for real-world use.”
Brain-computer interfaces, also known as brain-machine interfaces (BMIs), are assistive technologies that read and decode electrical activity in the brain in order to control external devices such as wheelchairs, speech synthesizers, prosthetic limbs, smartphones, computer cursors, and keyboards. The BCI market is expected to grow at a compound annual growth rate of 15.5 percent during 2020–2027, reaching USD 3.7 billion in revenue by 2027.
Finding ample training data for brain-computer interfaces, in which brain signals are mapped to specific actions, can be an extremely time-consuming challenge, when it is possible at all. AI algorithms require massive amounts of training data in order to learn to identify patterns and features. To solve this data challenge, the USC researchers developed a generative adversarial network (GAN), a type of AI neural network architecture used in deep learning. In GANs, two artificial neural networks (ANNs) train one another by competing: a generative neural network creates synthetic data samples, and a discriminative neural network tries to determine whether the data samples are generated or drawn from actual data.
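To make the adversarial setup concrete, here is a minimal, hypothetical sketch in Python. It is not the USC model (which works on spike trains): a one-parameter "generator" learns to shift random noise toward a target distribution while a logistic "discriminator" tries to tell real samples from generated ones. All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative toy problem (not spike trains): "real" samples come from
# N(3, 1); the generator learns a single shift parameter theta so that
# noise z + theta mimics the real distribution.
real_mean = 3.0
theta = 0.0            # generator parameter
w, b = 1.0, 0.0        # discriminator parameters (logistic on 1-D input)
lr = 0.05

for step in range(800):
    z = rng.normal(size=64)                    # generator input noise
    real = rng.normal(real_mean, 1.0, size=64)
    fake = z + theta                           # synthetic samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: push D(fake) toward 1 (gradient ascent on log D(fake)).
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

print(f"learned shift: {theta:.2f}")  # drifts toward real_mean = 3
```

The competition is visible in the two update rules: the discriminator's update rewards separating real from fake, while the generator's update moves its output in whatever direction currently fools the discriminator.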
The researchers trained a GAN on data from a recording session with a monkey performing a reaching task, in order to learn the mapping from movement to spike trains, the binary waveform representation of brain activity. The brain and behavior data were collected from the monkey using a Cerebus system by Blackrock Neurotech (formerly Blackrock Microsystems). The GAN synthesizer then generated synthetic neural data that was combined with new real data to train a brain-computer interface. According to a USC report, this approach made training up to 20 times faster.
According to the researchers, their model can be adapted to synthesize new spike trains, which can speed up the training of brain-computer interface decoders. Because the model is completely data-driven, this approach can be used for a wide range of brain-computer interface decoders and is not limited to motor control BCIs.
“For brain–computer interfaces (BCIs), obtaining sufficient training data for algorithms that map neural signals onto actions can be difficult, expensive or even impossible,” reported the USC researchers. “Here we report the development and use of a generative model—a model that synthesizes a virtually unlimited number of new data distributions from a learned data distribution—that learns mappings between hand kinematics and the associated neural spike trains.”
Copyright © 2021 Cami Rosso All rights reserved.