Brain-Computer Interface Enables Quadriplegic Man to Feed Himself
The device included robotic arms and implanted microelectrode arrays.
Posted July 1, 2022 | Reviewed by Tyler Woods
A new study published in Frontiers in Neurorobotics demonstrates how a brain-computer interface enabled a quadriplegic man to feed himself for the first time in three decades by operating two robotic arms with his thoughts. Brain-computer interfaces (BCIs), also known as brain-machine interfaces (BMIs), are neurotechnologies, often powered by artificial intelligence (AI), that enable those with speech or motor challenges to live more independently.
“This demonstration of bimanual robotic system control via a BMI in collaboration with intelligent robot behavior has major implications for restoring complex movement behaviors for those living with sensorimotor deficits,” wrote the authors of the study. This study was led by principal investigator Pablo A. Celnik, M.D., of Johns Hopkins Medicine, as part of a clinical trial with an approved Food and Drug Administration Investigational Device Exemption.
A 49-year-old man with partial quadriplegia, who had lived with a spinal cord injury for roughly 30 years before the study, was implanted with six Blackrock Neurotech NeuroPort electrode arrays in the motor and somatosensory cortices of both hemispheres to record his neural activity. In the left hemisphere, four arrays were implanted: two 96-channel arrays in the primary motor cortex and two 32-channel arrays in the somatosensory cortex. In the right hemisphere, one 96-channel array was implanted in the primary motor cortex and one 32-channel array was placed in the somatosensory cortex.
The participant was asked to perform tasks while the implanted microelectrode arrays recorded his brain activity via a wired connection to three 128-channel NeuroPort Neural Signal Processors. He was seated at a table between two robotic arms, with a pastry on a plate set in front of him. His task was to use his thoughts to guide the robotic limbs, fitted with a fork and knife, to cut a piece of the pastry and bring it to his mouth.
The aim was to have the robotic arms carry out most of the task while the participant took control at key points. The researchers hypothesized that this shared control of the robotic limbs, for a task requiring both fine manipulation and bimanual coordination, would enable greater dexterity. The robot was given the approximate locations of the plate, the food, and the participant's mouth beforehand.
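To give a flavor of what "shared control" means in practice, here is a minimal, purely illustrative sketch. The function name, the velocity representation, and the blending weight are all hypothetical and are not drawn from the study's actual software; the idea is simply that the decoded user command and the robot's autonomous plan are mixed into a single motion command.

```python
# Illustrative sketch only -- not the study's implementation.
# "Shared control" here means blending a decoded user intent signal
# with the robot's autonomous command; names and the weight are hypothetical.

def shared_control(user_velocity, auto_velocity, alpha=0.5):
    """Blend a user-commanded velocity with an autonomous one.

    alpha = 1.0 gives the user full control; alpha = 0.0 is fully autonomous.
    Velocities are (x, y, z) tuples in the robot's workspace frame.
    """
    return tuple(alpha * u + (1.0 - alpha) * a
                 for u, a in zip(user_velocity, auto_velocity))

# Example: the user nudges the arm left while the planner heads toward the food.
blended = shared_control((-1.0, 0.0, 0.0), (0.0, 1.0, 0.0), alpha=0.5)
print(blended)  # (-0.5, 0.5, 0.0)
```

In this toy version the robot supplies the coarse trajectory and the user's decoded signal corrects it, which mirrors the division of labor the researchers describe: the machine handles gross positioning, and the human contributes the fine adjustments.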
“Using neurally-driven shared control, the participant successfully and simultaneously controlled movements of both robotic limbs to cut and eat food in a complex bimanual self-feeding task,” reported the researchers.
Copyright © 2022 Cami Rosso. All rights reserved.