
Why Luke Skywalker's arm has to talk to the brain.

It's not enough just to listen to neural commands.

When I was a kid, I wondered how the mind controlled the body. If the mind was an airy, ethereal spirit, how did it get muscles to contract and bones to move?

I spent a good deal of time moving my arm very, very slowly, trying to intuit how the whispers and urgencies of my conscious mind hooked themselves up to the meat flesh of my body.

How, I wondered, did the ghost control the machine?

Today I see it differently. Like many scientists, I now see mind as a product of the electrochemical signals between neurons. From that perspective, mind is body. You could say that solves the problem by throwing the ghost out of the machine.

If mind is body, then it shouldn't be hard, in principle, to build an artificial arm. You just intercept the brain's neural traffic to the arm, decode it, and tell a robotic arm to make those movements. It looked pretty easy for Luke Skywalker. Right?

Wrong. It's going to be much harder than that.

To be sure, there's been a lot of progress lately in intercepting neural signals. Todd Kuiken's group at the Rehabilitation Institute of Chicago has pioneered "targeted muscle reinnervation," a technique in which the residual nerves from a stump are rerouted into chest muscles, where their signals can be read by electrodes placed on the skin.

Those signals have been used to make strapped-on robotic limbs carry out some simple movements. See, for example, this video.

And in monkeys and a few humans, 100-electrode arrays have been sunk into the brain's motor cortex to read approximately 100 neurons. A computer uses that signal to move cursors on a computer screen and do other interesting stuff.

But the crucial weak link is the software that interprets the neural firing. It's trained by correlating neural firings with bodily motions.

For example, let's say an amputee has one of those 100-electrode arrays in her brain's motor cortex for controlling a robotic arm. (No one's actually put together a setup like this yet, but it seems quite possible in principle.) When she tries to bend the elbow that doesn't exist anymore, four particular neurons fire. That's recorded in a database.

Then, in the future, whenever the software sees those four neurons fire, it'll initiate an elbow-bend in the robotic arm. (That's an oversimplification but it's the basic idea.)
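To make that concrete, here's a toy sketch in Python of such a lookup-style decoder. The neuron numbers and movement names are invented for illustration, and real systems use statistical classifiers rather than exact matching, but the basic logic is the same: patterns recorded during training map to movements, and anything else maps to nothing.

    # Toy pattern-matching decoder (hypothetical, heavily simplified).
    # During training, each attempted movement is recorded as the set of
    # electrode channels that fired; decoding is then just a lookup.
    pattern_to_movement = {
        frozenset({12, 31, 58, 77}): "bend_elbow",   # the "four neurons" above
        frozenset({4, 19, 62, 85}): "open_hand",
        frozenset({9, 44, 71, 90}): "rotate_wrist",
    }

    def decode(firing_neurons):
        """Return the trained movement for this pattern, or None if unseen."""
        return pattern_to_movement.get(frozenset(firing_neurons))

    print(decode({12, 31, 58, 77}))  # -> bend_elbow (seen in training)
    print(decode({12, 31, 58, 80}))  # -> None (one neuron off: no match)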

So far, so good. But now let's say the user wants to do something the software hasn't seen before, like pulling open a stuck drawer. That requires an elbow-bend, but it also requires many other motions. The pattern of neural activity in the brain is going to be very different. Unrecognizably different. Those particular four neurons may not fire at all.

What that means is, the software used to interpret neural activity is fundamentally incapable of understanding new commands. It can only understand commands it's seen before. It's inherently backward-looking.

You might think, "Well, let's just have an extended learning period where we catalog every activity the user performs and put it all in a giant database."

The problem with that is that even small variations in motor activity evoke different neural firings. Bending an elbow with a coat on may be different, neurologically speaking, than bending it in shirtsleeves. That's because the weight of the coat requires bringing different muscles into play, at different speeds, and at different intensities. The software won't recognize such a command at all.

In other words, any change in the real world will befuddle software based on pattern-matching. You might say that whereas databases are finite, real life is inexhaustible.
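You can see the problem in miniature in the sketch below. Make the catalog as large as you like, and a pattern that differs by a single neuron still comes back empty. (Again, the numbers are invented; the point is the exact-matching logic.)

    # Hypothetical demo: a huge catalog doesn't fix exact matching.
    catalog = {frozenset({i, i + 1, i + 2, i + 3}): f"movement_{i}"
               for i in range(10_000)}

    trained   = frozenset({500, 501, 502, 503})  # recorded in shirtsleeves
    with_coat = frozenset({500, 501, 502, 504})  # same intent, one neuron shifted

    print(catalog.get(trained))    # -> movement_500
    print(catalog.get(with_coat))  # -> None, despite 10,000 entries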

So how is an organic arm different? After all, it's just a fancy machine in principle, isn't it? But there are at least three major differences.

  • It probably doesn't store previous orders in a database and compare incoming commands against them.
  • Whereas neural prosthetics have access to at most 100 neurons, a real arm is connected to millions or billions of neurons.
  • A real arm sends signals back to the brain as well as receiving them.

That last might be the most crucial difference. Because the brain gets abundant data back from an organic arm, it is able to construct and update a neural model of what the arm is doing. You might say that the brain has a "virtual arm" in its circuitry that it can manipulate.

And since the neural connections between brain and arm are so rich, manipulating the virtual arm is effectively the same as manipulating the real one.
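Here's a minimal sketch, with invented numbers, of why that feedback loop is so powerful. The controller below was never "trained" on a coat; it simply senses where the elbow actually is, compares that to where it should be, and corrects. The disturbance gets absorbed by the loop instead of defeating a lookup table.

    # Closed-loop control sketch (hypothetical). The joint's response is
    # dampened, as if a heavy coat sleeve were absorbing part of the effort.
    class SimulatedElbow:
        def __init__(self):
            self.angle = 0.0               # degrees
        def read_sensor(self):
            return self.angle              # the signal sent back to the "brain"
        def apply_torque(self, torque):
            self.angle += 0.7 * torque     # the coat soaks up 30% of the effort

    def bend_to(elbow, target, steps=50):
        """Sense, compare with the goal, correct, repeat."""
        for _ in range(steps):
            error = target - elbow.read_sensor()
            if abs(error) < 0.5:             # close enough
                break
            elbow.apply_torque(0.5 * error)  # correct in proportion to the error
        return elbow.read_sensor()

    print(round(bend_to(SimulatedElbow(), target=90.0), 1))  # ~89.7, coat and all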

If that is the case, then software of the classical machine-learning kind will never be able to interpret orders it hasn't seen before. And a robotic arm that can't send feedback will be extremely limited, perhaps to the point of not being worth the bother.

Scientists have realized this. They're beginning to experiment with stimulating neurons in ways that evoke sensations. But that's going to be a very long and hard road. Luke Skywalker's prosthetic arm is a long way away.

Michael Chorost researched neural prostheses while writing his book WORLD WIDE MIND: THE COMING INTEGRATION OF HUMANITY, MACHINES, AND THE INTERNET. It'll be published in February 2011 and is now available for preorder on Amazon.

Photo credit: Rehabilitation Institute of Chicago.

About the Author
Michael Chorost, Ph.D.

Michael Chorost, Ph.D., is the author of World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet.
