According to Moore's Law, the number of transistors on a chip doubles approximately every two years. Now, we have Stevenson & Kording's Law: the number of neurons that can be tracked in the brain doubles every 7.4 years.
This law was proposed yesterday by Ian Stevenson and Konrad Kording in an essay titled "How advances in neural recording affect data analysis" (Nature Neuroscience, January 26, 2011).
Their "law" is intriguing because it suggests the number of neurons that can be tracked individually will increase geometrically over time. Neuroscientists would love that.
That's because neuroscience is currently hobbled by how few neurons it can monitor simultaneously in a brain. Out of a human brain's 100 billion neurons, researchers can presently monitor only about 200 at a time. This is sort of like trying to predict a presidential election by polling three people. Tracking more neurons would mean knowing more about what is going on in a brain.
Actually, neuroscientists can already do impressive things by watching just 100 or 200 neurons. They've enabled paralyzed people to manipulate computer cursors. They've also enabled monkeys to control robot arms.
Not to take anything away from such achievements, but they're more limited than they look. The software can only recognize a pattern of neural firing that it's seen before. If the user tries to do something the software wasn't trained to recognize, nothing will happen. And the user is asked, or forced, to do only what the machine can recognize.
That's partly because so few neurons can be observed. The pattern one hopes to see is often so weak that it's not clear it's actually there. Monitoring more neurons would give more confidence that a given pattern of firing really is happening.
To derive the law, Stevenson and Kording looked at papers published between 1960 and 2010 to see how many neurons were reported being monitored.
From eyeballing their chart in the article, one can see that in 1960 and 1970 it was possible to monitor one or two neurons at a time. In 1980, roughly 2 to 15. In 1990, roughly 7 to 15. In 2000, roughly 15 to 100. In 2010, roughly 100 to 200.
Stevenson and Kording write that if their proposed law holds up, it should be possible to monitor 1,000 neurons by 2025.
To be sure, it could be premature to infer a law from such a relatively small absolute increase: from one neuron to 200 over fifty years.
On the other hand, Gordon Moore formulated his famous law using just six years of data collected between 1959 and 1965, with numbers of transistors that would be trivial by today's standards. Yet Moore's Law has held up, reliably and robustly, for decades.
Stevenson & Kording calculate that if their law similarly holds up, it will be possible to monitor every single neuron in a human brain in...wait for it...220 years.
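Their extrapolation is easy to check for yourself. Here's a minimal back-of-the-envelope sketch in Python. The 7.4-year doubling time is theirs; the starting value of about 200 neurons in 2010 is my reading of their chart, so treat the exact numbers as illustrative:

```python
# Rough check of the Stevenson & Kording extrapolation:
# neurons recorded doubles every 7.4 years, from ~200 in 2010.
import math

DOUBLING_TIME = 7.4   # years per doubling (their estimate)
START_YEAR = 2010
START_NEURONS = 200   # approximate state of the art in 2010

def neurons_in(year):
    """Neurons simultaneously recordable in a given year, under the law."""
    return START_NEURONS * 2 ** ((year - START_YEAR) / DOUBLING_TIME)

def year_to_reach(target):
    """Year when the extrapolation first reaches `target` neurons."""
    doublings = math.log2(target / START_NEURONS)
    return START_YEAR + doublings * DOUBLING_TIME

print(round(neurons_in(2025)))       # ~800, close to their "1,000 by 2025"
print(round(year_to_reach(100e9) - START_YEAR))  # ~214 years for 100 billion
```

The small gaps from the figures quoted in the article come from the fuzziness of the 2010 starting point, not the math: the whole-brain estimate lands in the same ballpark as their "220 years" however you round.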
They acknowledge that their proposed law couldn't be sustained just by scaling up existing technology. They write, "This prediction, extrapolated from the past 50 years of growth, seems absurd given today's technology."
Given today's technology, it is absurd. Today, the brain regards electrodes as invaders. It mounts an immune response, coating them in a sheath of proteins that progressively degrades their performance. And a brainpan already stuffed with 100 billion neurons only has so much extra room to spare for listening devices.
Obviously, totally new technologies would have to be developed. In Wired's November 2009 issue I wrote about optogenetics, a technology that may allow larger numbers of neurons to be tracked and influenced.
But ultimately a large-scale readout technology would have to work on a cellular, even molecular, scale. Stevenson and Kording allow themselves a moment of fancy, writing, "One might imagine a system in which each neuron records spike times onto RNA molecules that could then be read-out by sequencing the results, one neuron at a time."
I asked Dr. Stevenson what that meant via email. He replied, "In theory, a calcium sensitive protein could allow different amino acids to be written onto RNA depending on whether a neuron spikes or not. After an experiment, an animal's neurons (mouse or c. elegans, for instance) could be removed individually and the molecular record could be read-out by sequencing the RNA contained in each neuron."
In plain English, this means attaching an RNA "spy" molecule to each neuron, then taking the brain apart neuron by neuron in order to read each one off. Uh, I'll pass on that.
And it only lets you capture one instant in time. "This obviously wouldn't work for brain-machine interface applications where real-time signals are needed," Stevenson added.
But it does raise the idea that future brain-machine interfaces may work on a molecular basis rather than using big clumsy electrodes. And maybe some exotic molecular or optical transport mechanism could send the data to the outside world.
Stevenson and Kording explain that understanding large datasets of neural activity is a huge challenge. As the number of neurons tracked increases, the number of potential "meanings" of their behavior goes up exponentially. Even Moore's Law can't handle combinatorial explosions. Ways will have to be found to keep the data computationally manageable, they say.
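To see why the explosion is combinatorial rather than merely linear, consider a toy model (mine, not theirs): treat each neuron as simply "firing" or "silent" in a single time bin. The number of distinct population patterns is then 2 to the power of the number of neurons:

```python
# Toy illustration of the combinatorial explosion: even with each neuron
# reduced to a binary on/off state in one time bin, the number of
# possible population patterns is 2**N.
for n in (2, 10, 100, 1000):
    print(f"{n} neurons -> {2**n:.3e} possible on/off patterns")
```

At 1,000 neurons, the count of possible patterns already dwarfs the number of atoms in the observable universe. Real spike trains unfold over many time bins, so the true space is vastly larger still.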
But the challenge is deeper than just reducing large amounts of data. As I wrote in a recent blog entry, statistical approaches to decoding neural behavior are fundamentally backward-looking. They can only recognize patterns of activity they've seen before.
But brains are novelty-seeking, novelty-creating devices. They are constantly finding and creating the new. I venture to guess that no one, no one, has a clue how to write a computational algorithm that can understand a brain's new thought or new action.
So even if Stevenson & Kording's Law holds, there may be some spark, some quality of consciousness, that can never be apprehended solely by watching ever-larger numbers of neurons.
By itself, Stevenson & Kording's Law would not enable robust brain-machine interfaces to come into existence. It is necessary but not sufficient. Something else, something currently totally unknown, will have to be invented.
Read more about cutting-edge neuroscience in Michael Chorost's book WORLD WIDE MIND: THE COMING INTEGRATION OF HUMANITY, MACHINES, AND THE INTERNET. It's coming out on February 15, 2011.