
Brain Fever Rising

Are the latest investments in the brain sciences all they are cracked up to be?

Guest post by Michael Silberstein, Ph.D.

Professor of Philosophy, Elizabethtown College

Brainmania

It would appear that the global century of the brain has officially begun in earnest. As foreshadowed in his State of the Union address in February, President Obama formally unveiled the BRAIN Initiative (short for Brain Research through Advancing Innovative Neurotechnologies) in April of this year. One of the “Grand Challenges of the 21st century,” the new brain campaign is expected to lead to multiple technologies, bolster the economy, and create new cures for many diseases and degenerative brain disorders.

A few years prior to this, various federal and private entities kicked off the U.S. National Institutes of Health’s Human Connectome Project (HCP), the goal of which is to map the entire brain’s neuronal circuitry, wiring diagram, or structural networks. This so-called connectome is the brain’s network of interconnected nerves and fiber bundles. The general working assumption is that these networks relay signals, which encode various kinds of information, between specialized and localized regions dedicated to cognitive functions such as memory and decision making, and somehow bind them all together into a functional whole. This leads naturally to the notion that many brain disorders are “disorders of connectivity” (failures in the brain’s communication network) and thus that the HCP is essential for advancing medicine and psychiatry.
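
On this picture, a connectome is naturally represented as a graph: nodes for brain regions, edges for the fiber bundles that connect them. Here is a minimal sketch in Python using the networkx library; the regions and links are invented for illustration, not drawn from HCP data:

```python
import networkx as nx

# A toy connectome: nodes are brain regions, edges are fiber bundles.
# The regions and links here are invented for illustration.
g = nx.Graph()
g.add_edges_from([
    ("hippocampus", "prefrontal"),  # memory <-> decision making
    ("prefrontal", "parietal"),
    ("hippocampus", "parietal"),
    ("parietal", "visual"),
])

# Signals between regions travel along paths in the graph:
print(nx.shortest_path(g, "hippocampus", "visual"))
# ['hippocampus', 'parietal', 'visual']

# On this view, a "disorder of connectivity" is a damaged link:
g.remove_edge("parietal", "visual")
print(nx.has_path(g, "hippocampus", "visual"))  # False: region cut off
```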

The HCP is so named in explicit analogy with the Human Genome Project (HGP). If you remember back when that project was unveiled, you will also recall the overblown claims that mapping our genome would finally get to the root of human nature and individual personality. That didn’t turn out to be the case, for reasons discussed below, but that hasn’t stopped some of the leading HCP researchers, such as MIT’s Sebastian Seung, from making similar claims about the HCP. Seung’s widely viewed TED talk is titled “I am my connectome,” and his recent book on the subject is Connectome: How the Brain’s Wiring Makes Us Who We Are.

In contrast with, but also complementing, the HCP, the primary goal of the BRAIN Initiative is to create new tools and technology that allow us to “record every spike from every neuron” in the brain of some living and active organism, and eventually, of course, of humans. Once that first goal is achieved, the hope is to create methods for manipulating cells in the “neural circuit” under investigation, to see what effect that has on the rest of the circuit or on other circuits.

That is, if we think of the connectome as the pattern of transistors and switches on a computer chip or motherboard, the BRAIN Initiative is attempting to observe all the electrons traveling along that pattern in the brain. Putting it all together, the idea is to map the dynamics of individual neurons and the circuits that connect them, thereby learning how individual cells and complex neural circuits interact in both time and space.
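
To give a feel for what “mapping the dynamics” over a fixed wiring diagram might mean, here is a toy sketch in Python: a small random network of leaky integrate-and-fire units, with every spike from every simulated neuron logged. All of the numbers (network size, weights, thresholds) are invented for illustration; this is not a BRAIN Initiative method, just the simplest possible cartoon of one:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A fixed toy "connectome": a sparse random wiring diagram over 20
# neurons; weights[i, j] is the connection strength from j onto i.
n = 20
weights = rng.uniform(0.2, 0.6, size=(n, n)) * (rng.random((n, n)) < 0.2)
np.fill_diagonal(weights, 0.0)

# Leaky integrate-and-fire dynamics: each unit accumulates input,
# leaks charge over time, and fires once it crosses a threshold.
v = np.zeros(n)                 # membrane potentials
threshold, leak = 1.0, 0.9
spike_log = []                  # (time step, neuron id) for every spike

for t in range(200):
    firing = v >= threshold
    spike_log.extend((t, i) for i in np.flatnonzero(firing))
    v[firing] = 0.0                          # reset units that fired
    external = 0.4 * (rng.random(n) < 0.3)   # sparse random drive
    v = leak * v + weights @ firing + external

print(f"recorded {len(spike_log)} spikes from {n} neurons")
```

The connectome (the weights matrix) stays fixed throughout; what the BRAIN Initiative is after is the full record of activity flowing over it, the spike_log, for every neuron at once.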

Critical Thinking About the Brain

Obviously, the preceding research programs are in the trendy genre of big-data-driven science. The working assumption here is that a lack of such data is the major stumbling block to getting the brain to give up its secrets. But is this really so? There is rarely such a thing as raw data in science. What we count as data, and how we perceive it, is typically a function of some set of theoretical preconceptions.

Thus, we would do well to remind ourselves that, as neuroscientist Miguel Nicolelis puts it in his 2011 book Beyond Boundaries, the entire history of neuroscience has been a battle between “localizationists” and “distributionists.” The former hold that “brain functions are generated by highly specialized and spatially segregated areas of the nervous system,” while the latter claim that, “rather than relying solely on unique specialization, the human brain calls on populations of multitasking neurons, distributed across multiple locations, to achieve every one of its goals.” Localizationists tend to believe that the deepest explanations in neuroscience (pun intended) will always come from smaller and smaller spatial and temporal scales (the neuronal, the molecular, and so on), whereas distributionists tend to believe that deep explanation can come from higher-level features of the brain, such as topological properties that transcend the details of the underlying structure.

So which perspective is correct? The trouble for the HCP and the BRAIN Initiative is that no amount of mapping alone will really address this question, and the question is absolutely crucial, because it is in light of the answer that we will interpret the data. Yet one thing apparently missing from this funding is support for theoretical exploration of precisely this question.

Which brings us back to the analogy between the HCP and the HGP. The HGP grew out of an era in which genetic determinism was a common belief among biologists. The HGP was motivated in large part by its own version of localizationist and modularist thinking, for example, in the form of the gene doctrine that there is one and only one gene responsible for each protein. Upon sequencing the human genome, however, many scientists were shocked to discover that humans have only about 30,000 genes, as opposed to the predicted 100,000. They were also surprised to find that only 300 unique genes distinguish humans from mice, and surprised again to learn that the very same genes (or, more accurately, gene networks) can give rise to many different proteins.

In light of all this, the famed Harvard biologist Stephen Jay Gould wrote in The New York Times: “The collapse of the doctrine of one gene for one protein, and one direction of causal flow from basic codes to elaborate totality, marks the failure of [genetic] reductionism for the complex system we call cell biology.” In the decade since this statement was made, more and more evidence has accumulated that Gould was right. Hence the rise of epigenomics, and the growing chorus claiming that genetic determinism is dead.

It should come as no surprise, then, that more than a decade after its completion, the HGP has yet to produce the advertised benefits in the form of cures for diseases, reductions in genetic birth defects, and so on. This is because the overwhelming majority of human diseases have multiple intertwined causal factors: many genes in interaction with one another, multiple signals within the cellular environment (such as hormones, electrical signals from other cells, and nutrient supply), and factors in the bodily and external environment. Furthermore, all of these causal factors affect one another in complex and nonlinear ways.

If the same principles of function-structure mapping reassert themselves at the neural level, and there is already every reason to think they will, then the neuron doctrine is likely to go the way of the gene doctrine, even if most neuroscientists have not yet accepted that fact. Consider the work of Olaf Sporns (one of the fathers of connectomics), which focuses on dynamical and network features of the brain and appears to demonstrate that very different neurochemical mechanisms and wiring diagrams can instantiate the same functional networks and thus perform the same cognitive functions. In these models it is primarily large-scale topological features, such as various types of “small-world” network structure, that explain cognitive function, as opposed to lower-level, local features. Furthermore, cognitive science already has the functional equivalent of epigenomics in the form of embodied, embedded, and extended accounts of action and cognition.
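
The flavor of that claim can be captured in a few lines of Python. The sketch below is a toy illustration using the networkx library, not Sporns’ actual models: two “small-world” networks whose detailed wiring differs (different random rewiring), built from bare integer nodes, so nothing about what the nodes are made of enters into the topological measures:

```python
import networkx as nx

# Two Watts-Strogatz small-world graphs with differently rewired
# connections: the detailed wiring differs, yet both show the
# small-world signature of high clustering plus short path lengths.
for seed in (1, 2):
    g = nx.connected_watts_strogatz_graph(n=200, k=8, p=0.1, seed=seed)
    clustering = nx.average_clustering(g)          # high: local cliquishness
    path_len = nx.average_shortest_path_length(g)  # short: global shortcuts
    print(f"seed {seed}: clustering = {clustering:.3f}, "
          f"mean path length = {path_len:.2f}")
```

If explanations of cognitive function really do live at the level of such topological signatures, then two brains (or models) could differ considerably in their local wiring and neurochemistry while doing the same cognitive work.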

Conclusion

Given the zero-sum nature of scientific funding and the ever-shrinking dollars going into pure research, should we be funding localizationist-oriented work disproportionately over distributionist projects? There is now more than one book accusing modern neuroscience of being little more than neo-phrenology, in part because of its localizationist presumptions. This localizationist bias is on display in the following passage, in which National Academy of Sciences members Swanson and Bota justify the HCP:

Human Genome Project success implied that a comprehensive account of structural connections forming brain circuitry, a connectome of the organ of thought and feeling, would be the next great biological milestone. This vision assumes that brain circuitry structural architecture provides a necessary foundational model to understand functional localization at molecular, cellular, systems, and behavioral organization levels. A Human Connectome Project goal might be framed as providing the detailed structural data needed to create a foundational nervous system structural model analogous to the DNA double-helix structural model. (“Foundational model of structural connectivity in the nervous system with a schema for wiring diagrams, connectome, and basic plan architecture,” PNAS, 2010, p. 1)

As Sporns and Nicolelis illustrate, not all scientists involved with the HCP and other major brain-related projects share this bias. Nonetheless, leading neuroscientists often sell the public a picture of far more unity, certainty, and consensus around localizationist presuppositions than there actually is, or ought to be, in neuroscience. If we are not very careful, the big new brain-research monoliths are in danger of foundering on the same rocks that sank the highest hopes of the HGP.
