Sometimes a research paper comes along that is so important—and so far over my head—that I know I have only one way to translate it for this column. That's when I contact the researcher and ask, "Please, could you explain this the way you would to guests at your dinner table?" I came across such a research report this week, and so I turned to neuroscientist Trent Kriete of the University of Colorado and asked for the "tell it to your friends" version. Here's what he told me about a computational framework he's developed that just might explain a lot about how the human brain works:
Every day we encounter situations where we must understand parts of the world around us in ways that we have not used before. Maybe a coworker switches roles at work, or perhaps we hear a word used in a new way. Typically, we handle these situations with such ease that we don't appreciate what an impressive feat we are performing. Yet for decades, biologists have debated how our brains could possibly accomplish such a difficult task.
It may seem easy—at least in principle—to engineer a system that can apply general rules to novel situations. For instance, most of us have used a template when writing a letter that goes to many people, such as thank-you cards after a wedding or graduation. This process is made easier by keeping everything the same except for certain "slots" into which we can quickly insert the content that needs to change, such as the person's name or address. In computer software, one part of the program stores the variables while another performs the computation that writes the letter and inserts the correct information in its rightful place. Thus, the question of how a computer system could handle a never-before-seen scenario becomes rather easy to answer.
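The separation described above—variables stored in one place, the computation written once with slots for them—can be made concrete with a short sketch. This is only an illustration of the engineering idea, not anything from the paper itself; the template text and names are invented for the example.

```python
# A minimal sketch of the "template with slots" idea: the computation
# (the letter) is written once, while the variable content lives
# separately and is inserted into the marked slots at run time.

TEMPLATE = "Dear {name},\n\nThank you so much for the {gift}!\n\nWarmly,\nThe Graduate"

def write_letter(name, gift):
    """Fill the template's slots with the values for one recipient."""
    return TEMPLATE.format(name=name, gift=gift)

# The same computation handles recipients it has never "seen" before,
# because only the slot contents change.
letters = [write_letter("Ada", "toaster"), write_letter("Alan", "teapot")]
```

Because the machinery that fills the slots never inspects the values themselves, it generalizes automatically to any new name or gift—which is exactly the property the brain must somehow achieve.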
The problem is, biology does not always solve things the way an engineer would. The brain has billions of neurons connected in complicated ways, and those neurons must work in concert to solve all the problems we encounter in our everyday lives. The central question then becomes: how does the human brain allow us to generalize to situations we have never seen before? To answer that question, researchers have offered two very different ideas. The first says that the brain must work much like a computer, storing and manipulating variables to solve a problem. The other argues that we have to take biology seriously and that our brains are, at their core, learning systems. Researchers who support the latter view argue that the computer-variable mechanism is unlikely because it is hard to imagine how all the relevant variables could be learned, given what we know about how the brain works.
This week in the Proceedings of the National Academy of Sciences, a team of researchers from the University of Colorado-Boulder, the University of California-Merced, and Princeton University proposes a middle-ground approach derived from a computational framework built to simulate actual brain mechanisms. Their model demonstrates how the brain could implement something like the variables used by a computer system, but do so in a way that is consistent with what we know about the biology of the brain. Specifically, the study theorizes that structures known to exist in the brain could implement a variable system in a biologically realistic way. When the researchers factored in structure and connectivity between two areas of the brain (the prefrontal cortex and the basal ganglia), their simulated "brain" could generalize to situations it had never seen before.
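The key mechanism in the model is indirection: one brain area holds not the content itself but a pointer to where the content is stored, and a second area gates which location gets read out. The toy code below is my own loose illustration of that idea under simplifying assumptions—the dictionary "slots" and role names are hypothetical stand-ins, not the authors' neural network, which learns these mappings rather than having them hand-wired.

```python
# A toy illustration of indirection (not the authors' actual model):
# one store holds contents, another holds a *pointer* naming which
# content slot is currently relevant, and a gating step follows the
# pointer to retrieve the content.

content_store = {"slot_A": "coworker's new role", "slot_B": "word's new meaning"}
pointer_store = {"current_topic": "slot_A"}  # a pointer, not the content itself

def read_via_pointer(role):
    """Follow the pointer stored under `role`, then read the slot it names."""
    slot = pointer_store[role]       # step 1: retrieve the pointer
    return content_store[slot]       # step 2: gate open that slot and read it

# The retrieval machinery never depends on *what* is in the slot,
# so novel contents can be handled without retraining the machinery.
result = read_via_pointer("current_topic")
```

In the paper's account, the pointer-holding and content-holding roles are played by different parts of the prefrontal cortex, with the basal ganglia doing the gating; crucially, those mappings must be learned, which is why the biological version is powerful but not perfect.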
The researchers—Trenton Kriete, Randall O'Reilly, David Noelle, and Jonathan Cohen—believe their work builds a bridge between two sides of a debate that has persisted for decades. On the one hand, their research shows that the brain has structure that allows it to encode variables and generalize to things it has never experienced, suggesting that those who said the brain had to implement a computational variable system were at least partly right. On the other hand, by building a biologically realistic model, the results ground that theory in what we know about how the brain is structured and how it learns. Critically, this system has limits: because it must learn the mappings between these brain structures, it will not generalize as flawlessly as a true computer system would.
For More Information
Trenton Kriete, David C. Noelle, Jonathan D. Cohen, and Randall C. O'Reilly. "Indirection and symbol-like processing in the prefrontal cortex and basal ganglia." PNAS, published online before print September 23, 2013.
Many thanks to Trent Kriete for serving as guest author for this report.