The New Synthesis in Cognitive Science

Thinking results from neural processes that can function as symbols.

Posted Jun 04, 2013

My colleague Chris Eliasmith has just published an amazing book, How to Build a Brain (Oxford University Press). It provides a new way of thinking about how brains make minds and synthesizes the major approaches to cognitive science. To simplify, we can sketch this history as follows:

Thesis (1950s): Thinking results from the manipulation of physical symbols like those that operate in digital computers.

Antithesis (1980s): Thinking results from sub-symbolic brain processes through the interaction of large numbers of neurons.

Synthesis (2013): Thinking results from neural processes that can function as symbols.

Cognitive science got rolling in the 1950s with the insight that new ideas about computing could suggest how thinking works as a mechanical process. This idea was a major advance over previous analogies such as clockwork, vibrating strings, and telephone switchboards, and it generated many important psychological insights. But there remained many unsolved problems in the field of artificial intelligence, such as how purely computational symbols could have meaningful relations to the world.

In the 1980s, an alternative approach called connectionism arose with the claim that ideas about neural networks provide a better way of understanding how the mind works. Representations in neural networks do not look like symbols in natural language or computer programs because they are distributed across many simple neuron-like entities that interact with many others. Processing is highly parallel, requiring the simultaneous firing of many neurons, not serial like the step-by-step inferences that occur in linguistic arguments and most computer programs. Connectionism generated many insights about psychological processes such as concept application, but had difficulty explaining the high-level symbolic reasoning that is also part of intelligence.

Eliasmith’s new book provides the first plausible synthesis of symbolic and connectionist approaches to cognition. He proposes the new idea of semantic pointers, which are “neural representations that carry partial semantic content and are composable into the representational structures necessary to support complex cognition.” As in connectionism, semantic pointers are patterns of firing in large neural populations, but Eliasmith has figured out how to make them also work like symbols in high-level reasoning. His book describes the Semantic Pointer Architecture, SPA, which is a general account of how neural structures and processes can generate many kinds of psychological functions, from low-level perception and action generation to high-level inference such as what people do in intelligence tests. In my own work, I have found the semantic pointer idea to be wonderfully suggestive for developing new theories of intention, emotion, and consciousness.
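To get an intuition for how a distributed pattern can also behave like a composable symbol, here is a minimal sketch of circular-convolution binding, the vector operation from holographic reduced representations that the Semantic Pointer Architecture builds on. This is an illustrative toy, not Eliasmith's implementation: the vector dimension, the function names, and the use of plain random vectors are all my assumptions.

```python
import numpy as np

def bind(a, b):
    # Circular convolution, computed efficiently via FFT:
    # combines two vectors into one composite vector of the same size.
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def unbind(c, b):
    # Approximate inverse: convolve with the "involution" of b
    # (first element kept, the rest reversed) to recover the other factor.
    b_inv = np.concatenate(([b[0]], b[1:][::-1]))
    return bind(c, b_inv)

def cosine(x, y):
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

rng = np.random.default_rng(0)
d = 512  # illustrative dimension; real models vary
# Random unit vectors standing in for a role, a filler, and a distractor.
role, filler, other = (v / np.linalg.norm(v)
                       for v in rng.standard_normal((3, d)))

pair = bind(role, filler)       # one distributed pattern encoding the pairing
recovered = unbind(pair, role)  # noisy reconstruction of the filler

print(cosine(recovered, filler))  # high: the filler is recoverable
print(cosine(recovered, other))   # near zero: distractors are not
```

The key point the toy illustrates: the composite vector is the same size as its parts, so structures can be nested and manipulated like symbols, yet every step is an operation on a distributed numerical pattern of the kind neurons could implement.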

Eliasmith’s synthesis can also embrace newer trends in the cognitive sciences. First, much current work in artificial intelligence uses powerful statistical techniques rather than symbolic models. Like neural networks in general, SPA gets much of its power from being able to deal with statistical properties, not just symbolic regularities. Second, some philosophers have rejected both symbolic and connectionist views of cognition in favor of vague claims that thinking is embodied, embedded, and extended in the world, and tied to action by means of dynamic systems. SPA provides a detailed, neurologically plausible, and mathematically rigorous account of how the dynamics of embodiment, embedding, and action work.

If you want to learn more about semantic pointers, you can watch Chris’s new video. Or read his recent short article from Science. But if you want to get a deep understanding of how diverse mental processes operate in the brain, you really have to read his book.