What do BlackBerrys, iPads, and brains have in common? All function well because of powerful integration of hardware and software, combining syntax, semantics, and pragmatics.
The predominant, commonsense view of minds still takes them to be spiritual entities, capable of immortality, communication with God, and free will. In contrast, science attempts to understand minds as working mechanistically, but the kinds of mechanisms proposed have changed dramatically over the centuries. Here is a rough chronology of the mechanistic analogies that have dominated scientific theories about minds and brains:
• 1600s: clocks with gears
• 1700s: vibrating strings
• 1900s (first half): telephone switchboards
• 1900s (second half): digital computers
Let me propose the following addition to this list:
• 2000s: smartphones
This proposal derives from considering the virtues of my iPad, whose many uses I described in a previous post. The iPad's hardware includes a large, touch-sensitive screen, a fast processor, long battery life, wireless communication, a motion sensor, a microphone, and a speaker. Its software includes a speedy operating system and more than 300,000 available applications, many of them free. Most importantly, the iPad has a wonderful degree of integration of hardware and software, with a host of programs making elegant use of all the physical capabilities of the device. Smartphones such as the iPhone, BlackBerry, and Android models add features such as cameras, greater portability, and of course telephony, but suffer in comparison from much smaller screens.
Brains are even better than smartphones and iPads in having still closer integration of hardware and software. Intelligent operations require a combination of:
• syntax - symbols with structure, as in grammars for languages;
• semantics - meaning attached to the symbols; and
• pragmatics - purposeful uses of symbols.
Current digital computers are fast and effective at working with syntax, as we see in mathematical operations and the millions of lines of code that govern many software applications. But brains still surpass computers in most aspects of intelligence because they do not need to have semantics and pragmatics provided by an external programmer. People have external sensory systems for vision, touch, hearing, smell, and taste, as well as internal systems for sensing what is going on inside the body. Our bodies also give us the ability to act, so that sensory systems and motor control generate feedback loops with the world that provide many of our mental symbols with semantics. Moreover, bodies contribute to pragmatics by generating basic goals for food, water, shelter, and relatedness to other people; these goals are signaled by emotions that combine appraisal of the relevance of current situations with perception of bodily states. Digital computers have largely been syntactic engines, but human brains combine syntax, semantics, and pragmatics simultaneously in one effective package.
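As a playful caricature (my own toy sketch, not anything from neuroscience or real AI), the three levels can be separated in a few lines of code: a structured symbol for syntax, symbols grounded in simulated sensor readings for semantics, and a bodily goal driving the symbol's use for pragmatics. All the names and values here are invented for illustration.

```python
# Toy illustration of the three levels. Everything here is hypothetical.

# Syntax: a structured symbol, like a tiny parse of "drink coffee".
utterance = ("VP", ("V", "drink"), ("NP", "coffee"))

# Semantics: meaning attached to symbols via simulated sensor readings,
# standing in for the body's feedback loops with the world.
sensed_world = {"coffee": {"color": "brown", "temp_c": 70, "smell": "roasted"}}

def meaning(symbol):
    """Ground a symbol in sensed properties, if any."""
    return sensed_world.get(symbol)

# Pragmatics: purposeful use of symbols, driven by a bodily goal.
goals = {"thirst": 0.8}

def act(utterance):
    """Use the symbol structure in service of a goal."""
    verb = utterance[1][1]
    noun = utterance[2][1]
    if verb == "drink" and goals["thirst"] > 0.5 and meaning(noun):
        return f"drink the {noun}"
    return "do nothing"

print(act(utterance))  # → drink the coffee
```

The point of the caricature is the division of labor: a conventional computer handles the first block effortlessly, while the second and third blocks had to be hand-supplied by a programmer, which is exactly what brains do not need.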
Smartphones, along with developments in robotics such as autonomous vehicles, are a step in the same direction, by virtue of the ways that they integrate hardware and software. Smartphones provide much more interaction with the world than most 20th century computers thanks to (1) input components such as cameras and microphones, (2) output components such as vivid screens, speakers, and earphone jacks, and (3) two-way wireless communication. Of course, smartphones are still far inferior to brains in many respects, including the capacity to learn from experience and to be emotional and conscious.
What would it take to build a smartphone, iPad, or other computer capable of consciousness? Because science still lacks detailed understanding of how brains become conscious, it is difficult to give a precise list, but I think that enough is known to suggest the following desirable additions to make a smartphone conscious:
1. Improved sensory systems for vision and other modalities.
2. Motor outputs providing controlled interactions with the external world.
3. Mechanisms for integrating different sensory modalities, binding them together into unified wholes. For example, coffee needs to be simultaneously perceived as brown, hot, fragrant, and flavorful.
4. Learning mechanisms to produce new representations and new connections among representations.
5. Mechanisms for still higher-level representations of representations, allowing for self-consciousness.
6. Increased inferential capacity allowing for processing of language-like representations.
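The binding requirement in item 3 can also be sketched in toy form. In this hypothetical fragment (again my own invention, not a model anyone has built), features arriving from separate simulated modalities are merged into one unified representation of a single object:

```python
# Hypothetical sketch of multimodal binding: features from different
# simulated sensory channels tied to one object representation.
from dataclasses import dataclass

@dataclass
class Percept:
    modality: str
    feature: str

def bind(percepts):
    """Merge percepts of the same object into one unified whole."""
    return {p.modality: p.feature for p in percepts}

coffee = bind([
    Percept("vision", "brown"),
    Percept("touch", "hot"),
    Percept("smell", "fragrant"),
    Percept("taste", "flavorful"),
])
print(coffee)  # {'vision': 'brown', 'touch': 'hot', 'smell': 'fragrant', 'taste': 'flavorful'}
```

A dictionary merge is trivially easy; the hard, unsolved part is what the list above gestures at: getting such bindings to arise from the device's own sensors and learning, rather than from a programmer typing them in.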
This is a daunting list, but an imaginable one given the current state of neuroscience and engineering. Instead of trying to understand minds and brains in terms of smartphones, this list actually turns the analogy around by proposing to make computers smarter by making them more like brains.
I don't expect there to be a "consciousness app" on smartphones or iPads in my lifetime, but admit the possibility that advances in hardware AND software will eventually lead to its production. Until then, we can rest with the conclusion that I reached in my book The Brain and the Meaning of Life: minds are brains.
To come: Free will is an illusion.