On Christmas Eve, December 24, 2013, Queen Elizabeth II issued a royal pardon to British mathematician Alan Turing.
Turing is best known for his development of the Bombe, a machine that successfully deciphered coded Nazi messages during World War II. The Germans encoded their messages using Enigma machines, which Adolf Hitler's military believed made them impenetrable. It is not an exaggeration to say that Turing’s genius turned the course of the war, allowing the Allies to be victorious.
Lesser known, however, is Turing’s insight into the nature of thought and intelligence, an insight that ushered in the age of computers and the cognitive revolution in the field of psychology. Let me explain.
What Turing Taught Us About the Possibility of Artificial Intelligence
In Turing’s time, behaviorism was the dominant psychological framework. Its hallmark was an exclusive focus on observable behavior, which meant that a psychologist’s job was to explain how stimuli evoked responses from an organism (Pavlov), and how behavior is modified by its consequences (Thorndike and Skinner). Looking inside the “black box”—the mind inside the skull—was forbidden. Because internal states (like thoughts, plans, goals, desires, and feelings) were not directly observable, they were considered strictly off-limits to legitimate behavioral scientists. An old joke summarized the field this way: Two behaviorists walk into a bar. One says to the other, “You’re fine. How am I?”
And then came Turing’s work on abstract computing machines. In 1936, Turing published a paper in the Proceedings of the London Mathematical Society (“On Computable Numbers, with an Application to the Entscheidungsproblem”) that proposed a theoretical “machine”—a mathematical abstraction—that could in principle carry out any computable function. The “Turing machine,” as it came to be called, is a very simple system. It consists of (a) a tape containing symbols, usually blanks and slashes; (b) a scanner that reads the tape; and (c) four operations: move right, move left, write a slash, and erase a slash.
The crucial point is that what the scanner does at any given moment is fully determined by exactly two factors: the symbol it reads on the tape (input) and its current internal state. There is no “deus ex machina,” no external force or intelligence that tells it what to do. Yet despite its simplicity, this architecture yields a machine of enormous computational power. In fact, the Turing machine formed the theoretical basis on which the modern digital computer is built. If you love your computer, thank Alan Turing.
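The read-symbol-plus-internal-state idea can be sketched in a few lines of code. This is only an illustration, not Turing’s own formalism: it combines writing and moving into a single step, and the rule table below is a hypothetical “unary successor” program that appends one slash to a run of slashes.

```python
# A minimal Turing-machine sketch using the blank/slash alphabet
# described above. What happens each step depends on exactly two
# things: the symbol under the scanner and the current state.

def run_turing_machine(tape, rules, state, halt_state, pos=0):
    """Repeatedly read the symbol under the scanner, then look up
    (state, symbol) in the rule table to decide what to write, which
    way to move, and which internal state to enter next."""
    tape = dict(enumerate(tape))           # sparse tape; missing cells are blank
    while state != halt_state:
        symbol = tape.get(pos, ' ')
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
    cells = [tape.get(i, ' ') for i in range(min(tape), max(tape) + 1)]
    return ''.join(cells).strip()

# Hypothetical rule table: scan right past the slashes,
# write one more slash on the first blank, then halt.
successor_rules = {
    ('scan', '/'): ('/', 'R', 'scan'),
    ('scan', ' '): ('/', 'R', 'done'),
}

print(run_turing_machine('///', successor_rules, 'scan', 'done'))  # → ////
```

Note that nothing outside the rule table tells the machine what to do: the table plus the tape fully determine its behavior.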
How Turing’s Machine Brought Us Your Laptop and Tablet Computer
Here is how we got there: In the 1940s, a number of major developments in our understanding of intelligent behavior came about, all of which could be traced back to Turing’s “machine.” First, mathematician Claude Shannon described how information could be represented as binary choices among alternatives. The amount of information transmitted through a channel (e.g., a telephone wire) could be measured in bits, or binary digits, where one bit represents a choice between two equally probable alternatives. This made it possible to quantify the concept of information and, more importantly, showed how electronic circuits could carry out Boolean logic. In Boole’s system, propositions (sentences) can be represented as binary truth values (true–false). Electromechanical relays likewise allow only two states: a circuit is either closed or open, on or off. Because of the binary nature of the two systems, electronic circuits could be used to simulate the logical operations of the propositional calculus. In other words, inference (logical thinking) could be automated.
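Both of Shannon’s points can be made concrete in a few lines. The functions below are illustrative sketches, not Shannon’s own notation: `bits` computes the log2 measure of information, and `series`/`parallel` mimic how relay circuits realize Boolean AND and OR.

```python
import math

# Shannon's measure: a choice among n equally likely alternatives
# carries log2(n) bits of information.
def bits(n_alternatives):
    return math.log2(n_alternatives)

print(bits(2))   # one coin flip carries 1.0 bit
print(bits(8))   # one symbol out of eight carries 3.0 bits

# Boole's logic in circuit terms: treat a closed relay as True and an
# open relay as False, and logical operations fall out of the wiring.
def series(a, b):
    """Two relays in series: current flows only if both are closed (AND)."""
    return a and b

def parallel(a, b):
    """Two relays in parallel: current flows if either is closed (OR)."""
    return a or b

print(series(True, False))    # → False
print(parallel(True, False))  # → True
```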
Second, mathematician John von Neumann developed a theory of artificial automata that process information and internally represent rules and instructions. The rules allowed machines to choose which action to execute based on which inputs were received. The first digital computers (the ENIAC and EDVAC) were designed on the basis of these insights: machines that could think. Our everyday lives today consist of seamless interactions with computer hardware and software. That is how profoundly Turing’s insight has impacted our lives.
How Turing’s Machine Brought Us Modern Psychology
During this same time, mathematician and philosopher Norbert Wiener and colleagues developed servomechanisms, machines that correct themselves by computing the difference between a goal state and the current state and carrying out operations to reduce that difference. The result is machines whose behavior can be described as purposive and goal-directed. It didn’t take long for scientists to begin asking this: If machines can be described as purposive and goal-seeking rather than as mindlessly responding to stimuli, why couldn’t animals be described this way? And if the scientific study of machine intelligence was chock-full of descriptions of internal states that constituted information-processing pathways, how could studying internal human states in the same way possibly be considered unscientific?
In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts closed the gap between human and machine by proposing that, since neurons also operate as binary units (either they fire or they do not), they could be thought of as logical units carrying information. Essentially, McCulloch and Pitts showed that the brain could be understood as a Turing machine: Patterns of neural firing constitute thoughts and thinking. In other words, the mind is what the brain does; thinking is to the brain what a program is to a computer.
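A McCulloch–Pitts unit is easy to sketch: the neuron fires (outputs 1) exactly when the weighted sum of its binary inputs reaches a threshold. The particular weights and thresholds below are the standard textbook choices for AND and OR, used here purely for illustration.

```python
# A McCulloch-Pitts neuron in miniature: a binary unit that fires
# when its summed input reaches threshold, and so behaves as a
# logical gate.

def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Fires only when both inputs fire.
    return mp_neuron((a, b), weights=(1, 1), threshold=2)

def OR(a, b):
    # Fires when at least one input fires.
    return mp_neuron((a, b), weights=(1, 1), threshold=1)

print(AND(1, 1), AND(1, 0))  # → 1 0
print(OR(1, 0), OR(0, 0))    # → 1 0
```

Chain enough such units together and you can, in principle, compute anything a logic circuit can, which is what licensed the brain-as-Turing-machine analogy.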
In 1960, psychologists George Miller, Eugene Galanter, and Karl Pribram published a book entitled “Plans and the Structure of Behavior” in which they called for a cybernetic approach to behavior. The idea was that humans should be viewed as active information processors, not as passive recipients that respond “reflexively” to the pushes and pulls of the environment. The gauntlet had been tossed at the feet of behaviorism, and the cognitive revolution began. In 1979, the Cognitive Science Society was established to foster interdisciplinary research on intelligent behavior among psychologists, computer scientists, mathematicians, linguists, philosophers, anthropologists, and neuroscientists. One shining example of such interdisciplinary research is the development of neural networks: computer systems consisting of a large number of highly interconnected processing elements (artificial “neurons”) that work together to solve specific problems, such as facial recognition, stock market analysis, medical diagnosis, and the perception required for self-driving cars. The irony is that they accomplish these feats in ways that hearken back to behaviorism. At their simplest, they learn by strengthening connections between representations of items or events that lead to solutions and weakening those that lead to error.
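The strengthen-on-success, weaken-on-error rule can be sketched with a single-layer perceptron, one of the earliest and simplest neural networks. Everything here is an illustrative assumption (the OR task, the learning rate, the epoch count); real networks are far larger, but the learning principle is the same.

```python
# A minimal perceptron: connections (weights) that contribute to a
# correct answer are left alone or strengthened; those that lead to
# error are adjusted in the opposite direction.

def train_perceptron(examples, n_inputs, rate=0.1, epochs=20):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            error = target - output          # 0 if correct, +1/-1 if wrong
            for i, x in enumerate(inputs):
                weights[i] += rate * error * x   # strengthen or weaken
            bias += rate * error
    return weights, bias

# Teach the network the OR function from examples alone.
or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_perceptron(or_examples, n_inputs=2)

def predict(inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

print([predict(x) for x, _ in or_examples])  # → [0, 1, 1, 1]
```

No rule for OR is ever programmed in; the behavior is shaped entirely by the consequences of the network’s responses, which is why the approach hearkens back to Thorndike and Skinner.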
Why the Pardon?
I began this article by stating that the Queen issued a royal pardon to Turing today, nearly 60 years after his death by suicide. His crime? Homosexuality. His sentence: chemical castration and custodial care. The result: Two years after his chemical castration, he committed suicide (at the age of 41) by eating an apple laced with cyanide. There are many lessons to be learned from Turing’s life, not the least of which is that benevolence and genius are too often misunderstood when they occur in those who are different from the majority.
Dr. Denise Cummins is a research psychologist and author of Good Thinking: Seven Powerful Ideas That Influence the Way We Think (Cambridge, 2012), as well as co-author of Minds, Brains, and Computers: An Historical Introduction to the Foundations of Cognitive Science (Blackwell, 2000).