The incredible ability and flexibility of human intelligence have long been features we consider to set us apart from the rest of nature. While other animals certainly think, none can juggle abstract concepts, shape language into poetry or engage in significant self-reflection. But in recent years, a new player has appeared on the scene. With the advent of artificial intelligence (AI), we've had to start thinking differently about the nature of intelligence and even whether this rapidly evolving technology may one day challenge our intellectual supremacy.
Last month's triumph by Watson, the IBM supercomputer, on the game show Jeopardy probably raised as many questions as it answered. (This seems only appropriate for a show that requires all responses be given in the form of a question.) The majority of these had to do with the nature of Watson's intelligence: Did it understand the information it was analyzing in any real sense? How much could its probabilistic processes be equated with the way our own brains parse language? Was it a giant step forward in artificial intelligence or just a clever utilization of massive computing power? And perhaps most importantly, as human contestant Ken Jennings joked, were we welcoming the arrival of our new machine overlords?
For as long as we've reflected on the nature of the mind, intelligence has been at the forefront of our speculations. While we tend to ascribe certain functions exclusively to the human mind, we're usually willing to crack open the doors of our restricted club and acknowledge that certain other animals - primarily primates and cetaceans - also share a number of these traits.
Broadening our definition, we recognize some more limited types of intelligence in "lower" animals, plants and even single-cell organisms. But once we cross the line into inanimate matter, we often balk. We tell ourselves that machines may use algorithmic trickery to mimic some of the incredible feats our minds perform, but they're certainly not displaying intelligence.
Don't be so sure.
Over the years, computers have become ever more capable of performing tasks once thought exclusive to humans. In the late 1970s, the first commercial AI expert system was developed to help configure computer systems. It used more than 3,000 rules to configure more than ten different computer systems. By the late 1980s, expert systems were increasingly used in industry to solve routine problems. In the 1990s, chess-playing computers became increasingly common, calculating moves based on the game's well-defined rules. By 1997, IBM's Deep Blue supercomputer defeated then-world chess champion Garry Kasparov in a six-game match. The supercomputer applied the game's rules using brute-force methods, searching many moves ahead through enormous numbers of possible positions.
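To make the idea concrete, here is a minimal sketch of how a rule-based expert system works: rules fire whenever their conditions are met, adding new conclusions until nothing more can be derived. The rules and facts below are invented illustrations, not the actual 3,000-rule configuration system mentioned above.

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts,
    adding its conclusion, until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule fires when every condition is already a known fact.
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical configuration rules: (IF these facts, THEN conclude this).
rules = [
    ({"cpu:fast", "tasks:graphics"}, "need:large-memory"),
    ({"need:large-memory"}, "add:memory-board"),
    ({"add:memory-board"}, "check:power-supply"),
]

result = forward_chain({"cpu:fast", "tasks:graphics"}, rules)
print(sorted(result))
```

Even this toy version shows the "mechanical" character of such systems: every conclusion traces back, step by step, to a hand-written rule.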
But of course, these are all rule-based examples - systems that follow "mechanical" steps in order to achieve an outcome. We know there's far more to intelligence than that.
It can be argued that at least at one level, the brain is a highly capable pattern recognition machine. We can recognize the face of a friend in a crowd in an instant. Or isolate a single conversation in a noisy room. Or detect a pattern of behavior from a small set of clues. It's a phenomenal ability. One that can't be achieved by following a series of programmed If-Then statements.
But in recent decades, computers have become surprisingly good at this too. Using algorithms called "neural networks", machines can now read printed material with tremendous accuracy, transform spoken words into text with little speaker-specific training, and sort and retrieve pictures based on a desired image. Machine-based pattern recognition is even being used to identify suspicious activity in credit card transactions and areas related to national security. In narrow domains like these, machines recognize patterns at a scale and speed well beyond what our own minds can achieve.
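The key difference from rule-based systems is that these networks learn from examples rather than from programmed If-Then statements. A toy illustration, assuming made-up data points and a classic single-neuron learning rule (a perceptron), looks like this:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights that separate two classes of 2-D points from examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # nudge weights only when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Two invented clusters: label 1 near (1, 1), label 0 near (-1, -1).
samples = [((1.0, 1.2), 1), ((0.8, 1.0), 1),
           ((-1.0, -0.9), 0), ((-1.2, -1.1), 0)]
w, b = train_perceptron(samples)
print(classify(w, b, 0.9, 1.1))    # a new point near the first cluster -> 1
print(classify(w, b, -1.0, -1.0))  # a new point near the second cluster -> 0
```

No one wrote a rule saying where the boundary between the clusters lies; the system found it from the examples. Modern networks do the same thing with millions of weights instead of three.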
Okay, but the brain is more than a set of rules or a pattern recognition system, isn't it? Intelligence is often viewed as an extensive range of functions and abilities which combine to make up the complete mind. This makes sense from an evolutionary standpoint, since cells that aggregate and differentiate can eventually lead to discrete organs of increasing complexity. The many regions and functions of the brain suggest this is exactly how it evolved.
In artificial intelligence, Marvin Minsky's "society of mind" concept shares a similar view. In it, subprocesses called "agents" combine to eventually give rise to something that is greater than the sum of its parts. Features such as self-reflection and even consciousness could potentially emerge from such an assembly. This is an amazing quality in emergent systems; they resolve into processes and effects that cannot be fully predicted or even anticipated.
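The flavor of the idea can be sketched in a few lines: simple agents, each competent at only one narrow subtask, post their results to a shared workspace, and behavior emerges that no single agent implements. The agents and "blackboard" below are illustrative inventions of mine, not Minsky's actual architecture.

```python
def see_agent(blackboard):
    # "Perceives" the raw input and posts a feature for others to use.
    if "shape:round" in blackboard["input"]:
        blackboard["features"].add("round")

def grasp_agent(blackboard):
    # Acts only once another agent has posted the feature it needs.
    if "round" in blackboard["features"] and not blackboard["actions"]:
        blackboard["actions"].append("grasp with curved grip")

def run_society(agents, blackboard, rounds=2):
    # Each round, every agent gets a chance to react to the shared state.
    for _ in range(rounds):
        for agent in agents:
            agent(blackboard)
    return blackboard

board = {"input": {"shape:round"}, "features": set(), "actions": []}
run_society([see_agent, grasp_agent], board)
print(board["actions"])
```

Neither agent "knows" how to see an object and grasp it; the combined behavior arises only from their interaction, which is the point of the society-of-mind picture.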
But for some, this still isn't enough to confer the elusive label of intelligence. After all, surely you need biological processes in order to be truly intelligent?
Perhaps and perhaps not. But in the event the phenomenon we call intelligence does need something that at least approximates the biology of neurons, then neuromorphic engineering may one day lead to it. Neuromorphic engineering is a developing field that seeks to build biologically inspired microprocessors and other hardware that emulate neural functions. It's yet another approach in the effort to advance machine intelligence.
On top of all of this, computer processing continues to improve at an astonishing pace. Moore's Law (named for Intel co-founder Gordon Moore) states that the number of components that can be placed on a processor doubles every one to two years, resulting in a corresponding increase in computer processing power and reduction in cost. Computer storage and memory follow a similar, somewhat steeper curve of exponential improvement. The result is that our ability to build ever more powerful computers with increasing abilities will probably go on for quite some time.
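It's worth pausing on what exponential doubling actually means. Taking an 18-month doubling period (a middle estimate between the one- and two-year figures above), a quick back-of-the-envelope calculation shows the compounding:

```python
def moore_growth(years, doubling_period_years=1.5):
    """Growth factor after `years` of doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Roughly a hundredfold increase in a single decade.
print(round(moore_growth(10)))
```

At that rate, a decade of progress multiplies capacity by about a hundred, and two decades by about ten thousand, which is why even "years or decades away" predictions about machine intelligence can't simply be waved off.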
Taking all of this together, machine intelligence that is equal or superior to human intelligence seems likely, if not inevitable. While we're still years or decades away from this, the evolution of machine intelligence will have occurred in an eye blink compared to the glacial pace by which biological evolution arrived at our own incredible minds. And even if it doesn't occur, continuing improvements in AI will offer us significant opportunities to study and reflect on the nature of our own intelligence.
In coming dispatches of "The Intelligence Report," I'll be exploring these ideas in greater detail, considering their personal and social ramifications and offering other thoughts about the future of new intelligences. In case you think this is all about machines, don't worry. I'll also be writing about emerging technologies that could impact human intelligence, including intelligence amplification, neural prosthetics and brain-computer interfaces.
And just like Jeopardy, I suspect it will raise as many questions as it answers.