About two years ago, Stephen Hawking said in a BBC interview that “the development of full artificial intelligence could spell the end of the human race.” Should we heed this warning and be afraid of artificial intelligence (AI), or should we be excited about the potential benefits that it could bring?
Notwithstanding Hawking’s gloomy predictions, I’m very excited about the development of AI, even though there is a long way to go before we can produce general AI. One of the main reasons for this excitement is that humans often make poor decisions, and that AI may help us to make better ones.
In psychology, there are many examples showing that human rationality is limited. We suffer from cognitive limitations such as limited attention and limited short-term memory. We suffer from biases such as the tendency to seek out evidence confirming our views and to ignore evidence that contradicts them. And, of course, we suffer from all sorts of prejudices.
Sadly, experts suffer from these limitations and biases too. These affect important decisions made in politics, finance, science and medicine (Gobet, 1997; Gobet, 2016). For example, many diagnostic errors are made in radiology and psychiatry, and political scientists fall victim to hindsight bias and overconfidence, among other biases. Psychologists are not immune. In a classic study of the confirmation bias, Mahoney (1977) asked reviewers of a psychology journal to evaluate a manuscript submitted for publication. Different versions of the manuscript were used: while the introduction and method sections were identical, the results were either consistent with the reviewer’s theoretical approach, inconsistent, mixed, or absent. Mahoney found that the reviewers were strongly biased against manuscripts reporting findings that did not fit their own theoretical views.
Artificial intelligence comes of age
Humans’ cognitive limitations are very serious, and their consequences substantial. I believe that AI offers the prospect of helping humans make better decisions. In specific domains, such as chess and the oriental game of Go, we know that, when facing complex problems, even top experts are far from optimal decision makers. We know this because computers now play far better than the best humans do.
Recently, there have been some stunning developments in AI research. Here are three examples.

In February 2011, IBM Watson trounced two human super-experts at the game show Jeopardy! In this game, which deals with general knowledge, contestants are given an answer and have to find the question. Playing well requires good natural language skills to understand the clues, which are sometimes subtle, access to vast knowledge, and the ability to process this knowledge efficiently to make correct inferences.

In February 2015, the journal Nature published an article describing an AI system developed by Google DeepMind that learned to play Atari video games by itself, without instruction (Mnih et al., 2015). It reached a high level in most of the games, and in 29 of the 49 games it performed better than human experts.

In March 2016, AlphaGo, another product of Google DeepMind, beat a top human Go grandmaster. Go had long been a challenge for AI research, given the complexity of the game. In addition, techniques that worked well in chess, based on look-ahead search, were unsuccessful in Go, which seemed to yield only to human intuition. As noted in a previous post, AlphaGo used a combination of pattern-recognition learning, reinforcement learning to tune its knowledge, and Monte Carlo search.
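To give a feel for the Monte Carlo idea behind AlphaGo’s search: the rough intuition is to evaluate a candidate move by playing many random games from the resulting position and counting how often you win. AlphaGo’s actual method (Monte Carlo tree search guided by neural networks) is far more sophisticated, but a minimal “flat” Monte Carlo sketch conveys the core principle. For illustration I use a toy Nim game (take one to three stones; whoever takes the last stone wins) rather than Go, and the function names are my own:

```python
import random

def rollout(pile: int) -> int:
    """Play out a Nim game (take 1-3 stones, taking the last stone wins)
    with uniformly random moves. Returns 1 if the player to move at the
    start of the rollout wins, else 0."""
    player = 0  # 0 = the player whose turn it is now
    while pile > 0:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return 1 if player == 0 else 0
        player = 1 - player

def best_move(pile: int, n_rollouts: int = 1000) -> int:
    """Flat Monte Carlo: estimate each legal move's win rate from random
    rollouts and return the move with the highest estimate."""
    scores = {}
    for move in range(1, min(3, pile) + 1):
        remaining = pile - move
        if remaining == 0:
            scores[move] = 1.0  # taking the last stone wins outright
        else:
            # The opponent moves next, so our win rate is one minus theirs.
            wins = sum(rollout(remaining) for _ in range(n_rollouts))
            scores[move] = 1.0 - wins / n_rollouts
    return max(scores, key=scores.get)
```

Even this crude version of the idea picks the mathematically correct move in small positions: with a pile of five stones, for example, the rollout statistics favour taking one stone, which is the optimal play.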
The skeptical philosophers
Philosophers have proposed many reasons why AI is impossible in principle, but I don’t think any of them are valid (for a discussion, see Russell & Norvig, 2009). For example, in a famous book called “What computers can’t do”, Hubert Dreyfus (1972) argued that computers could never play chess at a high level because they lack intuition. The victory of Deep Blue over world chess champion Garry Kasparov in 1997 showed that this prediction was incorrect. Nowadays, computers are so much better than humans at chess that matches between humans and computers are no longer held. And, as we have just seen, AlphaGo beat one of the best human players earlier this year at Go, a game where intuition is thought to matter more than calculation.
Costs and benefits
Obviously, there are dangers in the development of AI. While scenarios of machines enslaving humans, as in The Matrix, or Hawking’s prediction that machines will destroy the human race, are very unlikely, other risks are real. For example, machines might make mistakes even when acting with good intentions. They will make some occupations obsolete, with a risk of unemployment. And, of course, humans might use machines with evil intentions.
But I think that we face even greater dangers if we do not develop AI. We live in a dangerous world, with, for example, the threats of terrorism, international conflict and global warming. (Indeed, humans do not need AI to destroy themselves!) With these threats and many others, AI could help us make better and more rational decisions.
As president of the United States of America, would you rather have Hillary Clinton, Donald Trump or a super-intelligent computer? As a neutral observer, I would rather go with the computer!
This text is based on introductory comments made at the public debate Policy Provocations 2016, “Could artificial intelligence become a threat to mankind? Will we ever build machines that we can say are intelligent?”, 21 September 2016, The City of Liverpool College.
Dreyfus, H. L. (1972). What computers can't do: A critique of artificial reason. New York, NY: Harper & Row.
Gobet, F. (1997). Can Deep Blue™ make us happy? Reflections on human and artificial expertise. In R. Morris (Ed.), AAAI-97 Workshop: Deep Blue vs. Kasparov: The significance for artificial intelligence (pp. 20-23).
Gobet, F. (2016). Understanding expertise: A multidisciplinary approach. London: Palgrave.
Mahoney, M. J. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1, 161-175.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518, 529-533.
Russell, S. J., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd edition). Upper Saddle River, NJ: Prentice Hall.