In a recent post, I suggested that artificial sexual experiences in the future will rival, or even surpass, the real thing. The fear of enslavement by intelligent machines, and of machines that are smarter than us, has been a staple of science fiction since Isaac Asimov's day. Now one celebrity scientist, Stephen Hawking, is scared. Should the rest of us be?
Why Hawking is freaked out
Hawking's point of departure is a new movie, Transcendence, starring Johnny Depp. The film imagines machine versions of humans that express the full range of human emotions and wield analytical power surpassing the collective intelligence of everyone who has ever lived.
We expect science fiction to exaggerate, of course, but there is a sense in which our worst fears have already been realized. We live in an era where computers are already better than humans at chess, and even at playing Jeopardy!. Self-driving cars are safer than those controlled by human drivers. Planes are now mostly flown by computer programs, and the sequencing of aircraft in the sky as they await runways is a marvel of digital technology.
Computers read some medical scans better than doctors and are even better at testing scientific hypotheses than most scientists are. Computers are still shaky at playing three-dimensional sports like soccer but they are improving rapidly, thanks in part to the development of intelligent prosthetic limbs.
Closer to home, Google knows what I want to search for before I finish typing my request. Bots record my movements on the internet and follow me from site to site, suggesting products for me to buy. Some helpfully patrol the web looking for bargains tailored to my needs.
Technology is already a great deal smarter than any of us. Whether we must worry is largely a question of whether robots are well disposed towards humans, and whether that relationship is stable. This problem is well captured by the relationship between Jeeves, the butler, and Bertie Wooster, the employer, in the P.G. Wodehouse novels.
Bertie Wooster could always trust Jeeves and their relationship was certainly predictable and as stable as the British class system on which it was based. Even if Jeeves was smarter than Wooster and better at taking in relevant information from the environment, Wooster did not have to worry about having his position usurped.
Having a smart butler was an advantage, just as having a muscular gardener would be, but it was not a threat, because Wooster trusted Jeeves to use his intelligence for his master's benefit. Wooster no more worried about Jeeves subverting the social order than he worried about the gardener coming through the French windows to lay him low with a shovel blow to the head. There are no peasant revolutions in Wodehouse.
There is no equivalent of a rigid class system to keep robots in check, although Isaac Asimov suggested that robots should always be programmed in ways that ensured their efforts were directed to serve the good of humanity. That can get tricky, particularly in a future world where robots are designing themselves.
Hawking recognizes that future technologies may help us to eradicate longstanding human problems of war, disease, and poverty. Yet, he is wary of potential problems:
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, and developing weapons we cannot even understand,” Hawking frets.
In an era of high-speed trading, it might be argued that the first of these hurdles has already been cleared, and artificial intelligence is already very good at doing science. And who really understands what is happening at the frontiers of new weapons development?
Yet, the most basic question to ask about AI is not whether it is potentially dangerous but whether it is trustworthy.
Why we should really worry
Many science fiction dystopias conceive of robots rising up and wresting control away from human leaders. That is probably the last thing we need to worry about unless the robots are controlled by a malicious human intelligence.
There are many reasons why artificial intelligence could be threatening, but I would put accidents far above malice. Some futurists worry that tiny self-replicating entities could envelop the world, causing ecological collapse – a nanotechnology apocalypse known as the “grey goo” scenario.
It is not hard to imagine how computer viruses might infect so many control systems that financial markets froze up, the electrical grid went dark, and planes collided in midair over airports while self-driving cars plowed into walls. Don't even think about a robot performing heart surgery when its software has been compromised!