I predict that the release of Siri on the iPhone 4S will someday be considered a milestone in the history of artificial intelligence. Not because it is some kind of new advance in the Turing Test, but because it puts artificial voice-responding agents into the hands of the general consumer, and that's pretty cool. I can't imagine anyone more excited about this day than Douglas Hofstadter circa 1979. Hofstadter's book Gödel, Escher, Bach, published that year, is a masterpiece, weaving together computer science, math, Zen Buddhism, music, and art through dialogues, invented computer languages, and elegant prose.
The book is still awesome. It also functions as a cultural time capsule. It was written right after the first explosion of personal computers, around the time of the Apple II and the Intel 8086.
Near the end of the book, Hofstadter asks ten questions about the future of artificial intelligence and gives his own guesses about their eventual answers. For this post, I take a look at those questions from the perspective of 32 years later.
Short answer: we still don't really have satisfying answers to most of these questions. Despite Siri, and despite Watson's performance on Jeopardy earlier this year, we have gotten, charitably, less than 5% of the way towards a computer that can pass the Turing test (perhaps a lot less than that). And that makes reading the book today kind of a bummer. Siri is probably what Hofstadter '79 would have predicted for 1984, not 2011.
Still, 32 years is a long time, and it's fascinating (for me at least) to reconsider these questions and the mindset that produced them. Herewith, Hofstadter's questions, his predictions, and my comments.
1. "Will a computer program ever write beautiful music?"
Hofstadter's Answer: "Yes, but not soon." "It would have to understand the joy and loneliness of a chilly night wind."
My comment: I dunno, I feel like a computer could write beautiful music without understanding anything at all. Then again, don't ask me. I think that Brian Eno's "Music for Airports" (released in 1978) is beautiful, and it totally reminds me of the joy and loneliness of a chilly night wind. And it was written at least partly by a deterministic mechanical process. I think so much of beauty is added by the beholder that it wouldn't be hard for a computer to write music that lots of people would consider beautiful. So I think Hofstadter missed that.
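The "deterministic mechanical process" point is worth making concrete. Music for Airports was famously built from tape loops of different lengths drifting in and out of phase; here's a toy sketch of that idea (the notes and loop periods below are my own illustrative inventions, not Eno's actual tape lengths):

```python
# A minimal, purely deterministic "ambient music" generator: each (note,
# period) pair is a hypothetical tape loop that sounds its note every
# `period` seconds. Because the periods are incommensurate, the combined
# pattern drifts and never quite repeats on a human timescale.

loops = [("Ab", 17.8), ("C", 20.1), ("Db", 31.8), ("F", 19.6)]

def events(loops, duration):
    """Return a time-sorted list of (time, note) onsets within `duration` seconds."""
    onsets = []
    for note, period in loops:
        t = 0.0
        while t < duration:
            onsets.append((round(t, 1), note))
            t += period
    return sorted(onsets)

# Print the first minute of the "piece".
for t, note in events(loops, 60):
    print(f"{t:6.1f}s  {note}")
```

No randomness, no understanding, no chilly night wind — and yet phasing processes like this are exactly how a lot of beautiful-sounding ambient and minimalist music gets made.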
2. "Will emotions be explicitly programmed into a machine?"
Hofstadter's Answer: "No. That is ridiculous." He implies that computers won't be programmed to fall in love, but will do so anyway. A charming thought!
My comment: I don't think it's ridiculous to program emotion in. Lots of contemporary neuroscience emphasizes the relative discreteness of emotions from other aspects of cognition (and lots emphasizes their integrated nature too, of course). I am personally somewhat skeptical of these ideas, but I don't think they are ridiculous.
3. "Will a thinking computer be able to add fast?"
Hofstadter's Answer: "Perhaps not." It can't be allowed access to computational circuits, "otherwise it'll get addle-CPU'd."
My comment: I'm going to go out on a limb and say that any computer that passes the Turing test will be able to add REALLY fast. Maybe it can't access its own computational circuits (although I don't buy the 'addle-CPU'd' argument), but it will have access to other circuits that can add fast. I think this is just a 70's failure of imagination - a computer with its own computer embedded within it was probably a weird idea then - now that pocket calculators are like a nickel, it's not weird at all.
4. "Will there be chess programs that can beat anyone?"
Hofstadter's Answer: "No. There may be programs which can beat anyone at chess, but they will not be exclusively chess players."
My comment: Hofstadter's biggest failure was his boldest prediction. Thing is, I doubt he was the only one who believed this at the time. Still kind of shocking that people in the 70's couldn't imagine that chess was algorithmic enough to routinize. Or maybe they just couldn't imagine the full implications of Moore's Law plus 20 years.
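"Algorithmic enough to routinize" turned out to mean: game-tree search plus a scoring function plus fast hardware. Here's a minimal sketch of the core algorithm, minimax with alpha-beta pruning; the `moves`, `apply_move`, and `score` callbacks are illustrative placeholders, not any real engine's code:

```python
# Minimax with alpha-beta pruning: the algorithmic core behind chess
# programs like Deep Blue (which added massive search speed and a
# hand-tuned evaluation function, but no general intelligence).

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, score):
    """Search the game tree `depth` plies deep and return the best achievable score.

    `moves(state)` lists legal moves, `apply_move(state, m)` returns the
    successor state, and `score(state)` is a heuristic evaluation.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, score))
            alpha = max(alpha, best)
            if beta <= alpha:   # opponent will never allow this line: prune
                break
        return best
    else:
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, moves, apply_move, score))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Toy demo: a "game" where each move adds 1 or 2 to a counter, players
# alternate, and a higher final counter favors the maximizer.
best = alphabeta(0, 3, float("-inf"), float("inf"), True,
                 moves=lambda s: [1, 2],
                 apply_move=lambda s, m: s + m,
                 score=lambda s: s)
print(best)
```

The whole thing is a couple dozen lines; what made it beat Kasparov was searching hundreds of millions of positions per second with a carefully tuned `score`. That's the part that was hard to imagine in the 70's.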
5. "Will there be special locations in memory which store parameters governing the behavior of the program, such that if you reached in and changed them, you would be able to make the program smarter or stupider or more creative or more interested in baseball?"
Hofstadter's Answer: "No." "There will be no 'magic' location in memory where, for instance, the 'IQ' of the program sits."
My comment: Since we haven't gotten Turing Test passers yet, we are still forced to guess here. But I think, if we ever get to a point where we are smart enough to make a program that can pass the Turing test, then we can use those same insights to modify the program in certain ways, like making it more interested in baseball. So I think there will be a smartness knob, but the proximal mechanism by which it works will be complicated and messy. That's just my gut feeling though.
6. "Could you 'tune' an AI program to act like me, or like you - or halfway between us?"
Hofstadter's Answer: "No."
My comment: I am more inclined to agree with DRH here. Not unless there are giant intellectual leaps made in Personality Psychology.
7. "Will there be a 'heart' to an AI program, or will it simply consist of 'senseless loops and sequences of trivial operations'?"
Hofstadter's Answer: "When we create a program that passes the Turing test, we will see a 'heart' even though we know it's not there."
My comment: Duh. A home run for DRH.
8. "Will AI programs ever become 'superintelligent'?"
Hofstadter's Answer: "I don't know." "The idea of superintelligence is very strange." Touchingly, Hofstadter expresses hope that AI programs are themselves curious about AI.
My comment: OK, obviously, the reason he doesn't know the answer is that superintelligence isn't defined. I'm guessing he's answering a question he got from his friends and audiences at his talks, and I'm guessing those people were thinking of HAL. And as long as we are speculating, I am guessing that it was really all about people feeling insecure. They were really asking, "Will computers make me feel dumb?" The answer to that is no! No one in 2011 feels outclassed by their computer. Computers are our friends!
9. "You seem to be saying that AI programs will be virtually identical to people. Won't there be any differences?"
Hofstadter's Answer: Without a body, it would have a different perspective on what is interesting and important. "My guess is that any AI program would, if comprehensible to us, seem pretty alien."
My comment: I can't decide on this one. Couldn't we just program it to not seem alien? Hofstadter's writing feels suffused with a kind of Strong Emergentism: he writes as if computers will suddenly attain sentience and we'll have no idea how it happened. From my perspective today, this seems unlikely. There will be intermediate steps, and we will have a lot of control over them. Maybe that's because it's easier now for us to imagine a computer that can pass the Turing Test, and how it would work, even if we aren't much closer to having one.
10. "Will we understand what intelligence and consciousness and free will and 'I' are when we have made an intelligent program?"
Hofstadter's Answer: "Sort of." "No one will ever understand the mysteries of intelligence and consciousness in an intuitive way."
My comment: Uh... I'm going to wait and see what Ben Hayden 2043 has to say about this.
(Assuming he hasn't been enslaved by robots in the Singularity Event of 2039.)