Over-Simulated

Staying human in a post-human world

Jeopardy! gets a computer champion: Does it put our humanity in the form of a question?

Human vs. machine? Jeopardy gets computer champ, humans still win

A really interesting techno-cultural milestone is about to be passed; an IBM supercomputer named Watson will soon be crowned champion of our favorite TV trivia game where your response has to be in the form of a question. That's right, a cleverly programmed and very powerful machine is on the cusp of becoming Jeopardy! champion.

What makes this worth the hype in the following videos (hype also found in Sunday's NY Times Magazine article by Clive Thompson, "Smarter Than You Think: What Is I.B.M.’s Watson?", and on the Singularity Hub) is the truly deep psychological challenge presented by the often clever, even pun-y, clues Alex Trebek reads to contestants.

[watch video at http://www.youtube.com/watch?v=3e22ufcqfTs]

In other words, being able to respond correctly to a "natural language" query is no small task for a computer to accomplish. IBM is well within its rights to crow about the achievement.

[watch video at http://www.youtube.com/watch?v=FC3IryWr4c8]

Let's look closer. One step in determining a correct response is the very "computer-y" process of accessing and searching a huge database of cultural knowledge. This is a step in machine question-answering with which we all have experience, one we even take for granted (you have used that Google thing, haven't you?). But our experience with computers trying to simulate an actual conversation, what the IBM spokesperson calls "a question-answering system," is very different. I'm pretty sure we've all experienced what terrible conversationalists computers are. Consider what frequently happens when you call an insurance company or utility and get one of those infernal voice-only telephone response systems; they are terrible. That is what makes IBM's achievement so amazing. It really is a triumph (and will probably one day make the company a tremendous amount of money).

But, as one of the researchers said in the second video, is it really fair to say Watson is "capable of understanding your question"? I don't think so. Isn't it more accurate to say the computer successfully simulated understanding your question? This is neither a semantic trick nor an exercise in academic wordiness. It changes the meaning of Watson's championship from one more piece of lost human uniqueness into a celebration of a fascinating human-made technology that just might be put to human purposes. In other words, feeling the awe this technology deserves is possible only if we resist its capacity for simulation entrapment, i.e., getting so caught up in the technologically mediated simulation that you forget you're interacting with a machine.

While the best way to enjoy Watson's win is to be both in it and out of it at the same time, doing so is not easy. People gravitate either towards the entrapment of seeing Watson "understanding" in the same way we understand language or towards dismissing the entire event. Part of the problem is that our psychology leads us to see human qualities like understanding whenever possible; we're tuned to experience empathy.

Fritz Heider and Marianne Simmel were mid-century psychologists who asked subjects to watch the following film:

[watch video at http://www.youtube.com/watch?v=76p64j3H1Ng]

Like I'm pretty sure you just did, their subjects saw a story, complete with intentions, attraction, and maybe even feelings of love. We saw that little triangle have feelings for the little circle even though we know it is impossible; it's just a geometric shape made to move in a particular pattern by the film-maker. But we see humanity nonetheless. The same thing happens with Watson. Just like the Heider-Simmel triangles, Watson looks like it's "understanding," even "playing," when all it is really doing is quickly (really, really quickly) obeying a set of mathematical instructions put there by a team of really smart people.

When we watch Watson respond correctly in the form of a question we attribute human qualities—such as understanding—not because the machine is human-like but because we are human.

Let me tell a quick story illustrating how not everything that looks like understanding is understanding. Twenty-five years ago I worked on an inpatient unit with college-age schizophrenics. I was giving a series of psychological tests to a young man tragically going through his first psychotic break with reality. He was chaotic and confused. During a test of intellectual capacities I asked him a question he should have failed, because he had gotten the previous, easier items wrong: "A man drives 275 miles in 5 hours; how fast was he going in miles per hour?" To my great surprise he quickly and correctly replied "55." He should not have been able to understand the question or do the division involved. So I asked him how he figured out the answer, and he got angry. He said, "My father, my father, my father is a good man, a good man, he always drives the speed limit."

Can we say he understood the problem and the arithmetic required for its solution? I don't think so. The process was just too different. And the same goes for Watson responding to a Jeopardy! clue. It works by statistically associating the co-occurrence of linguistic terms across a vast database. It's a really, really clever way to simulate natural language understanding. Bravo to the programmers! But Watson does not "understand" the clues any more than that suffering young man understood the arithmetic problem with which he was presented.
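To make the distinction concrete, here is a deliberately tiny sketch of what "statistically associating the co-occurrence of linguistic terms" can look like in code. It is nothing like IBM's actual DeepQA system; the corpus, candidate list, and scoring rule are all hypothetical, chosen only to show how a program can pick the right response without understanding a word of it.

```python
# A toy illustration (NOT IBM's DeepQA): rank candidate responses by how often
# they co-occur with the clue's words in a tiny, made-up "database" of sentences.
# Everything here (corpus, candidates, scoring) is hypothetical.

toy_corpus = [
    "hamlet is a tragedy by william shakespeare set in denmark",
    "macbeth is a tragedy by william shakespeare set in scotland",
    "war and peace is a novel by leo tolstoy set in russia",
]

def cooccurrence_score(candidate, clue_words):
    """Count clue words that appear in sentences also mentioning the candidate."""
    score = 0
    for sentence in toy_corpus:
        words = set(sentence.split())
        if candidate in words:
            score += len(clue_words & words)
    return score

def respond(clue, candidates):
    clue_words = set(clue.lower().split())
    best = max(candidates, key=lambda c: cooccurrence_score(c, clue_words))
    return f"What is {best.title()}?"  # Jeopardy! style: respond in the form of a question

print(respond("This Shakespeare tragedy is set in Denmark",
              ["hamlet", "macbeth", "tolstoy"]))
# Prints: What is Hamlet?
```

Run it and it responds "What is Hamlet?", not because it knows anything about Shakespeare or Denmark, but because those words happen to sit near each other in its data. That, roughly, is the difference between association and understanding.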

When we become entrapped by the simulation and ignore what we know about the processes involved in something like Watson winning at Jeopardy!, we diminish our shared humanity. Incorrectly attributing to a machine a rich inner life like ours, whether it be understanding or love or longing, does no favors for the machine, the programmers, or any of us. Happily, that is not our only option. The other possibility is to embrace the differences, thereby letting the human achievement that is a technology like Watson enhance our humanity.

If we're going to live in a world in which we're forced to talk with cost-saving customer service computers instead of other people, they should at least work as well as Watson. But we shouldn't let ourselves become so enthralled by the experience that we lose sight of where machines stop and people begin. Vive la différence!

Todd Essig, Ph.D., is a training and supervising psychoanalyst at the William Alanson White Institute with a clinical practice treating individuals and couples.
