Empathy and Its Discontents
If we're strangers to ourselves, are we androids to other people?
Posted July 26, 2012
When Apple announced an upgrade to its mobile operating system, we also learned about some improvements to Siri, the iPhone's personal assistant. Siri was one of the most anticipated features of the latest iPhone, and so far she has succeeded mostly in gimmickry, if not wizardry. She has apparently gotten smarter and will now be able to give us sports scores, tell us what's playing at the local theaters, and launch our apps. Siri is likely to continue to both please and disappoint. She will have new lines, firing off comebacks to oddball questions and letting loose with an apparent sense of humor. She is also bound to keep failing in one important way: she will continue to lack empathy. Of course, Siri isn't the first form of artificial intelligence to let us down.
ELIZA was developed in 1966 by Joseph Weizenbaum, a computer scientist at MIT. Named after Eliza Doolittle in George Bernard Shaw's play Pygmalion, ELIZA parodied a Rogerian therapist. She would introduce herself by asking, "What is your problem?" and respond with such banalities as, "Can you tell me more about that?" Those prompts drew further elaboration from the user, creating a simulated sense of being listened to and empathized with. Under the hood, simple pattern matching mapped common user statements to pre-scripted responses. Weizenbaum, who later became disenchanted with artificial intelligence, never imagined ELIZA as a replacement for human empathy, and her users (or patients) quickly became bored with her mechanical pleasantries. Still, ELIZA was a hit among early computer users: a demonstration of computing magic, and early proof of our fondness for being listened to by a machine.
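To make the mechanism concrete, here is a minimal sketch of ELIZA-style keyword matching in Python. It illustrates the general technique only, not Weizenbaum's original program (which was written in MAD-SLIP and also reflected pronouns, turning "my" into "your"); the rules and replies below are invented for the example.

```python
# A minimal ELIZA-style responder: regex rules map keywords in the user's
# input to canned, Rogerian-sounding replies, echoing back a captured
# fragment of the user's own words. Rules and phrasing are illustrative.
import re
import random

# Each rule pairs a pattern with reply templates; "{0}" is filled with
# the fragment captured from the user's sentence.
RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
    (re.compile(r"\bmy (mother|father)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]

# Content-free fallbacks mimic ELIZA's prompts when no keyword matches.
FALLBACKS = ["Can you tell me more about that?", "Please go on.", "I see."]

def respond(user_input: str) -> str:
    """Return a canned, Rogerian-style reply to one line of user input."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = match.group(1).rstrip(".!?")  # echo user's words back
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

print(respond("I am lonely"))       # e.g. "How long have you been lonely?"
print(respond("It just happened"))  # e.g. "Can you tell me more about that?"
```

Even this toy version shows why the illusion worked and why it wore thin: the reflected fragments feel attentive on first contact, but the machine never understands a word of what it mirrors.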
Androids, robots, and automatons have been characters, if not themes, in science fiction (indeed, in fiction generally) arguably since Mary Shelley's Frankenstein. Some novels use projection as a device to look at aspects of our humanity that are difficult to bear; others, notably Isaac Asimov's collection I, Robot, ask moral and ethical questions about the role of technology in our lives. What is most interesting is how bewitched we are by the idea that an android, a robot, or a talking telephone might be able to listen to our heart, read our mind, or somehow know our true intentions.

In his novel Do Androids Dream of Electric Sheep?, Philip K. Dick examines empathy among humans and androids as both social currency and political force. In his post-apocalyptic world, nuclear war has blanketed the Earth with radiation. Androids begin inhabiting the planet, causing unrest among the humans who remain. A new religion, Mercerism, is born, centered on the supposedly distinct human capacity for empathy: people are led to believe that empathy is what separates humans from androids. Human bounty hunters use a modified Turing Test, the Voigt-Kampff Test, to sort out who is human and who is android, and the survivors on Earth use an "empathy box" to commune with the rest of humanity. Ultimately, however, Mercerism is exposed as a fraud; the androids show uncanny flashes of empathy, and the human survivors lose theirs. While it is eerie to consider androids developing empathy, nothing seems more uncanny than a human being who lacks it.
The science of empathy tells a similar story. Whatever else we are, we are deeply social and empathic creatures. We have a social brain with perceptual, affective, and empathic systems, wired to observe and experience the emotional states of other people. Such processing of emotion is both unconscious and embodied. When robotics professor Masahiro Mori formulated the hypothesis of the uncanny valley, it appeared as if a glitch in that wiring had been discovered. As a simulated human grows more like a real one, we respond with greater affinity, or positive responsiveness. At a certain point, however, the near-likeness produces a negative response: a feeling of uncanniness. When graphed, this colossal dip in positive responsiveness is the uncanny valley. One need only watch the animated film The Polar Express to experience the emotion firsthand.
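Mori's original paper presented the valley as a qualitative sketch rather than measured data, but the shape is easy to render. The short Python script below plots an invented curve with the hypothesized features (affinity rising with human likeness, collapsing near full likeness, then recovering); the function and its parameters are assumptions chosen purely for illustration, not Mori's figures.

```python
# A schematic rendering of the uncanny valley: affinity rises with human
# likeness, plunges sharply near full likeness, then recovers. The curve
# (linear trend minus a Gaussian dip near ~85% likeness) is invented for
# illustration only.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 400)  # 0 = industrial robot, 1 = healthy human
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

plt.plot(likeness, affinity)
plt.axhline(0.0, linewidth=0.5)  # below this line, affinity turns to unease
plt.xlabel("human likeness")
plt.ylabel("affinity (positive responsiveness)")
plt.title("Uncanny valley (schematic)")
plt.show()
```

The instructive feature is the asymmetry: a figure at 60 percent likeness charms us, while one at 85 percent unsettles us more than either, which is exactly the region where near-photorealistic animation tends to land.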

Of course, the uncanny valley does not represent a glitch; it is most likely an adaptation. We are wired to notice the differences between ourselves and other people, and we update that information constantly and unconsciously. Recent fMRI research has pointed to substantial activation in the parietal cortex when subjects were presented with uncanny androids. In particular, the region that processes bodily movements and contains mirror neurons showed the highest level of activation, suggesting a neural basis for the dissonance between an android's human-like appearance and its motor movements.

The discovery of mirror neurons in 1996 by Giacomo Rizzolatti and his colleagues at the University of Parma has opened promising avenues for research. Studying monkeys, they found that neurons involved in motor control became active when the monkeys engaged in specific forms of grasping, such as picking up a peanut. They then found that about 20 percent of those neurons fired again when the monkeys merely watched another monkey perform similar movements. They called these mirror neurons for their presumed imitative function. Similar neurons are hypothesized to exist in the parts of the human brain concerned with social cognition, such as the inferior frontal gyrus, the temporo-parietal junction, and the inferior parietal lobule. Since these regions are central to social cognition, it stands to reason that mirror neurons may be a key neural component of empathy, though more research is needed to substantiate this.
Can we learn something about empathy from its absence? Psychologists and neuroscientists distinguish between cognitive empathy and affective empathy. Individuals with autism spectrum disorders have been viewed as having deficits in cognitive empathy, or lacking a theory of mind: the capacity to imagine how another person might think, particularly in social situations. The simplistic view is that people with autism struggle to perceive interpersonal situations from another person's perspective. That view has drawn criticism, however, and it may not be particularly helpful to frame autism in terms of empathy deficits. Less controversial is the view that psychopathic individuals lack affective empathy: they have deficits in feeling, or resonating with, what another person feels. This deficit forms a core feature of antisocial personality disorder. Individuals who entirely lack fellow feeling are thought to be deceptive, manipulative, narcissistic, and potentially dangerous.
We are social creatures, attuned to the affective architecture of other people and disposed to recognize emotional fraud. In the end, the frustration we feel with our devices may reflect the frustration we feel with one another. If we are, as Freud might say, strangers to ourselves, then we are surely androids to other people at least some of the time. Perhaps we should be more patient with Siri.