The Emotional Machine and You
How will we change as technology learns to communicate with our emotions?
Posted Mar 07, 2014
“Affective computing” is the branch of computer science that involves the reading, interpretation, replication and sometimes manipulation of human emotions. Pattern recognition algorithms have advanced to the stage where they can now tell if you’re happy, sad, angry, etc., based solely on visual and audio cues – even if you’re trying to hide what you’re feeling. The fact is, we human beings communicate a tremendous amount at a nonverbal level. Our expressions, gestures, vocal inflection, posture and gait all say something to those around us, often in stark contradiction with our spoken words. Yet these channels of communication have been entirely unavailable to our computers. Until now.
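At its simplest, this kind of emotion recognition is a pattern-classification problem: reduce a face or voice to a handful of measured features, then ask which known emotional state those measurements most resemble. The toy sketch below illustrates the idea with a nearest-centroid classifier; the feature names, numbers, and labels are all invented for demonstration, and real affective-computing systems use far richer features and far more sophisticated models.

```python
# Illustrative sketch only: a toy nearest-centroid classifier standing in
# for the sophisticated pattern-recognition models real systems use.
# Features and labeled examples below are invented for demonstration.
import math

# Hypothetical training data: (mouth_curvature, brow_height, voice_pitch_variation)
TRAINING = {
    "happy": [(0.8, 0.6, 0.7), (0.9, 0.5, 0.6)],
    "sad":   [(-0.7, -0.4, 0.2), (-0.6, -0.5, 0.1)],
    "angry": [(-0.3, -0.8, 0.9), (-0.2, -0.9, 0.8)],
}

def centroid(points):
    """Average each feature dimension across a label's examples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(features):
    """Return the emotion label whose centroid lies nearest the input."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify((0.85, 0.55, 0.65)))  # near the "happy" examples -> "happy"
```

The point of the sketch is that nothing here requires intelligence in any deep sense: the program simply measures distances between numbers, yet that is enough to sort expressions into emotional categories.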
Affective computing is increasingly being used in marketing, education and behavioral therapy and will in time become a common interface for social networks and gaming platforms. (See my recent article, “How Your Computer Will Read You Like A Book--And Then Sell You Stuff” for more on this.) But as this technology becomes more nuanced and adept, it seems likely many people will increasingly connect with these devices on an emotional level. After all, we often anthropomorphize nonhuman, even nonliving entities, treating them as though they were, at least in part, human. Once such objects become easier to identify with, won’t we be inclined to bond with them in ways we couldn’t before?
This raises many questions and concerns. Our basic emotions evolved long before our more rational functions. As a result, our decision-making processes can often be short-circuited or negated by our feelings. We respond rashly to certain situations that trigger our limbic system, activating fight-or-flight behavior that can be poorly suited to modern situations. Too often, we decide with the heart when the head could choose so much better. Unfortunately, our evolution leaves us vulnerable to this emerging technology in ways we could never be with another human being.
Let me be clear: AI – artificial intelligence – is wicked hard. It has challenged computer scientists for the better part of a century. But it is advancing rapidly. There’s a tendency in fictional works such as “Her” to establish the created entity as having both near-human-level intelligence and a capacity for acquiring emotion. Often this character will fall just a little outside human norms in order to support a theme of human uniqueness. But that is fiction. A system designed to manipulate emotions doesn’t need any more intellectual capacity than an insect, and it most certainly doesn’t need to be sapient or even sentient. Given our evolutionarily developed responses to emotion, we would be extremely vulnerable to such a program. A baby’s cry, an adversary’s rage, a lover’s glance – all of these elicit strong, very specific responses, responses that quickly disconnect us from our more rational faculties and could be used to direct our behavior.
It’s difficult to see how this can be adequately controlled simply through government regulation. Yet without a solution we leave ourselves open to systematic and pervasive manipulation. Will it be necessary to develop filtering systems or methods for otherwise canceling out this emotional targeting? More importantly, can this be done without sacrificing one of the essential things that makes us human?