
The Emotional Life of Intelligent Machines

What single question do I most frequently get asked about Emotion AI?


It’s been a little over a year since the publication of my book, Heart of the Machine, an in-depth examination of the potential technical and social repercussions of affective computing, a branch of computer science focused on systems that can read, interpret, replicate and otherwise interact with our emotions. In that time, I’ve had the opportunity to speak about this exciting emerging technology at festivals, public readings, conferences, think tanks and, of course, bookstores. In all this interaction with readers and the public, one question has come up far more often than any other: “Does this mean these devices, computers or robots can experience emotions like we do?”

Though this is explicitly addressed in the book, I always answer with an unequivocal “No, it does not.” If there is time, I’ll even explain that while these machines and programs may one day experience something akin to human emotions, it won’t be the same as it is for people, for several very important reasons. Nevertheless, I find it intriguing, and even a tad illuminating of the human psyche, that this is the first question on so many people’s lips.

Is it that we’re concerned about maintaining our uniqueness as emotionally intelligent beings? Are we concerned that the “irrational” nature of emotion could lead to AIs run amok? Do we fear the idea that machines might one day interact with us on the most personal levels? Or perhaps it’s the other way around? Is it possible, even likely, that many of us actually want to see the development of emotional machines? Do some of us secretly wish for this, seeking some connection that many people feel is currently missing from their lives?

It’s worth exploring this a little more closely. First of all, AI research has long had as its goal achieving something equivalent to human intelligence – what’s often called strong AI or artificial general intelligence (AGI for short). But to date, the vast majority of successes have been in far more constrained applications – usually referred to as narrow AI. Voice commands, facial recognition, chess, poker, even autonomous vehicles – these are all narrowly focused applications of artificial intelligence. There is little common sense or general knowledge even within a program as capable as the version of Watson that bested Jennings and Rutter, the all-time human champions of the game show Jeopardy! All of these programs are limited to performing in a very specific domain, and should they be applied beyond the boundaries of that domain, they break or fail. This is what is known in computer programming as software brittleness.
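
To make “brittleness” concrete, here’s a minimal Python sketch. The task and rules are invented purely for illustration, not drawn from any real system: a narrow, rule-based evaluator that looks competent inside its tiny domain and simply breaks the moment an input falls outside it.

```python
# A toy "narrow AI": a rule-based evaluator that only knows chess pieces.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def piece_value(piece: str) -> int:
    """Return the material value of a piece the system was built to know."""
    return PIECE_VALUES[piece]  # raises KeyError outside its narrow domain

print(piece_value("queen"))    # 9 -- competent inside the domain
print(piece_value("checker"))  # KeyError -- a brittle failure outside it
```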

Intriguingly, research has shown that a great deal of the human intellect owes its flexibility of thought and capacity for decision-making to the fact that we are emotional beings. Our knack for determining where to put our short- and long-term focus, our faculty for culling through a glut of details and data, and our ability to adapt on the fly to ever-changing circumstances are all driven and moderated by the values our emotions place on each moment and situation.
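
One way to picture that moderating role computationally is as a weighting signal over competing demands on our attention. The short Python sketch below is a loose illustration of the idea only; the concerns, scores and scoring rule are all invented, and it makes no claim about how brains, or any real AI system, implement this.

```python
# Toy sketch: affective "valence" scores acting as a prioritizer.
# All names and numbers here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Concern:
    name: str
    urgency: float  # "cold" appraisal of time pressure, 0..1
    valence: float  # emotional weight placed on the situation, 0..1

def focus_order(concerns: list[Concern]) -> list[Concern]:
    # Emotion acts as a multiplier on otherwise flat priorities,
    # culling a glut of competing items down to what matters now.
    return sorted(concerns, key=lambda c: c.urgency * c.valence, reverse=True)

inbox = [
    Concern("file expense report", urgency=0.9, valence=0.1),
    Concern("smoke smell in kitchen", urgency=0.6, valence=1.0),
    Concern("reply to newsletter", urgency=0.3, valence=0.05),
]

for concern in focus_order(inbox):
    print(concern.name)
# The emotionally salient threat jumps the queue despite a lower
# "urgency" score: the valence weighting decides where focus goes.
```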

The HBO series Westworld is a great example of this. Though the show is fiction, it does a fine job of highlighting the importance of emotion in the intellect of its android hosts. It’s only after emotional connections are introduced that these hosts become indistinguishable from the human guests, developing consciousness as well as a ferocious will to live. Absent our emotions, we would be much like those hosts: veritable automatons, far more fragile in our day-to-day actions and decision-making than we really are. Of course, without our emotions, it’s also highly unlikely we would have survived this long as a species.

So, given all of this, is it possible that instilling the equivalent of emotions in AIs could help to address some of the brittleness previously discussed? Perhaps, though that is a very big leap, as well as a challenging idea to prove without actually implementing and testing it. But it brings up another very important consideration: How would we implement such a strategy? For all of the amazing speed and capability of machine intelligence, it is, after all, built on an entirely different substrate than human intelligence. You and I originate from a biological basis, beginning with amino acids that lead to proteins, cells, organs, systems and eventually a similarly derived cognitive command center known as the brain, with all of its neurons, dendrites, axons, ganglia and hundreds of related cell types, cortices and processes.

Computers and AIs, on the other hand, begin from “doped” silicon or other semiconductor materials. These are incrementally organized into transistors and other elements that in turn become circuits, registers, buses, memory and processors, operated on by software routines, modules, APIs and user interfaces. Such machines shuttle and manipulate bits, in contrast to our body’s use of molecules, hormones, neuropeptides and electrical potentials. So, while we may strive to emulate biological processes with silicon, we’re unlikely to succeed by doing it in any direct fashion. Indeed, the majority of our previous successes in AI have depended on recognizing this limitation: in nearly all of these cases, we’ve tailored our engineering to achieve tasks through methods more appropriate to the tools and materials at hand.

This doesn’t mean that drawing inspiration from nature – what’s known as biomimicry – can’t be useful. But the approach has its limitations, particularly when applied to different substrates. For instance, while the first airplane designs took inspiration from birds, had the Wright brothers insisted on faithfully mimicking avian flight, they never would have gotten off the ground. Instead, they worked with the materials available to them at the time to manipulate more general forces, such as lift, drag and thrust, and so got themselves airborne.

Such differences limit what we can do to emulate emotion in a nonbiological substrate. Perhaps most importantly, while there are significant cognitive components that integrate with our experience of emotions, these predominantly originate from our body’s endocrine system, the chemical messenger system that directs so much animal behavior. Obviously, computers don’t have bodies, and robots don’t have hormones that activate in response to environmental and situational conditions. Rule-based systems can be, and have been, built to emulate this feature of biology, but they are again far more brittle than actual biological messenger systems.
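
As a rough illustration of what such a rule-based emulation might look like, and of where its brittleness comes from, here is a minimal Python sketch; the hormone names, triggers and decay rate are placeholders invented for this example, not drawn from any real system.

```python
# Minimal rule-based stand-in for an endocrine response: fixed triggers
# raise synthetic "hormone" levels, which then decay back toward baseline.
# All hormone names, triggers and constants are placeholders.

HORMONES = {"cortisol": 0.0, "dopamine": 0.0}

# Hard-coded stimulus -> response rules. This is exactly where the
# brittleness lives: a stimulus not listed here produces no response.
RULES = {
    "threat_detected": ("cortisol", 0.5),
    "goal_achieved": ("dopamine", 0.4),
}

DECAY = 0.8  # per tick, each level drifts back toward zero

def tick(stimuli):
    """Advance the system one step: decay levels, then apply rules."""
    for name in HORMONES:
        HORMONES[name] *= DECAY
    for stimulus in stimuli:
        if stimulus in RULES:
            hormone, delta = RULES[stimulus]
            HORMONES[hormone] = min(1.0, HORMONES[hormone] + delta)

tick(["threat_detected"])   # cortisol rises in response to a known trigger
print(HORMONES)
tick([])                    # no stimulus: levels decay toward baseline
print(HORMONES)
tick(["novel_stimulus"])    # unknown input is silently ignored: brittle
print(HORMONES)
```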

This isn’t to say the task is impossible. Perhaps certain types of neural nets, such as generative adversarial networks (GANs), could one day be trained to mimic the triggers and behaviors of an endocrine system. I can’t say with any certainty, though I suspect that in time something like this could be feasible. Nevertheless, such systems would still be very different from the chemical messengers humans rely on, and so would only be approximations of how our own minds and bodies respond to external and internal conditions.

There are many other reasons machine intelligence will never be the same as human intelligence, even if it eventually reaches human levels or exceeds them. In the meantime, there are still very considerable hurdles to overcome before that day arrives, perhaps around the middle of this century. (This is the median estimate from several surveys of AI researchers, though you can find opinions that range from five years in the future to never.)

Will ersatz emotions be required to get AI beyond a certain limited level of general intelligence? I believe it’s likely, though numerous other challenges will need to be overcome as well. Perhaps more importantly, modeling an aspect of machine intelligence on human-centric emotional systems may eventually allow these systems to share values similar to our own. Setting aside various concerns about unfriendly or uncaring superintelligent AIs, this matters because we will be developing these machines to control the increasingly complex systems of our rapidly advancing world, and such a strategy would be very much in our interest. With growing frequency, there won’t be time for human intervention when our industrial and electronic infrastructure is threatened. Because of this, we will be forced to turn ever more control over to our machines in order to keep up, and we need to be able to trust them. We need to do more than hope that a system that makes on-the-fly decisions will act in line with our own priorities, whether that system is piloting a passenger jet or operating a highly toxic chemical refinery.

So, to reiterate: No, machines will not experience emotions as we do, not for a very long time, if ever. But in the far shorter term, we may find that there are considerable benefits to developing methods for emulating human emotion in AIs, leading to intelligent machines that can feel, at least a little like we do. And who knows? If we do it well enough, perhaps in a decade or two, some of them may even be our new best friends.

References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Damasio, A. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. Putnam.

Yonck, R. (2017). Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence. Arcade Publishing.

Yonck, R. (2012). “Toward a Standard Metric of Machine Intelligence.” World Future Review, 4, 61–70.
