The Future of Artificial Emotional Awareness
What can we expect from emotion AI?
Posted February 13, 2020 | Reviewed by Jessica Schrader
A market research group tests a new Super Bowl ad, tracking viewers’ emotional responses in tenth-of-a-second increments. A set of augmented reality glasses teaches a child with autism to recognize facial expressions. An automobile in-cabin detection system alerts drivers when they become sleepy or their attention wanders.
Welcome to the world of affective computing, also known as emotion AI. A relatively new branch of computer science, it is already being used today to emulate and augment this key aspect of human intelligence. Yet, the above examples represent only a bare fraction of the current applications and potential use cases. As emotion AI develops over the coming decades, we can expect to see it used in nearly every area of our lives.
Why? Because emotion is a core aspect of the human condition, a critical component of who we are. It is in many ways our most fundamental channel of communication, our best means of understanding what is going on inside another person's head. For these reasons and more, it will soon become just as crucial a channel for our technologies.
Over the years, we’ve sought to make our devices increasingly natural and easy to use. One benefit of the surplus processing power in today's computers is that we’ve been able to redirect it toward ways of interacting with technology on our own terms rather than the machine's. We no longer have to learn arcane instructions and enter them at a command-line prompt as we did in the early days of the digital revolution. Today, we can do far more with touchscreens and our voices than we ever could with an early computer keyboard. Continuing this progression, our devices will increasingly detect our moods in order to anticipate our needs, often before we're aware of them ourselves.
Of course, any technology as powerful as this carries with it the potential dangers of abuse. Already there are a considerable number of valid fears about facial recognition being used by corporations and governments around the world. Following from this, concerns about pernicious applications of affective computing are also on people’s minds. As so often happens with new technologies, we have to negotiate this new terrain with great care in order to ensure our well-being as individuals and as a society.
Since writing Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence, I’ve routinely had to explain that none of this technology is giving computers the ability to experience or understand emotions themselves. Even the computer’s interpretations of what we express are limited to programmatic responses. An expression is detected and the program responds in one of perhaps a handful of ways. Someone expresses a different feeling and the result is a different programmed response.
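This kind of programmatic response can be pictured as a simple lookup: a minimal sketch, assuming hypothetical expression labels and canned replies (not any real product's code), of how a detected expression maps to one of a handful of fixed responses:

```python
# Hypothetical sketch: a detector outputs an expression label, and the
# program simply picks one of a small fixed set of canned responses.
# There is no understanding or experience of emotion involved.
CANNED_RESPONSES = {
    "smile": "Glad you're enjoying this!",
    "frown": "Would you like some help?",
    "surprise": "Was that unexpected?",
}

def respond(detected_expression: str) -> str:
    # Any label outside the fixed set falls through to a default reply.
    return CANNED_RESPONSES.get(detected_expression, "How can I assist you?")
```

The point of the sketch is the rigidity: a different detected feeling yields a different pre-written response, nothing more.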
Related to this are the complaints that these emotion detection systems are far less accurate than people at reading expressions, especially in less-than-ideal conditions. Furthermore, these critics continue, a smile doesn’t necessarily correlate to happiness and a scowl isn’t always an indication of anger. To which my response is an emphatic "Of course!"
There are a couple of points that really need to be addressed here. First, computer programs, and particularly various forms of artificial intelligence, routinely pass through stages of subpar performance during their development. That’s how technology works. But in time, as algorithms improve and processing power grows, these systems are often able to perform at superhuman levels of proficiency. Not always, to be sure. But very often.
The second issue has to do with the question of awareness. We human beings have a remarkable cognitive toolbox with which we interpret the world. When we see someone smile, we understand that it doesn’t necessarily mean they are happy. This is because we have theory of mind, the ability to put ourselves in someone else’s shoes and model what that expression might mean. Even then, we aren’t always accurate in our assessments. In addition, we understand motivation and cause-and-effect, and we have the common sense we’ve acquired over the course of our lifetimes. AI has none of these things. Yet.
What often isn’t factored into this criticism of AI’s interpretations of emotional expression is that work is already well underway at universities, research institutions, and intelligence agencies around the world to develop, or at least approximate, these capabilities in artificial intelligence. In what is often referred to as the third wave of AI, programs are being developed that may one day give AI the power of abstract reasoning, the ability to understand causality, and a foundation of common sense.
We’re doing this because our computers need these abilities if they are to be entrusted with our increasingly complex systems and infrastructure. Whether or not this will eventually lead to some sort of technological self-awareness isn’t the point. What is important is to understand that long before anything like that happens, these systems will probably be able to read and interpret our emotional expressions at least as well as we can – be those expressions facial, verbal, gestural or otherwise. It takes time, but given sufficient commitment and funding, technology routinely outpaces its biological inspirations.
These are but a few of the things we need to consider as this field of affective computing develops and matures. The technology is here to stay. How to use it safely and responsibly is up to us to decide. There are so many potential benefits such capabilities could bring. A means to detect and help treat depression and PTSD. Early autism detection. Enhanced patient care in hospitals. Increased engagement through enriched customer experiences. And so much more.
But there are much darker possibilities as well. Political manipulation and control. Predatory marketing. There is any number of ways such technology could be misused. The important thing is to recognize not only that the possibilities for abuse exist, but that this has almost always been the case. It is up to us to figure out how we are going to develop and use all of our newfound abilities, including those of emotion AI, so that we can realize the greatest benefit from this most human-centric technology.
Yonck, R. (2017). Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence. New York: Arcade Publishing.