Wilma Koutstaal Ph.D.

Our Innovating Minds


How Do We Read Emotions in Robots?

Of social robots, innovation spaces, and creatively trying things out.

Posted Aug 13, 2017

How is Nao feeling?
Source: Thimmesch-Gill, Harder, & Koutstaal (2017)

Social and assistive robots will soon become an ever greater part of our lives. They could be in our homes, our hospitals, and our schools, helping us with childcare, elderly care, and rehabilitation from injury or disease, and serving as social and assistive aids in all sorts of capacities.

But how much do we know about the psychology of our interactions with robots? What should any one social or assistive robot look like? How should it move and react to us, and to what sorts of information? Should it appear to show "emotions" and be responsive to our own emotions? How much like a person should an assistive robot be? How innovative can we be in designing robots to be responsive assistants and reliable supports, including in times of stress or in tension-fraught situations?

Let's take a look at two recent research studies that explore how we understand and respond to expressions of emotion in robots. Each study closely examined just a few aspects of how a robot might look or move. An interdisciplinary team of researchers from Israel and the U.S. spearheaded the first study; the second is from our own lab, carried out in collaboration with experts in robotics, virtual reality, and human factors and design.

A shoulder to lean on –– or getting the cold shoulder?

The robot known as Travis does not look like a human. Travis is small and vaguely creature-like, with large ears but no eyes, nose, or mouth. Standing about 11 inches tall, Travis, when placed on a desk or table, has its head nearly level with that of a person seated in front of it, and it can "nod," sway, or look away. Travis also has what might be seen as one outstretched leg and an extended hand, which holds the smartphone that runs it.

Sketch of the robot Travis.
Source: Jonathan Binks, 2017

Upon their arrival at the research lab, participants (102 undergraduate students) were told that they were taking part in a study of a new speech-comprehension algorithm developed for robots, in which the robot would try to understand what they said. They were asked to tell the robot about a current problem, concern, or stressor they were facing, such as a recent argument with a friend or family member, or a personal illness. They were to describe the problem in three parts, and to say when they had finished each part, after which the robot would reply on the smartphone.

Unbeknownst to them, participants were randomly assigned to one of two groups. In the "responsive" group, Travis faced toward the participant as he or she spoke, occasionally nodding and gently swaying back and forth. Also, at the end of each of the three parts of the participant's conversation, a simple message from a preset collection was displayed on the screen, such as "I completely understand what you have been through" or "You must have gone through a very difficult time." (Actually, Travis didn't choose these messages; a hidden "Wizard of Oz" experimenter selected and sent them at the appropriate time.)

Participants in the "unresponsive" (cold shoulder) group encountered a rather different Travis. There was no nodding or swaying, and the text displayed at the end of each of the three parts of the conversation simply asked the participant to continue to the next part.

After their "conversation," all participants rated the robot on a series of simple questions about how responsive they thought Travis had been. They were asked, for example, to rate on a scale how much they agreed with statements such as "The robot was aware of what I am thinking and feeling" and "The robot really listened to me." Other questions asked how sociable and competent Travis seemed.

The videotapes of each participant's interactions with Travis were coded by two independent raters who were unaware of the condition the participant was in. Analysis of the conversations showed that participants interacting with the "responsive" versus the "unresponsive" ("cold shoulder") Travis did not differ in how much they revealed about the negative event they had recently experienced.

However, the responsive Travis was rated as significantly more sociable and more generally competent (capable, reliable, and knowledgeable). More importantly, when the independent judges examined and rated the videotapes of the interactions, participants in the responsive condition showed significantly greater responsiveness and approach toward the robot: more often leaning toward and getting closer to Travis, smiling, and maintaining eye contact with it. Much the same pattern of outcomes was found in a follow-up study in which participants instead revealed a recent positive event and Travis gave more positive feedback.

It seems that even when a robot does not closely resemble a person, but is only vaguely creature-like, we may rapidly become attuned to quite slight signs of its responsiveness to our words or actions. This implies that robot design may not require many different sorts of gestures or actions to boost our perception of a robot's responsiveness or sociability. In turn, this may open up many new robotic design spaces and forms of functionality.

A warm bath or an icy cold one?

But what if we encounter a robot when we are much more directly and immediately stressed? When we are not only remembering something stressful, but actually currently experiencing real physical and cognitive stress? Will our own stress impair our ability to "read" a robot's nonverbal emotional expressions?  

Although studies have shown that we can read emotional expressions in robots under neutral everyday conditions, our study is one of the first to explore how we respond to robot body poses under acute stress.  

We simulated stressful circumstances by asking participants to immerse their non-dominant hand in ice-cold water for periods of time, and then stressed them further with difficult mental arithmetic problems. They were then asked to judge emotionally expressive or neutral static body poses of the 23-inch-tall humanoid robot "Nao" (SoftBank Robotics), presented either physically or in virtual reality. Nao's facial expression and eyes remained constant; only its body posture changed across 25 different poses expressing positive vs. negative emotion and low vs. high levels of arousal/excitement.

Did stress influence how participants (N=96) "read" the body poses? Participants in the control group were lucky enough to be randomly assigned instead to a lukewarm hand-immersion condition with simple mental counting tasks.

Intriguingly, we found that stress had relatively few effects on emotion perception, except for poses that were highly animated or excited-looking. When participants were themselves stressed, they saw the robot's negative poses as more negative, but its highly animated poses as less animated or excited. In other words, stress heightened sensitivity to negativity while attenuating sensitivity to arousal.

Under stress, then, we can be expected to see unhappy robots as very unhappy, while happy or neutral robots will appear much as they would if we were not stressed. We also know, from other research, that people sometimes have trouble telling apart a highly positive excited pose and a highly angry or negative excited pose. This suggests that when we anticipate robots interacting with us under very stressful circumstances, such as emergency or disaster settings, we should design them so that they don't exhibit excessively animated or excited poses or nonverbal emotions.

How was the virtual reality robot perceived? We were gratified to find that, for the vast majority of poses, participants saw the emotional expressions similarly whether Nao was physically present or presented in virtual reality. The one exception was that participants saw the happy expressions of the virtual reality Nao as somewhat less happy than the same poses when Nao was actually physically present in the room with them. This finding raises the possibility that virtual reality robots could serve as a "sandbox" for experimentally testing out robots of different designs and forms of emotional expressiveness.

Throughout our studies, we found that participants often genuinely enjoyed seeing the different emotional expressions that Nao could portray.  Participants were also quite adept at reading those expressions even when Nao gave them no verbal information.

To think about

  • How willing would you be to interact with a robot in your home?  What about in a healthcare or emergency setting?  How would you react if an unfamiliar robot came to help you to escape from a burning building?  In each of these situations, how would you judge how helpful or responsive you thought the robot would be?
  • If you were familiar with robots in your daily life, how might this change how you would respond to an unfamiliar robot in a tension-filled environment?
  • The poses for Nao were based on a set of poses originally developed by professional puppeteers and theater actors, drawing on animation best practices.  Where else might designers, engineers, and roboticists look for inspiration in guiding their "trying out"?


Birnbaum, G. E., Mizrahi, M., Hoffman, G., Reis, H. T., Finkel, E. J., & Sass, O. (2016).  What robots can teach us about intimacy: The reassuring effects of robot responsiveness on human disclosure.  Computers in Human Behavior, 63, 416–423.

Thimmesch-Gill, Z., Harder, K. A., & Koutstaal, W. (2017).  Perceiving emotions in robot body language: Acute stress heightens sensitivity to negativity while attenuating sensitivity to arousal.  Computers in Human Behavior, 76, 59–67.