How Does Communication Work?

Part 2: The function of verbal vs. non-verbal cues in face-to-face interaction.

Posted Jan 23, 2020

Non-verbal communication through eye gaze and gesture. Source: Wikimedia Commons, used with permission.

In Part 1 of this three-part series on “How Does Communication Work?” I introduced the distinction between kinesics (body language, including gestures, eye gaze, and facial expressions) and paralanguage (intonation and other aspects of tone of voice that accompany verbal, or spoken, language).

We saw that these two types of non-verbal cues contribute in important ways to social meaning in face-to-face spoken communication. But they each fulfill different functions in communication, providing different types of meaning than the language itself. I explore these issues here.   

The psychology of interpersonal behaviour

In landmark research in the 1960s and 1970s, the late British psychologist Michael Argyle claimed that while we use spoken language (aka verbal cues) to convey information about events and states of affairs, non-verbal information, such as intonation (paralanguage), facial expression and gesture (kinesics), is used to express communicative attitudes towards others, and, in part, to establish and maintain rapport in interpersonal interactions. His best-selling books on the psychology of interpersonal behaviour and bodily communication remain classics and made Argyle, an Oxford don, one of the best-known social psychologists of the twentieth century.

One reason for the expressive power of paralanguage and kinesics is that it is less face-threatening to communicate emotional messages via non-verbal cues. Another is that we are less able to suppress our non-verbal emotional responses. Indeed, this was Darwin’s observation in one of the earliest treatises on non-verbal communication: The Expression of the Emotions in Man and Animals, published in 1872.

Many aspects of our body language, especially our display of primary emotions such as fear, anger, happiness, and sadness in facial expressions, are involuntary. And this means that the information we obtain from non-verbal communication can undercut what the words themselves say: the civil “Nice to see you,” uttered by your ex after a messy break-up, is belied by the gritted teeth through which the apparent pleasantry is spat, a truer tell of how he or she feels upon bumping into you. Non-verbal cues also persist when the words dry up: body language is continuous, carrying on through the awkward silences and providing a steady stream of information from which to read the other’s true thoughts and feelings.

Specifically, we use non-verbal communicative cues to express our emotions, our attitudes towards our addressee and others, and towards the message being conveyed. We also use non-verbal cues to manage the flow of ongoing talk between speakers and addressees—such as when we have finished speaking, and wish to give up the ‘floor’; we also use them to present our personality, and in cultural rituals such as greetings.

An obvious example of the latter is in the difference between so-called high-contact versus low-contact cultures: whether or not a culture is "touchy-feely." Compare the stereotype of the expressive warmth of Mediterranean Europeans or inhabitants of Latin America—lots of casual sleeve-touching, open-body postures, smiling and eye contact—with that of the reputed sangfroid of the British stiff upper lip, arguably best embodied by the dysfunctional, emotional frigidity of the James Bond novels and films.

Information derived from language vs. non-verbal communication: The 7%-38%-55% rule

In the field of psychology, there has been extensive research into just how much store we set by non-verbal communication, especially in the realm of communicating emotion. In famous early research, psychologist Albert Mehrabian compared the relative contributions of language, paralinguistic cues, and body language in conveying social meaning.

In one study, subjects were asked to evaluate whether a speaker was being positive or negative when uttering specific words, such as terrible or dear. Mehrabian examined two modes: the linguistic and the paralinguistic. Sometimes a negative word (e.g. ‘terrible’) was spoken using a positive tone of voice, for instance, a higher pitch range and rising pitch, while a positive word (e.g. ‘dear’) was spoken using a negative pitch contour.

Mehrabian found that in such cases, the paralinguistic cue trumped the linguistic mode. When the emotional expression in the paralinguistic mode diverged from the meaning of the word, subjects put greater store in the non-verbal cue.

The 7%-38%-55% rule of communication. Source: Right Attitudes, used with permission.

In a second study, Mehrabian added a second type of non-verbal cue to the mix, namely facial expressions. In addition to hearing the positive or negative words, subjects were also shown photographs of people’s faces with positive (e.g. happy) versus negative (e.g. sad) expressions. He calculated the relative importance of kinesic versus paralinguistic cues and found that subjects set greater store by facial expressions than by the emotional expression carried in tone of voice, by a ratio of 3:2.

Based on these two sets of findings, Mehrabian used a formula to work out how much emotional meaning subjects were deriving from the three distinct modes: language, paralanguage and facial expression (a form of kinesics). He found that only seven percent of his subjects’ emotional responses to others came from language—the literal meanings of the words they heard.

In contrast, an impressive 38 percent came from paralanguage, leaving over half, a whopping 55 percent, to be derived from facial expression. Based on his data, Mehrabian found that when communicating emotional responses, over 90 percent of what we derive comes not from what others say—from their words—but from how they say it, and what they do while they say it.
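The arithmetic behind the rule can be restated in a few lines. This is a sketch using only the percentages quoted above; the variable names are mine, not Mehrabian's own notation:

```python
# The three channel weights as quoted in the text: 7% verbal,
# 38% paralanguage (vocal), 55% facial expression (kinesic).
weights = {"verbal": 0.07, "vocal": 0.38, "facial": 0.55}

# The three channels together account for the whole emotional signal.
total = sum(weights.values())  # 1.0

# Over 90 percent comes from the two non-verbal channels combined.
nonverbal = weights["vocal"] + weights["facial"]  # 0.93

# The facial-to-vocal split approximates the 3:2 ratio from the second study.
ratio = weights["facial"] / weights["vocal"]  # roughly 1.45, close to 3/2
```

The 55:38 split is only approximately 3:2 (about 1.45 rather than 1.5), which is consistent with Mehrabian deriving the percentages from a ratio rather than the other way around.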

While Mehrabian’s overall conclusion—what has come to be known as the ‘7%-38%-55%’ rule—strikingly illustrates the significance of the non-verbal aspects of social interaction in communication, caution is nevertheless in order. His research was focused on the way in which we express emotions, and the judgment others pass on us when we do so. Moreover, he used single words, which were tape-recorded; and his subjects were all female. His experiments, therefore, while pioneering, were far from complete.

Subsequent research found that when subjects are making a judgment about the person, rather than just their emotional expression, verbal communication becomes more important. The verbal or linguistic mode also appears to matter more when someone is assessing whether another person is honest or deceptive. And the more linguistic content there is to convey, such as when recounting a complex recipe or giving detailed instructions on how to build something, the more important verbal communication becomes in our responses to others.

Finally, and perhaps predictably, the gender of the addressee and speaker affects the relative importance of the mode used for communication purposes, whether it be verbal or non-verbal. Nevertheless, what this all reveals is that effective communication requires both verbal and non-verbal cues—and it’s the non-verbal cues that appear to be especially geared towards facilitating emotional expression and inducing empathy.

The function of silent messages

In our everyday spoken interactions, we sometimes send mixed messages. Telling someone that, of course, they haven’t offended you, while avoiding eye contact and presenting closed body language, tells them that they very much have. In such scenarios, people tend to privilege the non-verbal cues over the words themselves: body language trumps the words they say. Our silent messages are often the most powerful.

For communication to really succeed, it must make use of different modes, typically at the same time, and avoid mixed messages, which arise when body language and paralanguage are not aligned with what the words themselves are saying.

Take gesture: Our gestures are minutely choreographed to co-occur with our spoken words. Not only do they nuance and complement our spoken words—a pointing finger makes it clear which pastry we have selected, while nothing offends quite like showing someone the finger, a gestured insult—we also seem unable to suppress them.

Watch someone on the telephone: they’ll be gesticulating away, despite their gestures being unseen by the person on the other end of the line. In lab settings, when psychologists run experiments in which subjects are required to communicate without gesture, spoken language suffers; if gestures are suppressed, our speech becomes less fluent. We need to gesture in order to speak properly. And, by some accounts, gesture may even have been the route that language took in its evolutionary emergence.

Eye contact is another powerful signal we use in our everyday encounters. We use it to manage our spoken interactions with others. Speakers avert their gaze from an addressee when talking, but establish eye contact to signal the end of their utterance. We gaze at our addressee to solicit feedback but avert our gaze when we disapprove of what they are saying. We also glance at our addressee to emphasize a point we’re making.

Eye gaze, gesture, facial expression, and speech prosody are powerful non-verbal cues that convey meaning; they let us express our emotional selves, as well as providing an effective and dynamic means of managing our interactions on a moment-by-moment basis. Face-to-face interaction is multimodal, with meaning conveyed in multiple, overlapping and complementary ways. This provides a rich communicative environment, with multiple cues for coordinating and managing our spoken interactions.

In essence, language, paralanguage, and kinesics form a unified communicative signal. The suppression of one non-verbal cue, for instance gesture, has a deleterious effect on the spoken signal. Visual information conveyed by kinesics, aural information conveyed by paralanguage, and linguistic information conveyed via (spoken) language are all required to fully realize the speaker’s intended meaning in an utterance.