"Excuse Me I'm Sorry" by Geralt / Pixabay / CC0 Public Domain
Source: "Excuse Me I'm Sorry" by Geralt / Pixabay / CC0 Public Domain

Consider the following (probably familiar) scenario: you're having a "discussion" with a friend, spouse, child, or some other person in your life about something he or she did to make you unhappy. After a good bit of heated talking, you manage to wrench an apology from the offending party, but hearing the sought-after words "I'm sorry" leaves you strangely dissatisfied. The words are right, but something about the way those three syllables are delivered makes the apology sound more like a verbal assault than a peace offering. Depending on the speaker's tone of voice, "I'm sorry" can come across as angry, sarcastic, bitter, defensive, or even mocking: quite the opposite of the sincere acknowledgement of personal error you are looking for.

In any conversational exchange, the actual words we utter carry only part of our intended meaning. It is the tone of voice in which we deliver those words that often signals their true meaning, turning an apology into an attack, or a compliment into an insult. Complex as the relationship between lexical content and tone of voice is, human speakers have been effortlessly manipulating tone to make meaning for as long as humans have been speaking. Only recently, with the development of neuroimaging technologies, have we begun to understand how the human brain makes sense of this complicated state of affairs.

When we speak of "tone of voice," what we are really referring to is the series of "intonation contours," or modulations of pitch, with which we deliver the syllables that make up a given utterance. It is the melody, as it were, to the lyrics of our daily language use. Several neuroimaging studies have implicated specific brain regions in the processing of this tonal information (the bilateral frontal and temporal regions, and even a "general pitch center" in the lateral Heschl's gyrus and the adjacent superior temporal gyrus). A recent study at the University of California, San Francisco built upon this work to discover not only the locations in the brain where such processing takes place, but also "what the neural activity in those regions encodes—that is, the precise mapping between specific stimulus features and neural responses."
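
To make "intonation contour" concrete: the contour is essentially the track of the voice's fundamental frequency (F0) over the course of the utterance. As a minimal sketch (not part of the UCSF study; the audio file name is hypothetical), here is how one might extract such a contour in Python with the librosa library's probabilistic-YIN pitch tracker:

```python
import librosa
import numpy as np

# Load a hypothetical recording of a spoken sentence.
y, sr = librosa.load("im_sorry.wav", sr=None)

# Estimate the fundamental frequency (F0) frame by frame.
# This F0 track *is* the intonation contour: the melody of the utterance.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),  # ~65 Hz, below a low male voice
    fmax=librosa.note_to_hz("C7"),  # well above any speech F0
    sr=sr,
)

# Unvoiced frames (consonants, silence) carry no pitch; keep voiced ones.
contour = f0[voiced_flag]
print(f"{contour.size} voiced frames, median F0 = {np.median(contour):.1f} Hz")
```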

Participants in the study listened to a set of sentences designed to independently vary intonation contour, phonetic content, and speaker identity, while subdural electrodes recorded their cortical activity. Sentences such as "Movies demand minimal energy" and "Humans value genuine behavior" were presented in different speakers' voices and with different pitch patterns, with vocal stress placed on varying parts of each sentence, and the cortical responses to these stimuli were recorded.

As hypothesized, some electrode sites responded differentially to the lexical content of the sentences or to the identity of their speakers, while other cortical sites responded exclusively to relative pitch variations, making no distinction based on either the lexical "meaning" of the sentences or the identities of their speakers. For example, when "Movies demand minimal energy" and "Humans value genuine behavior" were both presented with the intonation of a question, with a rise in pitch at the end, the pitch-sensitive sites treated them as identical, recognizing no distinction whatsoever between the two "questions," despite the fact that the sentences share no lexical or semantic content. The finding demonstrates that "intonational pitch undergoes specialized extraction from the speech signal, separate from other important elements," such as the lexical meaning of what is being said and the identity of the person saying it. In other words, the different types of voice information that make up a given utterance are processed along different, "dissociable" pathways in the brain.
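
The "relative pitch" result lends itself to a simple numerical illustration. In the sketch below (entirely synthetic data, not the study's stimuli or analysis), two lexically unrelated "questions," spoken at very different baseline pitches, share the same rising contour; once each contour is z-scored to discard absolute pitch, the two shapes become nearly identical, which is roughly the invariance the pitch-sensitive cortical sites showed:

```python
import numpy as np

def z_score(x):
    """Normalize a pitch track, discarding speaker-specific pitch range."""
    return (x - x.mean()) / x.std()

t = np.linspace(0.0, 1.0, 200)   # normalized utterance time
rise = 1.0 + 0.4 * t**3          # shared question-like final pitch rise

# Two "speakers," two different sentences, one intonation pattern
# (baseline F0 values in Hz are arbitrary and purely illustrative).
movies_q = 120 * rise + np.random.default_rng(0).normal(0, 1.5, t.size)
humans_q = 210 * rise + np.random.default_rng(1).normal(0, 2.5, t.size)

# After normalization the contours have almost the same shape, even though
# absolute pitch and "lexical content" differ entirely.
r = np.corrcoef(z_score(movies_q), z_score(humans_q))[0, 1]
print(f"correlation of normalized contours: {r:.3f}")  # close to 1.0
```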

So the next time you confront your significant other about breaking your favorite coffee mug, or your heart, and he or she responds with the obligatory "I'm sorry," consider the complex neural process that triggers your reaction to it. The lexical content of the utterance proceeds along one pathway to be decoded as an apology, while the pitch contour, with its distinctly aggressive rise on the second syllable, proceeds along another to be decoded as insincere and snarky. Still another pathway carries information about the speaker's identity which, when decoded, elicits memories of many prior, similar encounters. The integration of these separate threads of information leads you to the overwhelming conclusion that the time has come to part ways with this troublesome person. When you break the bad news, however, be mindful of the tonal contours of your utterance. Otherwise, the pitch center of your partner's brain might decode the utterance as irony, and you could be stuck with him or her forever.
