Music is a universal language. Or so musicians like to claim. “With music,” they’ll say, “you can communicate across cultural and linguistic boundaries in ways that you can’t with ordinary languages like English or French.”
On one level, this statement is obviously true. You don’t have to speak French to enjoy a composition by Debussy. But is music really a universal language? That depends on what you mean by “universal” and what you mean by “language.”
Every human culture has music, just as each has language. So it’s true that music is a universal feature of the human experience. At the same time, both musical and linguistic systems vary widely from culture to culture. In fact, unfamiliar musical systems may not even sound like music. I’ve overheard Western-trained music scholars dismiss Javanese gamelan as “clanging pots” and traditional Chinese opera as “cackling hens.”
Nevertheless, studies show that people are pretty good at detecting the emotions conveyed in unfamiliar musical idioms, at least for the two basic emotions of happiness and sadness. Specific features of melody contribute to the expression of emotion in music. Higher pitch, more fluctuations in pitch and rhythm, and faster tempo convey happiness, while the opposite conveys sadness.
Perhaps then we have an innate musical sense. But language also has melody—which linguists call prosody. Exactly these same features—pitch, rhythm, and tempo—are used to convey emotion in speech, in a way that appears to be universal across languages.
Listen in on a conversation in French or Japanese or some other language you don’t speak. You won’t understand the content, but you will understand the shifting emotional states of the speakers. She’s upset, and he’s getting defensive. Now she’s really angry, and he’s backing off. He pleads with her, but she doesn’t buy it. He starts sweet-talking her, and she resists at first but slowly gives in. Now they’re apologizing and making up….
We understand this exchange in a foreign language because we know what it sounds like in our own language. Likewise, when we listen to a piece of music, either from our culture or from another, we infer emotion on the basis of melodic cues that mimic universal prosodic cues. In this sense, music truly is a universal system for communicating emotion.
But is music a kind of language? Again, we have to define our terms. In everyday life, we often use “language” to mean “communication system.” Biologists talk about the “language of bees,” which is a way to tell hive mates about the location of a new source of nectar.
Florists talk about the “language of flowers,” through which their customers can express their relationship intentions. “Red roses mean… Pink carnations mean… Yellow daffodils mean…” (I’m not a florist, so I don’t speak flower.)
And then there’s “body language.” By this we mean the postures, gestures, movements and facial expressions we use to convey emotions, social status, and so on. Although we often use body language when we speak, linguists don’t consider it a true form of language. Instead, it’s a communication system, just as are the so-called languages of bees and flowers.
By definition, language is a communication system consisting of (1) a set of meaningful symbols (words) and (2) a set of rules for combining those symbols (syntax) into larger meaningful units (sentences). While many species have communication systems, none of these count as language because they lack one or the other component.
The alarm and food calls of many species consist of a set of meaningful symbols, but they lack rules for combining those symbols. Likewise, bird song and whale song have rules for combining elements, but these elements aren’t meaningful symbols. Only the song as a whole has meaning—“Hey ladies, I’m hot,” and “Hey other guys, stay away!”
Like language, music has syntax: rules for ordering elements (notes, chords, and intervals) into complex structures. Yet none of these elements has meaning on its own. Rather, it’s the larger structure—the melody—that conveys emotional meaning. And it does that by mimicking the prosody of speech.
Since music and language share these features, it’s not surprising that many of the brain areas that process language also process music. But this doesn’t mean that music is language. Part of the misunderstanding comes from the way we tend to think about specific areas of the brain as having specific functions. Any complex behavior, whether language or music or driving a car, will recruit contributions from many different brain areas.
Music certainly isn’t a universal language in the sense that you could use it to express any thought to any person on the planet. But music does have the power to evoke deep primal feelings at the core of the shared human experience. It not only crosses cultures, it also reaches deep into our evolutionary past. And in that sense, music truly is a universal language.
Patel, A. D. (2008). Music, language, and the brain. Oxford, UK: Oxford University Press.
Slevc, L. R., & Okada, B. M. (2015). Processing structure in language and music: A case for shared reliance on cognitive control. Psychonomic Bulletin & Review, 22, 637–652.
Tan, S.-L., Pfordresher, P., & Harré, R. (2010). Psychology of music: from sound to significance. New York, NY: Psychology Press.
David Ludden is the author of The Psychology of Language: An Integrated Approach (SAGE Publications).