How We Extract Meaning From Language
The forgotten factor in explaining how humans learn the meaning of words.
Posted September 13, 2021 | Reviewed by Devon Frye
- Humans excel in processing language.
- Various explanations have been proposed for how we acquire language, ranging from training to pure instinct.
- One important explanation often gets ignored: Language itself is organized in such a way that it creates meaning.
Much has been said about the remarkable ability humans have to extract meaning from language. That ability is remarkable indeed. In a split second, we perceive a spoken or written word and immediately assign meaning to it while concatenating (stringing together) the various words in a sentence. At the end of the comprehension process, the whole sentence is greater than the sum of its parts.
It is an ability that almost comes naturally, at least for spoken words. Some 353,000 babies born today will understand some basic words about nine months from now. Around their first birthday, they will learn their first words, often words like “mummy” or “daddy.” Around their second birthday, they will be putting words together in their first sentences.
And if those first sentences don't end up occurring around the second birthday, there is likely no reason for concern. Odds are that they will get there sooner or later and become the talkative creatures that we are. Almost any child, anywhere in the world, can learn how to understand and speak a language—one of the 6,500 languages in the world.
How Do We Acquire Language?
The big question cognitive scientists have been struggling with—linguists, psychologists, educators, anthropologists, philosophers, and AI researchers alike—is how language is acquired so naturally. A century of research has not provided a definitive answer. However, parts of an answer have been proposed.
Some researchers in the early part of the 20th century argued that language acquisition is not much more than verbal behavior training. Just like a pigeon learning to make a pirouette—step by step, when provided with some reward—children learn how to speak. Rather than an edible reward, the attention and praise from parents could be considered the perfect reinforcers for acquiring our remarkable language processing skills.
Other researchers in the 1950s vigorously rejected the idea that language is acquired through training. Language-learning children, they argued, are not pirouette-making pigeons. Instead, the language-learning child already has a "language mind" of its own. It does not start out with a blank slate, but instead has a built-in instinct for understanding and speaking. It is because of this preconditioned language acquisition device that children can acquire language so rapidly and effortlessly. Language acquisition is not nurture, they argued; it is nature.
The nature argument can also be found in a prominent explanation that emerged three decades later. The enthusiasm about the emergence of computers convinced some researchers that the human brain must also be like a computer. If computers and the human brain can both understand language, the brain must use the same computational architecture. Language acquisition was thus seen as a combination of nature and nurture. Nature provided the neural network architecture that could be trained on the nurturing linguistic input.
Over the last two decades, however, concerns have been raised about the analogy of the human mind as a computer, crunching linguistic symbols into other linguistic symbols. Instead, the argument went, meaning can only come from linking linguistic symbols to perceptual information. That is, language must be grounded to be meaningful.
The meaning for the word “dog” does not come from the fact that it may occur in the same sentence with “cat,” as a computer may compute. Instead, the meaning of the word comes from the fact that in the mind’s eye (and ear, nose, and hand), we can see the four-legged animal, mentally hear its barking, imagine its particular dog smell, and picture what it feels like to pet it. That is how language attains meaning.
Is There a Right Answer?
I personally appreciate all four explanations for language acquisition, each in its own right. The grounding explanation makes sense: When I read an exciting story, I envision all the aspects of that story. In my mind I see the main character being chased and hear the steps in the alley.
I also appreciate the computational network explanation. After all, computational algorithms are able to estimate similarities between words—and why would the human mind not use similar procedures?
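To make the computational point concrete, here is a minimal sketch of how an algorithm can estimate word similarity purely from co-occurrence, with no grounding at all. The toy corpus and the specific counting scheme (words co-occur if they share a sentence) are my own illustrative assumptions, not a method from the article:

```python
from collections import Counter
from itertools import combinations
import math

def cooccurrence_vectors(sentences):
    """Build a co-occurrence count vector for each word.

    Assumption for this sketch: two words "co-occur" if they
    appear in the same sentence.
    """
    vectors = {}
    for sentence in sentences:
        words = set(sentence.lower().split())
        for w in words:
            vectors.setdefault(w, Counter())
        for a, b in combinations(words, 2):
            vectors[a][b] += 1
            vectors[b][a] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# A hypothetical mini-corpus, just big enough to show the idea.
corpus = [
    "the dog chased the cat",
    "the cat and the dog slept",
    "the dog barked at the mailman",
    "the car sped down the road",
]
vecs = cooccurrence_vectors(corpus)

# "dog" ends up more similar to "cat" than to "car", based only
# on the company the words keep -- no perception involved.
print(cosine(vecs["dog"], vecs["cat"]) > cosine(vecs["dog"], vecs["car"]))  # True
```

Of course, real distributional models use vastly larger corpora and more sophisticated statistics, but the principle is the same: similarity falls out of the text itself.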
The instinct explanation does not come across as foreign to me either. The fact that children are able to acquire language so easily indicates that they must have a knack for it. At the very least, they need to have the brain architecture equipped to learn a language.
And training helps too, as any second language learner knows. In fact, even for a native language speaker, some signs of comprehension from the person you're talking to certainly help in training your language skills. The feedback I get from a conversational partner at a noisy party, for instance, certainly shapes the volume of my verbal behavior.
Does Language Itself Help Us Understand?
There is, however, one aspect of these four explanations that has been bothering me over the years. We could argue that it's because of training skills, a language instinct, a computational architecture, and/or an ability to ground symbols that we are so good at language.
But we can also turn that argument around. Because we are so good at language, we have the training skills, the language instinct, the computational architecture, or the perceptual simulation skills.
What I mean is this: When considering explanations of how humans are so good at language, we often tend to leave one important aspect out—the source, language itself. It may very well be the case that we are so good at language because the language system provides us with all the linguistic cues for us to extract meaning from those seemingly meaningless symbols.
How Language Conveys Meaning
If that sounds confusing, let me give an example involving word order. Language users have considerable flexibility in the order in which they use words, constrained perhaps only by syntactic rules. With adjectives like "sad" and "happy," we could form a sentence like "sad songs can hardly make you happy." Yet despite that flexibility, when speakers use these adjectives together, they tend to place them in a relatively fixed order, whereby "happy" precedes "sad" more often than not.
Think about it by focusing on these two words. We say “happy and sad” even though we could say “sad and happy.” We say “plus and minus” even though nothing stops us from saying “minus and plus.” We say “good and bad” even though nothing stops us from saying “bad and good.”
In fact, think of any two words, one with a positive and one with a negative connotation, and note the order in which they appear in a sentence. It's far more likely than not that the word with the positive connotation precedes the one with the negative connotation, rather than the other way around. It is as if the language system is helping us in our efforts to extract meaning.
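The ordering preference described above is easy to check mechanically. Below is a small sketch that counts the two possible orderings of a word pair in a text; the mini-corpus is entirely hypothetical and only demonstrates the counting procedure, which on a real corpus would be run over millions of sentences:

```python
import re

def binomial_order_counts(text, w1, w2):
    """Count 'w1 and w2' vs. 'w2 and w1' occurrences in text."""
    forward = len(re.findall(rf"\b{w1} and {w2}\b", text))
    reverse = len(re.findall(rf"\b{w2} and {w1}\b", text))
    return forward, reverse

# A hypothetical mini-corpus, made up for this illustration.
corpus = ("she weighed the good and bad of it, the plus and minus, "
          "the happy and sad moments, and once, oddly, the bad and good")

print(binomial_order_counts(corpus, "happy", "sad"))  # (1, 0)
print(binomial_order_counts(corpus, "good", "bad"))   # (1, 1)
```

Applied at scale, counts like these are how corpus linguists document that the positive member of a pair tends to come first.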
Of course, I am well aware of the fact that there is considerably more to language acquisition than words like “happy” and “sad” and the order in which they are placed. But it is one of many examples suggesting that language itself provides cues on how to extract meaning from it.
These linguistic shortcuts help us in our language processing. We do need training, instinct, network, and grounding—but we also need the patterns of language themselves. And considering a forgotten explanation for language acquisition—language also creates meaning—offers exciting opportunities for the cognitive sciences in answering the question: How do humans process language?
Louwerse, M.M. (2021). Keeping those words in mind: How language creates meaning. Rowman & Littlefield/Prometheus Books.