


Why Did Language Evolve?

A design feature for human meaning-making.

In my previous post, Universal Scenes of Experience and the Emergence of Grammar, I discussed the trajectory of the emergence of grammar. In this post, I continue this theme by asking: Why did language evolve? And in so doing, I seek to address the related question: What is language (for)?

A design feature for human meaning-making
In evolutionary terms, the embodied representations in the conceptual system preceded language. A conceptual system enables an organism to represent the world it encounters, to store experiences, to learn, and, as a consequence, to respond to new experiences. A conceptual system is what enables us to tell friend from foe, competitor from potential sexual mate, and to act and interact in situationally appropriate ways. Our repository of concepts facilitates thought, the categorization of entities in the world, and our action and interaction with, in and through our spatio-temporal environment.

While many other species have conceptual systems, humans are unique in having language. And the range and complexity of human conceptions appear to far exceed those of any other species. An obvious implication is that language may provide, in part at least, a means of harnessing our conceptual system, releasing its potential—a conclusion that has been reached by a number of leading cognitive scientists.

The psychologist Lawrence Barsalou has suggested that language provides an executive control function, operating over body-based concepts in the conceptual system. And this view seems to be along the right lines. Language provides the framework that enables sophisticated composition of concepts.

Of course, language is not necessary to combine concepts—the psychologist Karen Wynn, for instance, has shown that pre-linguistic human infants already perform rudimentary mental arithmetic, and can combine numbers using mathematical operations such as addition and subtraction, without recourse to language. And adult humans who develop acquired aphasia retain normal intelligence even in the face of the catastrophic loss of language. So it can’t be the case that language is required in order to combine ideas and produce compositional thought. But language does enable us to combine concepts in novel ways, allowing far more sophisticated conceptions than would otherwise be possible.

Language achieves this by virtue of constituting a grammatical system, with words and grammatical constructions cueing activations of specific body-based states in the brain. Their integration gives rise to complex ‘simulations’—re-activations, by the brain, of stored embodied concepts—which are the stuff of thought. This means that language provides added value to our conceptual systems. It allows us to control and manipulate the very concepts that evolved for evolutionarily more rudimentary functions, such as object recognition and classification. Under the control of language, we can make use of body-based concepts in order to produce abstract thought, and to communicate with other minds—in the absence of telepathy, language both facilitates and enhances a rare and sceptred form of meaning-making.

To illustrate, read the following English sentence, then close your eyes and conjure up, in your mind’s eye, exactly which hue of ‘red’ comes to mind: “The red fox (Vulpes vulpes) is the largest of the true foxes and the most abundant member of the Carnivora.” Now do the same with the following observation, uttered by none other than Gwyneth Paltrow: “Beauty, to me, is about being comfortable in your own skin. That, or a kick-ass red lipstick.”

My bet is that the use of red in the fox example calls to mind a dun or brownish red, while in the lipstick example what comes to mind is a vivid, true red. What we are doing, when we read these sentences, is activating a hue based on past experiences of different types of red. In these cases, the perceptual hue is not coming from the word red. The precise perceptual hue—the meaning—of red doesn’t reside there in the word: it can’t, otherwise the word form red would convey the same thing on each occasion of use. Rather, what we are doing when we read each sentence is re-activating a stored mental representation—a concept—one that is rich, vivid and detailed.

As you closed your eyes, you will have been able to visualize, in your mind’s eye, exactly the shade you were imagining. This re-activation of a perceptual experience is made possible precisely because we each carry around with us a complex conceptual system: the repository of the mind’s concepts. This further reveals that what we mean, when we use the word red, is, strictly speaking, not a function of language. Of course, language, in these examples, is helping us to home in on the right kind of perceptual hue: the right kind of red. Much of this homing in comes from the other words in each sentence, like fox and lipstick, which help us figure out what sort of hue to visualize. But whatever the linguistic function of the word red in these examples, the hue is most definitely not conveyed by the word itself.

What’s going on here is that the word red is cueing the part of the color spectrum that relates to the hue red. But here’s the really important part. Each sentence is activating a different part of the red color spectrum. We derive distinct simulations for red. And this is achieved via language, which nuances which part of the red color spectrum we should activate. These visualizations, while not as vivid as actually seeing a fox or, indeed, Gwyneth Paltrow’s lipstick-adorned mouth in the flesh, are nevertheless rich experiences. More generally, representations in the conceptual system are what we might refer to as ‘analogue’ in nature: they encompass the vivid, multimodal character of the experiences they are representations of.

Language guides how our conceptual system is engaged, in meaning construction: it shepherds the nature of the simulation that is derived. Linguistically-mediated thought enables the re-activation of stored experiences: it shapes the simulations. To introduce an analogy, if the conceptual system is the orchestra, then language is the conductor, which coordinates and nuances the instruments, and without which the full splendour of the symphony couldn’t be realised.

Let’s take another example, a familiar quotidian one: a cup of coffee, perhaps one that you’ve bought on the go, in a paper cup from a high street coffee shop chain. You’ll feel the cup in your hand: the warmth of the coffee coming through the cup. You’ll sense its weight and the shape of the paper cup as you clasp your hand around it. You’ll also, inevitably, smell the aroma of the coffee percolating through the lid up into your nostrils. And as you sip, carefully, from beneath the hot, foamy covering of the coffee, you experience the taste. A number of different sensory modalities are engaged in even this simple act of holding the cup, raising it to your lips and drinking a slurp of coffee. There is the motor action, as you grasp the cup, gauge its weight and move your hand and arm in synchrony, so that the cup approaches your lips. And as you slurp, you are coordinating the pursing of your lips and the imbibing of the coffee with the motor event of raising the cup to drink.

The way in which our brains construct even a relatively simple experience like this doesn’t involve sending all this information to a single spot where the brain integrates it. Instead, different areas of the brain are specialised for processing various kinds of information: taste, touch and weight, sight, sound, and so on. And these different ‘sensory modalities’ are integrated in a when rather than a where (a single place in the brain): synchronised oscillation of neurons across the different sensory processing areas of the brain allows the coordination, and integration, of the different aspects of the multimodal information associated with a single event, such as raising a coffee cup to your lips and tasting and smelling the coffee.

And then later, when we remember what the coffee looked and tasted like, we reactivate this same body of sensory-motor experiences. And in this way, our recollection of those experiences is analogue in nature: it recreates the diverse, sensory character of those experiences. And none of this depends upon language.

As already mentioned, I dub the representations available from the conceptual system ‘analogue’ concepts. This follows because they loosely resemble—are analogous to—the experience types they are representations of. They are rich and multifaceted. And they reflect all the aspects of the experience types that they are records of.

So if concepts are analogue in nature, what are the representations like that language encodes? In slightly different terms, what does language bring to the table in the meaning construction process? While language provides a gateway to the conceptual system—one of its principal functions in the meaning construction process—it is far more than a mere conduit to conceptual knowledge. After all, to shape the simulations we produce when we use language to bootstrap analogue concepts, language must bring with it a type of representation that is different from those that inhabit the conceptual system.

One line of evidence that language does have its own type of representation—one that is qualitatively distinct from the concepts that populate the conceptual system—comes from neuropsychological conditions in which patients suffer damage to the parts of the brain responsible for encoding analogue concepts. For instance, patients with Parkinson’s disease display difficulty in carrying out motor movements, suggesting their motor representations are damaged. Nevertheless, these patients are still able to use, and more or less understand, corresponding action verbs, like to kick and to hammer. Likewise, patients with motor neuron disease are still able to process action verbs. The conclusion from this, and the one I reached in The Language Myth, is that part of the concept remains even in the absence of the corresponding body-based state. A conceptual representation must consist of more than simply an embodied analogue concept.

This is strikingly illustrated by patients suffering from apraxia, a condition in which patients retain part of the knowledge associated with a concept but, due to brain damage in the relevant motor area, are unable to perform the corresponding action. For example, a patient with apraxia might know the word for a hammer, and even be able to explain what hammers are used for and what they are usually made of. In fact, such a person would be able to demonstrate quite a lot of knowledge about hammers via language. However, a patient suffering from apraxia would be incapable of demonstrating how to use a hammer: they would have no inkling of how to hold or how to swing one. This reveals that we construct our conceptual representations from various sources, not exclusively body-based analogue concepts.

But in the absence of an analogue concept, something nevertheless remains: language would seem to make a semantic contribution too—one that persists even in the absence of the corresponding analogue concept. In short, language must be providing representations—but of a different sort—that allow access to the analogue representations in the conceptual system. And more than that, these linguistic representations guide the way in which analogue representations become activated. After all, the different simulations for red in the “red fox” and “red lipstick” sentences are a consequence of being massaged by language to produce the correct interpretation. We can conclude from this that the essential ingredient for human-like meaning-making is the interaction between the conceptual system, on the one hand, and the linguistic system, on the other.

The meaning of grammar
So, to return to the question: what are the language-specific representations that persist in the face of the loss of corresponding analogue concepts? A direct window onto the representations provided by language comes from examining the grammatical system of language.

A common misconception is that the grammatical system is meaningless—that it provides a formal set of instructions, while meaning resides elsewhere. On the contrary, an investigation into what grammar actually does shatters any such illusion: human grammar provides grist to the meaning-making mill.

A central design feature of language is that it divides into two systems: the lexical and grammatical subsystems. To show you what I mean, consider the following sentence:

The poacher tracked the antelopes.

Notice certain parts of this sentence: whole words, like the, and meaningful sub-parts of words, like -ed, signalling past tense, and -s, the English plural marker. What happens when I alter those parts of the sentence? Have a look now:

Which poacher tracked the antelopes?
The poacher tracks the antelopes.
Those poachers track an antelope.

The new sentences are still about some kind of tracking event, involving one or more poacher(s) and one or more antelope(s). But when I change the little words like a(n), the and those, and the sub-parts of words like -ed or -s, we inevitably interpret the event in different ways. These little elements provide information about number (how many poachers or antelopes are, or were, there?), tense (did this event happen before now, or is it happening now?), old versus new information (does the hearer know which poachers or antelopes we’re talking about?), and whether the sentence should be interpreted as a statement or a question.

These little words, and word sub-parts like -ed, are known as ‘closed-class’ elements: they relate to the grammatical subsystem. The term ‘closed-class’ reflects the fact that it is typically more difficult for a language to add new members to this set of linguistic forms. This contrasts with the remaining ‘lexical’ words, which are referred to as ‘open-class’. These relate to the lexical subsystem. The term ‘open-class’ captures the fact that languages typically find it much easier to add new elements to this subsystem, and do so on a regular basis.

In terms of the meaning contributed by each of these two subsystems, while ‘lexical’ words provide direct access to the analogue concepts in the conceptual system, and thus have a content function, ‘grammatical’ elements perform a structuring function in the sentence. They contribute to the interpretation in important but rather more subtle ways, providing a kind of scaffolding, which supports and structures the rich content accessed by open-class elements. The elements associated with the grammatical subsystem contribute schematic meaning, rather than rich contentful meaning. This becomes clearer when we alter the other parts of the sentence:

The supermodel kissed the designers.
The moonbeams illuminated the treetops.
The book delighted the critics.

What all these sentences have in common with my earlier example—The poacher tracked the antelopes—is the ‘grammatical’ elements: the, -ed and -s. The grammatical structure of all the sentences is identical: we know that both participants in the event can easily be identified by the hearer. We know that the event took place before now. We know that there’s only one supermodel/moonbeam/book, but more than one designer/treetop/critic. Self-evidently, however, the sentences differ in rather a dramatic way. They no longer describe the same kind of event at all. This is because the ‘lexical’ elements—words like supermodel, kiss and designer—prompt for certain kinds of concepts that are richer and less schematic in nature than those prompted for by ‘grammatical’ elements. They prompt for analogue concepts.

The lexical subsystem relates to things, people, places, events, properties of things, and so on. In contrast, the grammatical subsystem encodes a special type of concept having to do with number, time reference, whether a piece of information is old or new, whether the speaker is providing information or requesting information, and so on.

To get a clearer sense, then, of grammatical meaning, now consider the following example that relates to renegade landscape gardeners, dubbed ‘cowboys’:

These cowboys are ruining my flowerbeds.

Here the grammatical elements are these, are, the -ing and -s endings, and my. If we strip away the semantic contribution of the ‘content’ words—the nouns cowboy and flowerbed, and the verb ruin—we end up with something like: these somethings are somethinging my somethings. Although the meaning provided by these closed-class elements is rather schematic, it does provide the information that ‘more than one entity close to the speaker is presently in the process of doing something to more than one entity belonging to the speaker’. This is actually quite a lot of information. And if we now exchange the content words for different ones, we end up with a description of an entirely different situation, but the schematic meaning provided by the closed-class elements remains the same:

These painters are defacing my walls.

As this example illustrates, the meaning provided by closed-class elements remains constant despite contextual differences deriving from the content words relating to size, shape, and so on. For example, the demonstrative determiner that in the expressions that flower in your hair and that country encodes distance from the speaker regardless of the expanse of that distance. Equally, the modal verb will in the sentences I will succeed! and The human race will become extinct encodes future time regardless of the distance of that future time. As this shows, the function of the closed-class, or grammatical, system is to provide a pared-down, or highly abstract representation. This structure provides a skeleton over which elements from the open-class system are laid in order to provide rich and specific conceptual content: a simulation.

This demonstration reveals that grammatical meaning is schematic in nature. It provides structural information. And so, the essential human design feature for meaning construction is to have two qualitatively distinct types of representation that play a complementary role in the meaning-making process. While analogue concepts—directly accessed by open-class words, and housed in the non-linguistic conceptual system—convey the what of a simulation, the closed-class elements encoded by human grammar—by language—provide the packaging that allows us to nuance how the analogue concepts are presented. Grammatical meaning mediates how our conceptual knowledge gets activated in the meaning-construction process: closed-class elements thus provide the how of a simulation.

Parametric concepts
So, now that we’ve seen the way in which meaning conveyed by language is qualitatively different from analogue representations—concepts—in the conceptual system, let’s explore this notion in a bit more detail. It turns out that all linguistic units—whether open or closed-class—convey schematic meaning. And this is so, regardless of whether they directly index analogue concepts—as in the case of open-class words—or not—as in the case of closed-class elements.

To begin to get at this idea, I want to illustrate using another aspect of grammar. While we may not always be aware of it, words are divided into different ‘lexical classes’: nouns, verbs, adjectives, prepositions, and so on. And the distinction relates to a division of semantic labour. Nouns, for instance, refer to things—prototypically, objects, people and animals, although there are important caveats—while verbs concern relations that evolve through time. Another important lexical class is that of adjectives, which designate properties of things (nouns). So, let’s examine the difference between adjectives and nouns.

Take the adjective red, and the noun redness that I discussed in my previous post. These words encode the semantic parameters ‘property’ and ‘thing’. And unlike the body-based perceptual state—the hue: red—which is analogue in nature, ‘property’ and ‘thing’ are highly schematic notions: they are schematic or ‘parametric’ concepts. Unlike the rich, perceptual experience of different sorts of red which come to mind when we variously imagine lipstick, foxes, and so on, there is nothing about the parametric concepts ‘property’ or ‘thing’ which is like the perceptual experience of redness.

Parameters are abstracted from embodied states, filtering out all points of difference to leave highly schematic content: the parameter. The word form red encodes the parameter ‘property’, while redness encodes the parameter ‘thing’. This is another way of saying that red is an adjective—it describes a property of a thing—while redness is a noun—it describes a property that is reified in some way, and established as being identifiable in its own right, independent of the entities of which it might otherwise be a property.

So, let’s look at how these different parameters package analogue content: multimodal information found in the conceptual system. Consider the following examples, adapted from a skin care product advert on the internet:

Treat redness with Clinique urgent relief cream.
Treat red skin with Clinique urgent relief cream.

Both words, red and redness, relate to the same perceptual state: the same analogue representation—that part of conceptual space corresponding to the colour spectrum usually identified as ‘red’. But the words package the content in a different way, giving rise to distinct simulations. In the first example, redness leads to an interpretation relating to a skin ‘condition’. In the second, red refers more straightforwardly to an unwanted property of the skin.

The different interpretations arising from these sentences are not due to a different hue being activated—the hue is presumably the same in both examples. Rather, the words themselves, noun versus adjective, nuance our interpretation of the perceptual hue: they give rise to distinct simulations, an interpretation of ‘skin condition’ on the one hand versus ‘discolouration of skin’ on the other.

In the case of red, this word encodes the parameter ‘property’. This means that the word itself is telling us that whatever it is that it points to in the conceptual system, it has to be interpreted as a property of some entity. In contrast, redness encodes the parameter ‘thing’: whatever it is the word points to, it has to be interpreted as an entity, and in the case of colour, a property reified as a quality distinct from entities it might otherwise be a property of. And the consequence is that red versus redness lead to different interpretations.

What this all reveals is this: language has a representational format—parametric concepts—that is qualitatively different from the multimodal nature of the conceptual system—analogue concepts. And in turn, this has provided an evolutionary advantage not evident in other species. Words, and other units of language, provide instructions as to how simulations should be constructed: they provide the how to the what of the conceptual system.

http://www.imdb.com/name/nm0000569/bio (accessed 2nd April 2014).
