
The Shape-Shifting Malleability of 'Universals' in UG

Universal Grammar: Convenience, Circularity, or Both?

Getting Chomsky Wrong?

In an earlier post, Is There A Language Instinct?, based on my book, The Language Myth, I observed that the philosopher and linguist Professor Noam Chomsky argues that Universal Grammar can, in principle, be investigated by studying just a single language. This led to something of an outcry from adherents of Chomsky’s approach to language. The Chomskyans suggested that I had either misread Chomsky or, perhaps less charitably, was simply making stuff up. For instance, one senior Chomskyan was fairly forthright in setting me straight: Chomsky, he explained patiently, had never explicitly said such a thing. So here’s another example where Chomsky says exactly what he’s never supposed to have said:

"A plausible assumption is that the principles of language are fixed and innate" [Chomsky 2000: 122]. Chomsky then says: "For example, evidence from Japanese can be used (and commonly is used) for the study of English; quite rationally, on the well-supported empirical assumption that the languages are modifications of the same initial state" [Chomsky 2000: 102, New Horizons in the Study of Language].

What this shows is that Chomsky really does seem to believe that, in principle, a single language can be used to study “the [universal] principles of language”. This follows from his assertion that the underlying principles of language (of all languages) are “fixed and innate”, and that all languages derive from the “same initial state”; one language therefore holds the key to understanding any other. Hence, Japanese can be used to study universal principles that are then assumed to hold for English, and vice versa.

Of course, Chomsky is often difficult to understand, and sometimes paradoxical, so we can’t always know exactly what he did or didn’t mean. So here’s something to gladden our hearts in dark times. In a different quote, from the same book, Chomsky appears to contradict what he says in the quote above:

“I am alleged to be one of the exponents of this [innateness] hypothesis, perhaps even the arch criminal. I have never defended it and have no idea what it is supposed to mean…people who are defenders of ‘the innateness hypothesis’ do not…even use the phrase.” (Chomsky, 2000: 66).

This revelation might, in fact, come as a shock to many of Chomsky’s adherents, who might be forgiven for thinking that Universal Grammar was, indeed, about innateness. One prominent Chomskyan linguist, writing in 2009, says:

“If we bring these facts [about language] in the open…we will thereby strengthen the innateness hypothesis for language acquisition.” (Fodor 2009: 206).

The objection I have come up against, time and again, when explaining to adherents what Chomsky seems to have in mind, if taken at face value, is that I am either misunderstanding Chomsky (and perhaps should try harder), or I’ve taken his quotes out of context, distorting them in the process, or else, they say, wagging their fingers accusingly, I am caricaturing him; or all three.

But really, why should any of this matter? Who cares if I, and others, can’t agree as to what Chomsky—if not the world’s greatest living linguist, certainly the most famous—may or may not mean by what he says? In fact, it matters a great deal, for the following reason. In this post I will show that Chomsky’s influential proposal, and meme—Universal Grammar—is, in a profound sense, scientifically bankrupt. This matters because Chomsky’s adherents, sometimes interpreting his often vague and paradoxical statements in idiosyncratic ways, are institutionally powerful and numerous. And yet the “science” they peddle, based on his pronouncements, is not, on my assessment, enabling language science to focus on the right research questions in the right way, and is leading many aspiring and junior language scientists down intellectual blind alleys. Language is the most complex of human behaviours. It’s a tough enough nut to crack without engaging in seemingly willful acts of pseudoscience.

Chomsky advocates a somewhat novel approach to science—what he has dubbed the “Galilean” method. And this approach, as we shall see, means that none of the “hypotheses” generated are scientific; at least, not in the way that science is usually understood (outside the narrow Chomskyan purview). And more worryingly, this novel approach appears to provide Chomsky (and perhaps his adherents) with free licence to ignore findings which are potentially problematic for their theoretical perspective(s). Engaging in a ‘ground-clearing’ exercise—critically examining the “scientific” approach being pursued by Chomsky—can only help facilitate, in my view, the beginning of a recalibration, at least in some quarters, of how language science might best examine the pressing theoretical and empirical problems at hand.

Universal Grammar and the Galilean Approach to Science

Before getting to universals, it’s worth clarifying what Chomsky seems to take Universal Grammar to be, and how this relates to his “Galilean” approach to science. For Chomsky, Universal Grammar seems to constitute a biological pre-specification for language—one that’s innate—which provides an “initial state”, enabling a cognitively normal human child to learn a language: any language. It amounts to possibly many different kinds of information—propositions, constraints, and so forth—that enable a child to acquire their mother tongue, and that are not otherwise provided by more general learning mechanisms. In short, Universal Grammar is the initial state of grammatical knowledge that each child is born with, and which underpins any and all languages, enabling a child, perhaps together with more general learning faculties and other factors, to learn a language based on the linguistic input—the “blooming, buzzing confusion”, to borrow a phrase from William James—that the child encounters around it in its early years of life. It amounts, that is, to the specifically linguistic content—biologically prescribed—that enables a child to acquire a language, content that could not come from elsewhere, and which would not be predicted by any other types of experience, or by other mental, developmental or physiological abilities and mechanisms.

This sort of formulation of Universal Grammar, Chomsky takes to be axiomatic—an axiom being a self-evident truth. And the rationale for this axiom—that there is this biological pre-specification for language, of some sort, namely Universal Grammar—is, in large part, based on his famous ‘poverty of the stimulus’ argument, which I briefly discussed in my previous post: Are All Languages English-Like?. (As an aside: for a great many linguists, there are numerous problems with the ‘poverty of the stimulus’ argument; moreover, today a broad array of empirical data suggests that Chomsky’s assumptions about linguistic input, in making the argument, were ill-founded, issues I review in Chapter 4 of The Language Myth; but for the purposes of this post, we’ll set that discussion aside.)

The consequence of Universal Grammar amounting to an axiom is this: it’s most definitely NOT testable, an axiom being a self-evident truth, not in need of testing. And, indeed, it’s difficult to imagine how one would—or even could—go about testing whether there is a biological pre-specification for language, especially if we were to rely on linguistic analysis alone, or even at all. After all, the claim for a Universal Grammar, in essence, amounts to a biological, rather than a linguistic, claim: whatever it is that all languages may have in common, this is ultimately a consequence, so the claim goes, of heredity. And if something cannot be tested, it’s impossible to say whether it’s true or false.

This notion of being ‘testable’ amounts to the issue of falsifiability: the litmus test for good science. For a proposal to be worth its scientific price of entry, reality must be able to bite, at least potentially, in the form of counter-evidence. But as the proposition that language is biologically pre-specified is not testable, it is not, in principle, falsifiable. And being unfalsifiable, it is, alas, immune to counter-evidence. This isn’t a problem for Chomsky. And this is because he holds a somewhat novel perspective regarding what he considers scientific practice to be.

Charles Darwin was one of the earliest practitioners of what has, since the nineteenth century, become the standard scientific method. In essence, science involves developing a model based on prior observations. The model is then tested against further observations, to assess whether it correctly accounts for them: whether it correctly predicts the phenomena in question, and so whether it’s true or false. And if counter-evidence is found, the model is revised in the light of it.

But Chomsky has been explicit: he doesn’t subscribe to this approach. In essence, because Universal Grammar is an axiom—an article of faith—it’s more or less acceptable to put inconvenient data aside, or even to ignore it altogether; otherwise, this inconvenient data would get in the way of the search for the principles that populate the biologically pre-specified Universal Grammar—those that Chomsky “knows” to be there. And this, I’m saddened to report, is no caricature.

Chomsky has cited Galileo as his model of choice for this “scientific” practice. Writing in his 2002 book, On Nature and Language, Chomsky claims that “[Galileo] dismissed a lot of data; he was willing to say: ‘Look, if the data refute the theory, the data are probably wrong.’ And the data that he threw out were not minor” (Chomsky 2002: 98). He continues, saying that “the Galilean style . . . is the recognition that . . . it often makes good sense to disregard phenomena and search for principles” (Ibid.: 99), by “discarding recalcitrant phenomena” (Ibid.: 102). And in 2009, in his opening remarks to a volume edited by Piattelli-Palmarini and colleagues, Chomsky explains, in describing his “scientific” approach: “You just see that some ideas simply look right, and then you sort of put aside the data that refute them” (Piattelli-Palmarini et al. 2009: 36). The mind boggles!

In his 2006 book, Linguistic Minimalism, Cedric Boeckx has praised this approach, dubbing it “the majestic Galilean perspective”. Boeckx writes that “it allows researchers to make maximal use of their creativity…and cannot be evaluated in terms of true or false, but in terms of fecund or sterile” (Ibid.: 6). But if the researcher is free to invoke his or her “creativity”, and can dispense with the admitted inconvenience of whether a proposal is true or not, how then do we judge whether an approach is “fecund” or not? And how long do we give it? What I’ll be saying, below, is that the search for ‘universals’ in Universal Grammar has been ongoing for over 40 years. And over this period, the number of proposed ‘universals’ has steadily shrunk, with a concomitant reliance on other (so-called ‘second’ and ‘third’) factors: non-linguistic aspects of human experience, biology, growth and so on. How much more time do we give it before we give in and accept that the approach is just wrong: always was, always will be? And the real problem, of course, is that Chomsky, and some of his more die-hard adherents, don’t have to fret about testability, and hence about whether a particular proposition is falsifiable or not. This is a consequence of doing bad science. Being true to the data, and to falsifiability, keeps you on the straight and narrow. Incredible as it seems, for Chomsky it doesn’t appear to really matter whether something is actually true or not. He just has to believe in it.

Two telling reviews of Chomsky’s recent 2012 book, The Science of Language, and of the Galilean method I’ve just sketched, offer a withering assessment. Christina Behme concludes in her review (available here) that: “Chomsky uses appeal to authority to insulate his own proposals against falsification by empirical counter-evidence. This form of discourse bears no serious relation to the way science proceeds.” Philip Lieberman writes in his review (available here) that: “If it [is] impossible to falsify a ‘theory’, it is not a scientific theory: the Chomskian enterprise falls outside the domain of science.”

The shape-shifting malleability of ‘universals’ in Universal Grammar

This brings us, nicely, to the nature of the ‘universals’ in Universal Grammar. One would think that a ‘universal’ is just that: universal. But a surprising number of ‘universals’ have come and gone in the Chomskyan enterprise since the 1960s. One conclusion (mine) is that this might demonstrate that Universal Grammar is simply plain wrong, if not unscientific—hence, it will never be able to identify linguistic universals beyond the banal. Another might be convenience: as recalcitrant linguistic facts have piled up, even the Galilean method has to accept, at some point, that a particular proposed ‘universal’ doesn’t get things right. And of course, we’ll also see that inconvenient facts can be ignored, which appears to jibe with the modus operandi licensed by Chomsky’s Galilean method.

Back in the 1960s, Chomsky proposed what he dubbed formal and substantive universals. Substantive universals were grammatical categories such as lexical classes—noun, verb, adjective and adverb—and grammatical functions like subject and object: what we might think of as the basic ‘building blocks’ of grammar. Chomsky (1965: 66; Aspects of the Theory of Syntax) suggested that languages select from a universal set of these substantive categories. Formal universals are rules like phrase structure rules, which determine how phrases and sentences can be built up from words, and derivational rules, which guide the reorganisation of syntactic structures, allowing certain kinds of sentences to be transformed into, or derived from, other kinds of sentences (for example, the transformation of a declarative sentence into an interrogative sentence). But as the facts of linguistic diversity and variation emerged, it increasingly appeared that couching universals in these terms was untenable.
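To give a feel for these two kinds of rules, here is a minimal sketch of my own, in Python—not Chomsky’s actual 1965 formalism: a toy phrase structure grammar that builds simple sentences from words, plus a toy ‘derivational’ rule that fronts the auxiliary to turn a declarative into a yes/no question. All rule names and vocabulary here are illustrative only.

```python
import random

# A toy phrase structure grammar (illustrative only, not Chomsky 1965).
# Each symbol rewrites as one of the listed sequences of symbols/words.
TOY_RULES = {
    "S":   [["NP", "AUX", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V"]],
    "Det": [["the"]],
    "N":   [["child"], ["dog"]],
    "AUX": [["will"], ["can"]],
    "V":   [["sleep"], ["bark"]],
}

def expand(symbol):
    """Apply phrase structure rules until only words remain
    (anything without a rule of its own counts as a word)."""
    if symbol not in TOY_RULES:
        return [symbol]
    expansion = random.choice(TOY_RULES[symbol])
    return [word for part in expansion for word in expand(part)]

def front_auxiliary(words, auxiliaries=("will", "can")):
    """Toy derivational rule: move the auxiliary to the front,
    deriving an interrogative from a declarative."""
    i = next(i for i, w in enumerate(words) if w in auxiliaries)
    return [words[i]] + words[:i] + words[i + 1:]

declarative = expand("S")                             # e.g. ['the', 'dog', 'can', 'bark']
print(" ".join(declarative) + ".")
print(" ".join(front_auxiliary(declarative)) + "?")   # e.g. 'can the dog bark?'
```

The point, for present purposes, is only that substantive universals were meant to supply the category labels (N, V, AUX, and so on), while formal universals were meant to supply rules of these two kinds.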

By the 1980s, a revised and more flexible approach to Universal Grammar had emerged, dubbed Principles and Parameters. Informally, the idea was that the constraints (or whatever) that populate our biologically pre-specified language faculty consist of grammatical principles that can be parameterised—set in different ways—for different languages. Switch the parameter one way rather than another, and you get a cascade of effects that makes a language like English look very different from, say, the indigenous Australian language Jiwarli. But in terms of the initial biological state, we all approach languages from the same starting point, prescribed by our common Universal Grammar. Summarising the state of the art in his 1994 book, The Language Instinct, Steven Pinker confidently proclaimed the following:

It is safe to say that the grammatical machinery we use for English . . . is used in all the world’s languages. All languages have a vocabulary in the tens of thousands, sorted into part-of-speech categories including noun and verb. Words are organized into phrases according to the X-bar system [the system used in an earlier version of Chomsky’s theoretical architecture to represent grammatical organization] . . . The higher levels of phrase structure include auxiliaries . . . which signify tense, modality, aspect and negation. Phrases can be moved from their deep structure positions . . . by a . . . movement rule, thereby forming questions, relative clauses, passives and other widespread constructions. New word structures can be created and modified by derivational and inflectional rules. Inflectional rules primarily mark nouns for case and number, and mark verbs for tense, aspect, mood, voice, negation, and agreement with subjects and objects in number, gender and person. (Pinker, 1994: 238).

Alas, Pinker couldn’t be further from the truth. As I show in Chapter 3 of The Language Myth, most, if not all, of these claims for language ‘universals’ are falsified by specific languages that differ, often in startling ways, from, say, English. As linguists Nicholas Evans and Stephen Levinson have observed, in a telling 2009 overview of some of the facts of linguistic diversity (which can be found here), “[I]t’s a jungle out there: languages differ in fundamental ways – in their sound systems (even if they have one), in their grammar, and in their semantics”.

From the mid-1990s onwards, the grammatical machinery that might constitute the initial state of Universal Grammar was downsized further, under the aegis of the so-called Minimalist programme. The current state of the art appears to be that there is a single innate operation, termed Merge: a general-purpose computation, parameterised in different ways across languages, that enables the recursive (i.e., combinatorial) potential of language, such that any given language can combine syntactic units in a range of language-specific ways. And this, thereby, gives rise to the observed complexity of grammar in and across the world’s languages. But the consequence of this downsized Universal Grammar is that other factors have to be invoked to account for linguistic variation.
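To give a sense of just how much explanatory weight this single operation is now asked to carry, here is a minimal sketch of my own of the idea behind Merge as it is usually glossed (a sketch only, not Chomsky’s formal definition): one operation that combines two syntactic objects into a new one, and that can reapply to its own output, yielding nested, hierarchical structure.

```python
# A minimal sketch of the idea behind Merge (illustrative, not Chomsky's
# formal definition): combine two syntactic objects into a new object.
def merge(x, y):
    return (x, y)

# Because merge can apply to its own output, a single operation yields
# nested, hierarchical structure:
np = merge("the", "dog")                       # ('the', 'dog')
vp = merge("chased", merge("the", "cat"))      # ('chased', ('the', 'cat'))
s  = merge(np, vp)
print(s)   # (('the', 'dog'), ('chased', ('the', 'cat')))
```

Everything else about grammatical structure, on this view, must come from how the operation is parameterised, from experience, and from the other factors discussed below.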

Indeed, in a 2005 paper (‘Three Factors in Language Design’, published in Linguistic Inquiry), Chomsky argues for three factors that are required to account for language (universals): i) the innate, biological pre-specification (aka Universal Grammar), ii) experience, and iii) non-linguistic factors, such as growth, development, and so forth. In short, today very little, in relative terms, remains that is specifically innate, part of the biological endowment and unique to Universal Grammar. And moreover, these so-called ‘second’ and ‘third’ factors must now play a huge explanatory role in accounting for the nature and structure of language, its diversity, and how it’s acquired.

Over the course of around 40 years, then, proposals as to what grammatical information constitutes our biological endowment—Universal Grammar—have progressively shrunk. And today, there is an explicit acknowledgement that factors other than those that populate our Universal Grammar must be invoked in order to account for language, and for how children acquire it.

Recursion and the case of Pirahã

While, in some sense, the move to downsize the innate stock of ‘universals’ in Universal Grammar has been driven by necessity, the Galilean method can, nevertheless, be invoked to ensure that some vestige of a Universal Grammar remains. And in this case, the “recalcitrant data” comes from the Amazonian language Pirahã, famously studied by Professor Daniel Everett (see Everett’s website here) of Bentley University, USA. Everett has conducted fieldwork on Pirahã for many years, living in remote Amazonian Pirahã villages for over six years, and returning regularly since to conduct further field research. Not only is Everett fluent in Pirahã, he is the world’s leading linguistic authority on the Pirahã language and culture.

According to Everett, the Pirahã language and culture appear to be unique in a number of ways. Pirahã is the only known language without numbers, numerals or a concept of counting—it even lacks terms for quantification like “all”, “each”, “every”, “most” and “some”. It lacks colour terms, and has the simplest pronoun system known. Moreover, and more generally, Pirahã culture lacks creation myths, and exhibits no collective memory beyond two generations. More problematically for Universal Grammar, Everett has claimed that Pirahã lacks the ability to embed grammatical phrases within other phrases: for instance, a noun phrase inside another noun phrase, or a sentence within a sentence.

This grammatical ability is often referred to as recursion. And in terms of Chomsky’s Universal Grammar, recursion might be thought of as the surface manifestation of the general-purpose computation Merge—defined, more or less, in these terms in Chomsky’s 2012 book, The Science of Language. Merge enables syntactic units to be combined recursively, allowing the construction of complex syntactic assemblies and providing, in principle, sentences of infinite complexity.

For instance, take the English expression Death is only the beginning, uttered by Imhotep in the 1999 movie The Mummy. This phrase can be embedded in the grammatical frame ‘X said Y’, providing a more complex sentence: Imhotep said that death is only the beginning. This sentence can then, itself, be further embedded in the same frame recursively: Evelyn said that Imhotep said that death is only the beginning. But, according to Everett, this sort of embedding is impossible in Pirahã.
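For readers who like to see the mechanics spelled out, here is a minimal sketch of my own (not Everett’s or Chomsky’s analysis) of the kind of recursive embedding just described, using the same ‘X said that Y’ frame:

```python
def embed(speaker: str, sentence: str) -> str:
    """Embed an existing sentence inside the frame 'X said that Y'."""
    return f"{speaker} said that {sentence}"

s0 = "death is only the beginning"
s1 = embed("Imhotep", s0)   # "Imhotep said that death is only the beginning"
s2 = embed("Evelyn", s1)    # "Evelyn said that Imhotep said that death is only the beginning"
print(s2)

# Nothing stops the frame applying again and again; that open-endedness
# is what the recursion claim amounts to. Everett's claim is that Pirahã
# grammar does not permit this kind of sentence-within-sentence embedding.
```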

Everett first presented this proposal in a 2005 paper (available here). And he has developed and elaborated on it in further research, including two popular science books: Don’t Sleep, There Are Snakes and Language: The Cultural Tool. Not only are these books highly informative about Pirahã language and culture, they are also hugely entertaining, not least about life, faith and what it means to be human. They are also highly recommended. An excellent documentary on the Pirahã, The Grammar of Happiness, is also available to watch (here). Everett’s overall conclusion is that the lack of embedding in Pirahã is, in fact, a consequence of Pirahã culture, which exhibits a preference for immediacy of experience. And one manifestation of this is that Pirahã grammar encodes just one event per sentence, militating against the sort of grammatical embedding evident in other languages, such as English.

On the face of it, if Merge is supposed to generate recursive embedding of syntactic phrases within others, as assumed by the current Universal Grammar perspective, and Pirahã lacks such embedding, shouldn’t this constitute counter-evidence against Merge/recursion? Of course, Everett could be wrong about the linguistic (and other) facts. And a sometimes ferocious debate has raged in the years since Everett first published his claims.

But here’s pause for thought. Recent research on European starlings suggests that starlings may have, at least in principle, the ability to learn to recognise recursive patterns in the rattle and warble motifs of other starlings—a finding I review in Chapter 2 of The Language Myth. So if at least one language fails to exhibit recursion, and if another species can, at least in principle, exhibit (some aspects of) recursion, what does this say about the claim that recursion arises from a biological principle, which forms part of the uniquely human genetic endowment for language: Universal Grammar?

In true Galilean fashion, such (potentially) recalcitrant data has been set aside. Writing with colleagues Hauser and Fitch in 2005, Chomsky states that “the putative absence of obvious recursion in one of [the human] languages . . . does not affect the argument that recursion is part of the human language faculty [because] . . . our language faculty provides us with a toolkit for building languages, but not all languages use all the tools” (Fitch, Hauser and Chomsky, 2005, ‘The evolution of the language faculty: Clarifications and implications’, Cognition: 103-104). This is a position that Chomsky appears to reiterate in his 2012 book, The Science of Language. Even if Everett is right, it doesn’t matter anyway: the arguably last remaining ‘universal’ of Universal Grammar is, nevertheless, still there.

Falsifiability again

From a certain perspective, it might not matter whether Chomsky’s Galilean approach is scientific or not, in the contemporary, and indeed conventional, sense; it might not matter that Universal Grammar cannot be falsified. Indeed, the prominent Chomskyan Professor Neil Smith, in his introductory remarks to Chomsky’s 2000 book, New Horizons in the Study of Language, observed that “Chomsky is careful to stress that ‘Minimalism’ [the approach to human syntax Chomsky has been promoting since the 1990s] is not yet a theory; it is just a program defining a certain kind of research endeavour” (Ibid.: xi). And from that perspective, perhaps falsifiability doesn’t, or needn’t, apply. The Universal Grammar research agenda is not yet at a point where it can make specific, testable hypotheses. Hence, the apparent lack of falsifiability doesn’t detract from the fundamental worth of pursuing the research programme.

But such a move is disingenuous, and arguably intellectually dishonest, in two ways. First, the research project of Universal Grammar has been in train for at least 40 years. Surely, surely, by now, with some of the smartest linguists around working on figuring out what makes Universal Grammar tick, we’d have more to show than the steady retreat away from the originally rich panoply of universals to, essentially, a single abstract computational process, plus a reliance on other, so-called second and third factors?

Second, adherents of the Chomskyan enterprise argue that the Merge hypothesis, formulated appropriately, could, in principle, account for Pirahã—assuming, of course, that Everett is right and that it does fail to exhibit recursion (research on Pirahã is ongoing, by Everett and others). Formulated in the correct way, the argument goes, the Merge hypothesis then becomes testable, and hence falsifiable, against Pirahã, and indeed against other languages.

But wait, and here’s my bleak assessment. Merge is predicated on an a priori commitment to a biological pre-specification for grammar: Universal Grammar. You have to assume that there is a “biolinguistics”—something innately prescribed—before you can begin to posit Merge, or whatever. And as we’ve seen, this principled commitment to a biological pre-specification for language cannot be falsified—at least not on the basis of language. And based on my review of the evidence in The Language Myth, I’ve been unable to find any convincing non-linguistic evidence to support it either. A “hypothesis” (e.g., Merge) cannot count as falsifiable if it’s based upon an axiom which is itself impervious to counter-evidence. It doesn’t matter what anyone thinks Merge can or cannot predict, as it’s built on foundations of sand. This renders Merge, or whatever other hypotheses are proposed, vacuous. The enterprise relies on intellectual circularity.

If we assume something for which we have no evidence, and which cannot be falsified—Universal Grammar—we can indeed call it a “research programme”. But we cannot then generate “hypotheses” on the basis of it, claim those hypotheses to be falsifiable, and claim to be doing science. The fact is, Universal Grammar has been in retreat for 40 years because it’s wrong. Worse than that, the claim for a Universal Grammar is a myth. And as J. F. Kennedy once observed, a myth that is “persistent, persuasive and unrealistic” poses the greatest harm to the quest for truth.
