Are All Languages English-Like?

On Universal Grammar, caricatures and bad science.

Posted Dec 28, 2014

In my previous post, Is Language An Instinct?, I introduced some of the myths I contend are associated with what I termed ‘rationalist linguistics’—a particular world-view in language and cognitive science brought to a general audience in the pop-sci books of Professor Steven Pinker. In this post, I focus on the influential hypothesis that the world’s 7,000 or so languages are all underpinned by a genetically hard-wired Universal Grammar. A ‘universal’, in this sense, is an aspect of grammatical structure common to all of the world’s languages. In particular, I consider whether the proposal for a Universal Grammar is scientifically sound.

The Universal Grammar Proposal

What then does Universal Grammar amount to?  While English looks and sounds different from, say, French, Swahili, Japanese, and so on, the idea is that once we strip away the surface details—the specific sound inventories and vocabulary systems used by a particular language—underneath, the rudiments of the grammar that drive all the world’s languages are essentially alike—they are all English-like.  This position has been argued for by Professor Noam Chomsky.  The proposed existence of Universal Grammar, as it is known, constitutes a central axiom—something held to be self-evidently true—of the theory of grammar, sometimes referred to as Generative Grammar, developed by Chomsky and his co-workers, in a number of variants, from the 1960s onwards. 

The rationale for proposing that we each possess a Universal Grammar was to account for the conundrum of how human infants become so adept at language, so quickly, and in the absence of formal instruction, or even much in the way of negative feedback, or correction, from parents, caregivers and others. Moreover, Chomsky thought that children lacked sophisticated learning mechanisms that might guide them in the learning process—in the 1950s and 1960s a high-profile view of learning, developed most notably by the behaviourist psychologist B.F. Skinner, was the only serious contender, and Chomsky, arguably disingenuously, dismissed that account out of hand. Behaviourists, for a variety of reasons, take exception to Chomsky’s argumentation against Skinner (although this is a topic for a future post). What is not in doubt, however, is that by around the age of four, each cognitively normal human child can be likened to a “linguistic genius”. The question, then, is this: without adequate general learning mechanisms, as assumed by Chomsky, and without adequate correction or sufficiently rich input—what is sometimes referred to as the ‘poverty of the stimulus’ argument—how does each human child manage to acquire a mother tongue (or tongues) in such a relatively short space of time?

The proposed solution was that each human possesses a Universal Grammar. But possessing a Universal Grammar doesn’t mean that children come ready-equipped with a fully specified grammar in their heads: they still have to go through the process of acquiring the grammar of the language(s) they are exposed to. The idea is that what is ‘universal’—shared by all cognitively normal human infants—is the pre-specification for grammar: a kind of ‘blueprint’ that guides what is possible. This is conceived, by rationalist linguists, as part of the human biological endowment: we are each born, hard-wired, with a Universal Grammar.

So what might Universal Grammar look like? Or, in slightly different terms, what does it amount to?  Given the assumption that all languages are underpinned by a common Universal Grammar, the starting point is to examine a single language, to uncover its principles; and indeed, much early work on Universal Grammar focused primarily on English.  As Chomsky has observed:

“I have not hesitated to propose a general principle of linguistic structure on the basis of observation of a single language. The inference is legitimate, on the assumption that humans are not specifically adapted to learn one rather than another human language.” (Chomsky 1980: 48, ‘On Cognitive Structures and Their Development: A Reply to Piaget’).

In my recent book, The Language Myth, I characterise this proposal as follows: “…as all languages are assumed to derive from this Universal Grammar, the study of a single language can reveal its design. In other words, despite having different sound systems and vocabularies, all languages are basically like English. Hence, we don’t in fact need to learn or study any of the exotic languages out there—we need only focus on English, which contains the answers to how all other languages work.” (Chapter 1: 15).

A caricature?

Rationalist commentators have recently accused me, in saying this, of caricaturing Chomsky’s position, and the rationalist “quest for the truth”, as it has sometimes been put. The accusation, if I have understood it correctly, is that I am (perhaps deliberately) misinterpreting Chomsky, and moreover that I am suggesting that rationalists only study English. But this is not the claim I am making. On the contrary, many rationalists have studied an impressive variety of other languages—albeit not to the degree associated with the branch of linguistics known as linguistic typology, which investigates language diversity across very large language samples. Hence, such a belief on my part would be patently absurd; and it is not, in fact, one I hold. My characterisation, in the context of a popular book, amounts to the assertion that Chomsky, and perhaps many other rationalist linguists, assume we need only study English—at least in principle—to uncover (an approximation of) the universals that underpin all the world’s languages.

So here’s the point: the principles associated with Universal Grammar can be established on the basis of the study of a single language—at least they can in principle, if one takes Chomsky at face value—and his writing is notoriously difficult to decipher on occasion—see here for just such an example (quite hilarious!). Moreover, this approach—studying a single language in order to uncover what it reveals about language universals—seems, to me at least, to be very much in the spirit of the rationalist enterprise, as practised and adhered to by many researchers working in the Universal Grammar tradition. It is sufficient to examine just one language, say English, Italian, or whatever, as all languages, no matter their stripe, possess the same underlying, biologically prescribed grammatical machinery. But of course, investigating other languages enables the rationalist to check whether the principles established on the basis of examining a single language, say English, stand up to scrutiny. If a feature of grammar, proposed as forming part of our innate Universal Grammar on the basis of English, for example, is found not to hold in another language, then the proposed content of Universal Grammar must be revised.

A problem for the Universal Grammar Proposal

But the problem with the proposal for Universal Grammar is that it amounts to an article of faith—it is presumed to exist, even in the absence of evidence. Positing universal principles on the basis of, say, English—on the assumption that all languages are English-like—and then later attempting to validate those principles by examining other languages doesn’t get to the heart of the matter: it is not, ironically, a quest for truth, as it fails to test the presumed existence of Universal Grammar itself. In point of fact, the existence of Universal Grammar, being an article of faith, is immune to counter-evidence: Universal Grammar is simply assumed to form part of our biological endowment. Examining the grammatical structures that populate our innate Universal Grammar, based on a single language like English, and then, perhaps, later comparing other languages with English, only leads to a revision of what’s proposed to be in the biologically prescribed Universal Grammar; it doesn’t call into question whether Universal Grammar actually exists to begin with.

So why is this a problem? Well, Universal Grammar is not, from this perspective, a hypothesis; a hypothesis is normally taken to be a proposal whose truth is not presumed in advance. A hypothesis can be, and accordingly is, subject to empirical investigation. But the existence of Universal Grammar is, rather, an assumption—an a priori commitment, based on theoretical deduction rather than on observation or experience, however limited—rather than something to be empirically tested; Universal Grammar exists, the rationalists believe. Hence, the linguistic data based on the study of English, or whatever, inform what Universal Grammar is theorised to be made up of—and as we will see in my next post, the proposed make-up of Universal Grammar has evolved considerably over the last 50 years or so. Those data do not bear on, nor can they call into question, the proposal that we are all born with a Universal Grammar—that it exists in the first place. Universal Grammar is timeless, and its existence is not subject to empirical investigation, whilst what it looks like—in terms of the grammatical principles that populate it—may shift and change.

This position can be summarised as follows: linguistic data are provided as evidence for the grammatical principles that populate our Universal Grammar; but, and it’s a whopping, huge ‘but’, such “evidence” is contingent on a prior (theoretical/ideological) commitment to the existence of a Universal Grammar in the first place. The problem, then, is that the linguistic “evidence” enables us to figure out how Universal Grammar is constituted only if we first assume there is a Universal Grammar: the search for ‘universals’ is contingent on that prior assumption. Hence, whatever is “discovered” to be ‘universal’ is underwritten by faith in there being a Universal Grammar.

A Hegelian Argument

In The Language Myth I liken this paradoxical situation to a Hegelian argument, after Hegel’s widely ridiculed ‘proof’. In 1801, Hegel claimed that the number of planets in the solar system was seven, on the basis of premises for which he provided no evidence. We now know, of course, that there are eight major planets, as well as five officially recognised dwarf planets. The point is that you can’t start looking for putative universals until you’ve established firm evidence for the position that there is such a thing as a Universal Grammar. Of course, all would be fine if there were compelling, or even mildly persuasive, arguments for a Universal Grammar, in the sense of a biological pre-determination for some genre of grammatical knowledge, no matter how abstract. It might even, at a pinch, be fine if other options and/or explanations for the prodigious ability of children to acquire a native language had been investigated and shown to be false. But rationalist linguistics hasn’t done this.

The proposal for Universal Grammar—to assume that grammatical knowledge is there to begin with, implanted in the microcircuitry of the human brain by virtue of our genetic endowment, regardless of what this grammatical knowledge might amount to—seems, to me at least, to be a position of last resort, when other positions could, and probably should, be explored first.  Language, from this perspective, is simply too complex and arguably too mysterious to be accounted for without appeal to special knowledge.  Such knowledge is ‘special’ in the sense that we simply don’t know where it comes from.  Experience, and general learning mechanisms, can’t account for these unique features of the human mind.  Thus, language must be hard-wired, part of our genetic endowment: enter Universal Grammar.

This genre of argument has been described as an argument from incredulity by the British evolutionary biologist Richard Dawkins. And the US linguistic anthropologist Daniel Everett, specifically addressing the presumed existence of Universal Grammar, has suggested that, in essence, it boils down to a lack of imagination. I suggest this lack of imagination proceeds as follows: we (= the extremely clever, tenured professors) can’t see how children could possibly learn something as complex as grammar—which underpins language. Therefore, they can’t learn it. Thus (the rudiments of) grammar must be innate.

Failure of the ‘Good Science’ Test

In the final analysis, for any theory to be considered viable, reality must be able to bite, in the form of counter-evidence. In short, a theory must be, at least in principle, and with an appropriate formulation, falsifiable. Universal Grammar, being an article of faith, is impervious to counter-evidence. What rationalist linguists in fact investigate is not whether there is a Universal Grammar—its existence, ‘the truth’, is taken for granted. Hence, it can never be falsified. Consequently, it makes for very bad science indeed. And ironically, while I have been accused of caricaturing the position of Chomsky, and perhaps the larger world-view of rationalist linguistics, I suggest that, in point of fact, the Universal Grammar proposal is itself a caricature of what constitutes (good) science. Not only does it fail the ‘good science’ test (the essential requirement being falsifiability); by virtue of being an article of faith, it arguably enters the realm of pseudoscience.

This is not just problematic but, in important respects, a tragedy. Many, very many, extremely smart language scientists have expended considerable amounts of time working on a single language, or on comparative linguistic analysis, attempting to uncover what populates this putative Universal Grammar. But Universal Grammar is unfalsifiable and, as I contend, a myth. This pursuit, together with what some non-rationalist commentators see as an unwillingness on the part of a subset of rationalist linguists to tolerate counterproposals, has arguably held back the scientific study of language. Moreover, if Universal Grammar really is a myth, as I suggest, what then do these supposed lines of “evidence” for ‘universal principles’ amount to? What is their value? And what does this say about the considerable research effort, and even the careers, of those who have worked so impressively hard not just to “uncover” them, but also to defend this ideological position, sometimes at all costs? These are important questions that language science should reflect upon. These lines of “evidence” may have considerable value, even if Universal Grammar is shown, or comes to be accepted, to be a myth. But they may not—and that should depress us all, even those of us who, like me, are not committed to the Universal Grammar proposal.

In later posts I’ll return to the evidence against Universal Grammar, and to alternative accounts that are more biologically, culturally and psychologically realistic. But in my next post, I’ll begin with issues pertaining to linguistic typology. As I’ll also discuss, as linguistic evidence has mounted since the 1960s, disconfirming various proposals as to what might actually be ‘in’ Universal Grammar, the nature of Universal Grammar has been repeatedly ‘downsized’, resulting, in its most recent incarnations, in Universal Grammar being held to include merely very general computational processes. This will lead me to discuss the hot topic of recursion—the ability, for instance, to embed grammatical units within others, creating sentences of great complexity. This will include a discussion of Daniel Everett’s important, and for some controversial, work on the Amazonian language Pirahã, as well as evidence for aspects of recursion in other species.
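To give a concrete flavour of what recursion means here, the sketch below is a toy illustration of my own, in Python, and is not drawn from Chomsky’s or Everett’s work: a single embedding rule, applied to its own output, yields centre-embedded sentences of ever greater depth. The function names and miniature vocabulary are invented purely for the example.

```python
# A minimal, purely illustrative sketch of grammatical recursion:
# a noun phrase may contain a relative clause, which itself contains
# another noun phrase, and so on, to any depth.

def noun_phrase(depth):
    """Build a noun phrase, optionally embedding a relative clause."""
    if depth == 0:
        return "the rat"
    # The rule re-applies to its own output: NP -> "the rat that" NP "chased"
    return f"the rat that {noun_phrase(depth - 1)} chased"

def sentence(depth):
    return f"{noun_phrase(depth)} ate the cheese."

for d in range(3):
    print(sentence(d))
# the rat ate the cheese.
# the rat that the rat chased ate the cheese.
# the rat that the rat that the rat chased chased ate the cheese.
```

The point of the toy example is simply that one re-usable embedding rule is enough, in principle, to generate unboundedly deep structures; whether the speakers of every language actually make use of such embedding is precisely the question Everett’s work on Pirahã raises.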