Press Contact

SISU News Center, Office of Communications and Public Affairs

Tel : +86 (21) 3537 2378

Email :

Address :550 Dalian Road (W), Shanghai 200083, China


Jeroen van de Weijer: The Nature and Nurture of Language

15 March 2017 | By Jeroen van de Weijer | Copyedited by Boris Lopatinsky and Gu Yiqing


[About the Author]

Jeroen van de Weijer (耶鲁安) is from Holland and received his Ph.D. at Leiden University, where he taught for about 15 years. In 2009 he became full professor of English linguistics, at SISU’s School of English Studies, where he teaches phonology, phonetics, morphology and psycholinguistics. He has published widely about phonology and takes a particular interest in English (all varieties, all historical stages), Dutch and East Asian languages. His aim is to provide a general cognitive basis for phonological theories.


The Nature and Nurture of Language

Jeroen van de Weijer

School of English Studies, Shanghai International Studies University


1. Introduction


Humans are the “language animals”; if there is anything that separates us from the rest of the animal kingdom, it is that we can express ourselves using spoken words, sign language, and a host of other communicative devices. There is no topic that is more interesting than linguistics, the study of language, because it is about who we are. Linguists, psychologists, philosophers, neurologists, anthropologists, historians and scholars from many other fields have all contributed to its study. They ponder a host of questions: how does language ‘work’, i.e. how exactly do we say what we intend to say, and how do we understand what we hear? How do babies manage to acquire one language (or more than one)? What can go wrong in language development, and can we fix it? How do languages develop over time? Was there an “original” language? Why is it so difficult to learn a second language? What is the relation between society and language? To what extent do languages differ? To what extent are they all similar? All of these questions have been debated for hundreds of years. But perhaps no question has engendered more controversy than the question of whether language is innate (i.e. belongs to ‘nature’) or not (i.e. is learned from experience, a result of ‘nurture’). Two and a half thousand years ago, Plato and Aristotle already argued on opposite sides of this debate. The question is pondered at length, from the perspective of language, in a forthcoming book by van der Hulst (2017), whose discussion inspired the following.

There is no question that knowledge of language is, in part, innate. Regardless of how much you talk to your dog (or to a chimpanzee), it will not talk back to you, or, at the very least, it will never match a human’s creativity with language. Humans learn language, and other animals don’t. This is not to say that animals don’t have rich within-species communication systems, which are entirely suitable and adequate for their habitats. In fact, as the unforgettable Carl Sagan remarked, it is remarkable that dolphins can pick up quite a bit of human language, but we have not been able to make any (or much) sense of dolphinese.

But of course language is also acquired. If all of language were innate, we would all speak the same language and there would be no need to learn it. So the nature-nurture question can be more precisely framed as ‘how much of language is due to nature (presumably: in our genes), and how much of it is due to experience?’. On the nativist side in this debate is Noam Chomsky, who heralded the introduction of modern linguistics in the 1950s and 1960s (e.g. Chomsky (1959, 1965)). For much of the forty or fifty years that followed, work in linguistics was either in support of, or a critique of, Chomsky’s original ideas. In fact, Chomsky has changed his position on innateness quite drastically since those early days. His latest viewpoint is that “Universal Grammar”, the innate capacity for language, is remarkably barren: only one syntactic principle survives (namely one that is capable of creating recursive structure, as in the sentence “A box in a box in a box .... in a box”) (see Hauser, Chomsky & Fitch (2002)).

Another question that has caused lots of debate is whether our knowledge of language and the way we acquire it is specific to language. Is there something innate that helps a child to acquire language, especially given the fact that language is so remarkably complex, that acquisition goes so fast, and that children are exposed to a bewildering variety of voices (often with errors), sometimes different languages, and certainly different styles? Or are the ‘strategies’ by which we learn language the same strategies by which we learn other skills, such as drawing or riding a bike?

My position in this is that learning language can (mostly) be explained as a result of general cognitive skills, i.e. not as the result of any innate, genetic endowment. The reason for this position is partly a methodological one: if we explain something by invoking an innate mechanism (whether this is a linguistic skill, or a part of grammar, or anything else), it is not really an explanation: it is an admission that we don’t really understand the reason for this mechanism, so we call it ‘innate’, and research ends then and there. But if we are forced to try to understand the mechanism, i.e. if the nativist way out is not available, we are bound to learn more about such mechanisms, even if we might not be able to explain them completely. This may sound abstract; in this paper I will give two examples to make this clearer.


2. Uneven structures

Lesson 1 in linguistics is that there is structure everywhere. Words consist of syllables, and syllables consist of sounds. The way sounds are arranged in the syllable is subject to rules that are specific to each language. In English or Chinese, words cannot start with the cluster /ps-/ (but in Dutch, they can). In Chinese, words have tones (English words don’t; typically, one syllable in the word is stressed instead). So although languages differ, they all show syllable structure and segmental (= sound) structure. Words also have another kind of structure: a word like swimming in English consists of a main verb swim and a suffix –ing. This is morphological structure. Sentences have structure too: traditionally, we distinguish parts like the subject (The man) and the predicate (knew too much), or Noun Phrases and Verb Phrases: this is syntactic structure. We can even analyze the structure of novels, or of conversations. In linguistics, structure is everywhere.
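The idea that each language licenses its own inventory of syllable onsets can be sketched as a toy checker. The onset sets below are tiny, invented fragments chosen only to illustrate the point about /ps-/; they are not complete descriptions of English or Dutch phonotactics:

```python
# Toy phonotactic checker: each language licenses its own set of
# syllable onsets (sequences of consonant phonemes before the vowel).
# These inventories are illustrative fragments, not full descriptions.
LEGAL_ONSETS = {
    "English": {("p",), ("t",), ("s",), ("p", "l"), ("t", "r"), ("s", "t")},
    "Dutch":   {("p",), ("t",), ("s",), ("p", "l"), ("p", "s"), ("s", "x")},
}

def onset_allowed(language, onset):
    """Return True if the phoneme sequence is a licensed onset in the language."""
    return tuple(onset) in LEGAL_ONSETS[language]

print(onset_allowed("English", ["p", "s"]))  # False: English bans /ps-/ onsets
print(onset_allowed("Dutch", ["p", "s"]))    # True: e.g. Dutch 'psalm'
```

Note that the check operates on phonemes, not spelling: English psychology is written with ps- but pronounced with a simple /s/ onset.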

It is interesting that, typically, such structures are uneven (in some way). In English, syllable structure is uneven because some syllables are stressed and others unstressed: e.g. (pár)(don), my (áw)(ful) (Éng)(lish) (where the brackets indicate the syllables); here we find three words that all consist of a ‘foot’, a combination of a stressed and an unstressed syllable. There are no feet that consist of two stressed syllables or two unstressed ones: an uneven structure is best. In morphology, as in the example of swimming, a root (a full lexical item, with relatively concrete meaning) is often combined with a suffix (which is often unstressed, and often has a relatively abstract meaning). Other examples are (beauty)(ful), (print)(er)(s), and (re)(write) (where the brackets indicate the morphemic structure, not the syllables). These are uneven structures, too. Someone might suggest that in morphology there are also even structures such as (baby)(oil), a combination of two lexical items (resulting in a so-called compound). However, there is also an unevenness in this structure: baby oil is a kind of oil, not a kind of baby: the right-hand side seems to determine more of the meaning than the left-hand side. And in syntax, finally, there is unevenness in sentence structure: verbs seem to be more ‘crucial’ in sentence structure than nouns. While in some languages sentences can be complete with only a verb form, this is often not true for sentences consisting of only a noun.

One school of linguistics that recognised the importance of uneven structure is Dependency Phonology (or Dependency Grammar in general). The theory is described in Anderson & Ewen (1987) and a recent overview is presented in van der Hulst & van de Weijer (2017). Pride of place in this theory goes to the so-called dependency relation, which says that in constructions of two units one will be the ‘head’ (i.e. relatively important or salient in some way) and the other will be the ‘dependent’ (relatively less so).

We can ask where this dependency relation comes from. Is it acquired during language acquisition? Or is it innate? It might make sense to propose that a (rather abstract) principle like this is part of Universal Grammar, our innate linguistic endowment. This might help children to structure what they hear, i.e. help them to acquire language rather speedily and in spite of a language environment that is full of errors, restarts, hesitations, ungrammatical sentences, etc.

This is a possible answer to the question, but the alternative should also be considered (in fact, methodologically it should be considered first). Is it possible that the “preference for uneven structure” is acquired from the language input that infants are exposed to? Might the dependency relation be applicable to a wider range of cognitive fields than just language? A moment’s thought indicates that this might well be the case. From day zero (and even before), children will use all their senses to try to understand their world. Many of the objects they encounter will have different parts, and the parts are not all of equal importance. This is true of people, things, noises: just about anything. It turns out that uneven structure is everywhere, not only in language. This would imply that something like the “dependency relation” is not an innate endowment that is specific to language: it is the reflection of a general cognitive strategy to assign structure to the world and make a distinction between vital parts and the rest. Seen in this light, the stressed and unstressed syllables (in English) also tap into this nascent knowledge: they make it easier for the child to recognise words (specifically: word boundaries). The child expects structure, and dividing the speech stream into stressed and unstressed syllables is one possible way of finding it.

Whether this conclusion is correct or not, the attempt to derive linguistic principles from more basic, general cognitive strategies is rewarding in its own right. If we assign a principle like “structure is uneven” to an innate language facility, the issue is settled too soon. If we try to relate it to human cognition in general, we are bound to learn.


3. Learning patterns

Let us turn to a second example. Children the world over start pronouncing their native languages in wondrous ways. English children will say ‘pay’ for play, French children will say ‘tan’ for trente, and Chinese children will say ‘pao’ for ‘ticket’ at some early stage in their acquisition of these languages. What these examples have in common is that they are all simplifications: one or more sounds (usually consonants) that appear in the ‘correct’ (adult) form are not pronounced by the children. Since this pattern is so strikingly similar for the youngest speakers of languages around the world, it might make sense to propose a principle that allows (or even forces) children to simplify their pronunciations a little bit: instead of a syllable that starts with a cluster (like pl-, tr- or pj-) a single consonant will do at this stage. There is one linguistic theory that says exactly this: so-called Optimality Theory (Prince & Smolensky 1993 [2004]), which has been tremendously influential in linguistics during the past twenty years. The theory says there is a principle, let us call it ‘No clusters allowed’, and, moreover, it suggests that this principle is innate.

Once again, the decision to assign a principle like this to an innate part of the mind (specific to language) precludes any further discussion. It signals the end of research, while of course such observations should really be the beginning. If we disallow an appeal to innateness, we are forced to ask the question: where does this principle (or this knowledge, or this way of pronouncing) come from? In such an approach, there are two possible solutions. The first is that children, at the age where these kinds of simplifications (e.g. ‘pay’ for play) occur, are simply not yet ready, physically and/or cognitively. They can hear the word play correctly (and they can hear the difference between the words pay and play), but they simply cannot pronounce it. Since this kind of maturation may be genetically determined (e.g. memory expands gradually, motor control grows with practice), it also makes sense that children the world over make the same mistakes. A second possible explanation takes into consideration the words children hear most often. Many corpora have been collected of speech that is addressed to infants (and/or between adults or others in the presence of infants), so-called infant-directed speech (IDS). Surprisingly, it turns out that words with ‘complex’ onsets (like play) are extremely rare among the most frequent words in such corpora (see van de Weijer (2014, 2017a)). In one IDS corpus, none of the 100 most frequent words had an onset cluster (there were only three such words among the 150 most frequent: please, from, and, not surprisingly, play). In fact, this is also true of adult speech. We know that although children of course do not literally keep track of how many times exactly they hear a particular word, they are very good at distinguishing what is common in a language and what is uncommon. Apparently, clusters like the one in play are quite uncommon. If children generalize this knowledge, they will acquire a principle like ‘No clusters allowed’. No assumption of an innate principle is required.
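The corpus check described above can be illustrated with a small sketch. Everything here is a stand-in: the utterances are invented, the cluster list is a fragment, and the test is orthographic rather than phonemic, whereas a real IDS study would count clusters over transcribed phonemes:

```python
from collections import Counter

# Among the most frequent words of a (toy, invented) child-directed
# corpus, how many begin with a consonant cluster?
CLUSTERS = ("pl", "tr", "st", "br", "fr", "gr", "sp")

def cluster_initial(word):
    # Orthographic approximation of "has a complex onset".
    return word.startswith(CLUSTERS)

utterances = [
    "look at the ball", "where is the ball", "play with the ball",
    "you want the cup", "look at you", "the cup is here",
]
counts = Counter(w for u in utterances for w in u.split())
top = [w for w, _ in counts.most_common(5)]
n = sum(cluster_initial(w) for w in top)
print(n, "of", len(top), "top words start with a cluster")
```

Even in this tiny invented sample, the commonest words (the, ball, look, ...) are cluster-free, while a cluster word like play occurs but does not make the top of the frequency list, mirroring the pattern reported for real IDS corpora.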

These two alternatives to a nativist solution need further development. Do they also apply to other languages and to other ‘mistakes’ in language development? How are they part of a more comprehensive theory of phonology, and of linguistics in general? This is research in progress (see e.g. van de Weijer (2017b)), but the point here is that we should try to find alternative solutions for nativist positions. Perhaps we will fail (sometimes), but even in that case linguistics benefits, because the case for putative innate properties of language will be much stronger.


4. Conclusion

The nature-nurture debate is omnipresent in science, and in society. There is no better way of studying this question than by examining language, a basic skill exclusively associated with humans. Linguistics is very much a structuralist endeavour, and (uneven) structure can be found everywhere in linguistics. Is it necessary to assume a principle specific to language that demands that, in constructions composed of two units, one unit is stronger than the other? The answer suggested here is that such structures are everywhere, also outside language. If babies have an innate capacity to assess such structures in terms of ‘main point’ and ‘side matter’, it is not specific to language.

In spite of the fact that children make similar mistakes regardless of which language they are acquiring, there are plentiful ways of explaining such similarities without recourse to innateness. One such way makes use of general cognitive strategies: paying attention to what is common and uncommon in a language, just as infants pay attention to what is common and uncommon outside language.

The ultimate goal of such an approach is to properly place the study of language inside the study of the human cognitive faculty. We have much to learn both from what linguistic theories have discovered over the years about the thousands of languages that exist, especially with respect to variation and to acquisition, and from recent advances in the study of general human cognition. It is my hope that these two sides reach out to each other, and find fertile, common ground.




Anderson, John M. & Colin J. Ewen. 1987. Principles of Dependency Phonology (Cambridge Studies in Linguistics 47). Cambridge: Cambridge University Press.

Chomsky, Noam. 1959. A review of B.F. Skinner’s Verbal Behaviour. Language 35, 26-58.

Chomsky, Noam. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.

Hauser, Marc D., Noam Chomsky & W. Tecumseh Fitch. 2002. The faculty of language: What is it, who has it, and how did it evolve? Science 298.5598, 1569-79.

Prince, Alan & Paul Smolensky. 1993 [2004]. Optimality Theory: Constraint interaction in generative grammar. London: Blackwell.

van de Weijer, Jeroen. 2014. The origin of OT constraints. Lingua 142, 66-75.

van de Weijer, Jeroen. 2017a. Emergent phonological constraints: The acquisition of *Complex in English. Acta Linguistica Hungarica 64.1,

van de Weijer, Jeroen. 2017b. Where now with Optimality Theory? In Qiuwu Ma (ed.), Frontier research in phonetics and phonology. Beijing: Foreign Language Teaching and Research Press.

van der Hulst, Harry. 2017. The human capacity for language – The nature and nurture of language. Storrs, CT: University of Connecticut.

van der Hulst, Harry & Jeroen van de Weijer. 2017. Dependency Phonology. In S. J. Hannahs & Anna R. K. Bosch (eds.), The Routledge Handbook of Phonology. London: Routledge.

