How does Bill VanPatten describe how we acquire language?

Linguistics is a rabbit-hole second only to Hegelian philosophy in terms of depth and complexity.  You can move down there and spend the rest of your life looking at cross-clause meaning transfers, lexical ambiguities and other odd denizens which, like the Cheshire Cat, are easy to visualise and often impossible to grasp.

Fortunately, amateur geeks like Eric Herman and me, and a few pros like Bill VanPatten, Mr Noam Chomsky and Stephen Krashen, are here to make sense of the research so that the rest of us can look at thirty kids and pull off meaningful, acquisition-building activities.

Today, a brief run-through answering the question: what actually happens in language acquisition?

Well, to put it simply, we start with linguistic data (words spoken or written).  This just means language used with an intent to communicate meaning.  If it is comprehensible, or partly comprehensible, the language gets “scanned” by the aspect of the brain that we could loosely call “the input processor.”  This input “must come from others,” as VanPatten says.

This processor does a bunch of stuff.  It first looks for meaning, and it does that by looking at what Bill VanPatten informally labels “big words” such as nouns and verbs, and then adverbs and adjectives.  While the input processor is mainly looking for meaning, it is also looking at a bunch of other data.  How do the words in question relate in terms of meaning to other words?  How do they sound?  Where do they go in the sentence?  How do they change when said or written in a sentence?  What are the tone and the speaker’s intent?  (There are other data the processor looks for, too.)  It’s important to note that the only thing the input processor can process is language.  It cannot process images, any kind of explicit rules, or incomprehensible input.

This point is absolutely crucial.  A teacher can explain, say, verb conjugation or pronouns or whatever up the yin-yang, but this information cannot become part of acquired competence.  As VanPatten argues, echoing Krashen, any kind of conscious awareness of grammar rules and the like is only useful if the learner

  1. knows the rule,
  2. knows how to use the rule, and
  3. has time to recall, apply and use the rule.

The processor kicks sorted data (or, more accurately, information derived from sorted data) upstairs to Chomsky’s “language acquisition device,” which runs “software” called “universal grammar.”  The U.G. does a bunch of stuff with that data, from which it starts building what VanPatten calls a “mental representation of language.”  All this big fancy-schmancy term means is unconsciously “getting it”: having an unconscious “language blueprint” or “language software.”  Mental representation is like using the Force: when you have it, things just flow.  Do, or do not– there is no try.  And by “getting it,” we basically mean two things:

a) understanding the language

b) knowing what is grammatically OK and what is not.

You, the reader, have a very well-developed mental representation of English.  You just know– but probably can’t explain why– that you can enjoy running, but that you cannot enjoy to run, and that you can untie your laces, but you cannot unsleep.  You also know that “does John live here?” is OK but “lives John here?” or “lives here John?” is not.

As mental representation develops, output potential emerges.  The more meaningful input we get, the more we process language, build mental representation, and thereby start being able to “spit out” first words, then phrases, and finally progressively more complex sentences.  There is in fact an order of appearance of rules in organic, unforced output (what people can do without any teacher or written prompting).  This is briefly detailed in VanPatten’s 2003 book From Input to Output.

So, to recap: comprehensible language comes in, is parsed (sorted) by the processor, and goes to universal grammar, which– using only linguistic input– builds a progressively more complex “mental representation” of language, which as it develops will permit first understanding and then output of gradually increasing complexity.

Here is how VanPatten describes it in an email:

“I use the metaphor of a grocery checkout.  The cash register computer is the mind/brain.  The bar codes on the products are the input.  And the red light scanner is the input processor.

[Note: in this case, the cash register develops a “mental representation” of your grocery bills– scanner codes plus $$ amounts– from the moment it begins scanning]

The scanner can only read bar codes.  It cannot read pictures, labels, rings on a can, signs, and so on.  And the computer can only receive what the red scanner delivers to it as data.  It does not read the bar codes themselves, only the information processed by the scanner.

Language acquisition is the same.  Only input is useful for the input processor, not knowledge about language or practice. And the mind/brain needs the processed input data in order to build a linguistic system. All components in both systems are dedicated to specific activities and act on only certain kinds of info.”

Take a minute and re-read that.  Good.  Now, read it again.
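
If it helps to see the metaphor in motion, here is a toy sketch in Python (mine, not VanPatten’s– the class names and data shapes are invented purely for illustration).  The “scanner” reads only bar codes, i.e. comprehensible linguistic input, and the “register” builds its representation only from what the scanner hands it:

```python
# Toy sketch of the checkout metaphor (illustrative only, not VanPatten's code).
# Scanner = input processor: it can read only "bar codes" (comprehensible
# linguistic input). Register = mind/brain: it never touches raw input, only
# what the scanner delivers, and uses that to grow its "mental representation."

class Scanner:
    def scan(self, item):
        # Pictures, explicit rules and incomprehensible noise are not bar codes,
        # so the scanner simply cannot read them.
        if item.get("type") != "comprehensible_language":
            return None
        # What gets passed upstairs is processed data, not the raw input itself.
        return {"meaning": item["meaning"], "form_cues": item.get("form_cues", [])}

class Register:
    def __init__(self):
        self.mental_representation = []

    def receive(self, processed):
        # The register only ever works with what the scanner delivers.
        if processed is not None:
            self.mental_representation.append(processed)

scanner, register = Scanner(), Register()
for item in [
    {"type": "comprehensible_language", "meaning": "Juan speaks", "form_cues": ["-a ending"]},
    {"type": "picture"},        # a grapefruit, a chart: cannot be scanned
    {"type": "explicit_rule"},  # "add -a for he/she": cannot be scanned either
]:
    register.receive(scanner.scan(item))

print(len(register.mental_representation))  # prints 1: only the language got through
```

Note how the picture and the explicit rule never reach the register at all, which is the whole point of the metaphor.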

It is also important to note a few other things that VanPatten (and Krashen) have said:

First, there are “working memory” bandwidth limits which come into play during input.  Not everyone can “hold in their head” the same amount of info, and too much info renders the input processor useless.

Second, there is an “order of attention,” so to speak, of what the input processor pays attention to.  At the beginning stages of acquisition, it processes “big words”– nouns, verbs etc– and only once these “make sense” can the brain sort through things like verb endings, articles, gender etc.  Basically, the brain is going to pay attention to the most important aspects of input first.

We know this because, for example, when we teach a relative beginner, say, habla (speaks) in Spanish, the learner will probably be able to tell you quite quickly what habla means (or close to it), but will be unable to explain that the -a ending means “he” or “she.”  This does not mean that the brain is not registering that -a, or anything else, but rather that its main focus is first on “big meaning” and only later on inflections etc.

Finally, teachers need to ensure that learners process L2-unique grammar properly.  VanPatten’s work on processing instruction– getting people to not screw up interpretation– looks at things like this sentence in Spanish:  A la mujer vio el hombre  (“the man saw the woman”).  In English, this literally translates as “to the woman saw the man,” and English speakers tend to interpret it as “the woman saw the man.”  Some “focus on form,” as Long calls it, is necessary to make sure that learners don’t develop “bad processing” habits.

The one thing VanPatten’s metaphor does not do is explain how much repetition the brain needs to acquire something.  In the case of the cash register, all it needs is one bit of data from the scanner and its “mental representation” of the pile of groceries– an itemised bill– grows.  In language, however, the U.G. works by hypothesis testing.  Data comes in, partial rules are formed, and the system waits for confirmation or denial of the rule.  So the U.G. needs LOADS of data.

Consider this.  Habla means “s/he speaks” in Spanish.  Now, here are a bunch of possible ways to use habla:

1. Juan habla con sus amigos.

2. ¿Habla o quiere hablar con sus amigos Juan? 

3. ¿No habla Juan?  Juan no habla.

4. Cuando se pone enojado, ¿habla o grita Juan?

5. ¿Quién habla con Martina—Juan o Antonio?

Every time habla is said here, a slightly different set of meanings, grammar rules, positions in the sentence, intonations and so on is in play.  It is not enough for the brain to simply know what habla means.  It has to see/hear habla associated with other words and sounds, doing different jobs in different places, etc.  Indeed, a word is not a thing, but a cluster of relational properties which changes with context.

Consider this:  ¿Habla con sus amigos Juan?  This means “does Juan talk with his friends?” and, literally, “talks with his friends Juan?”  The U.G. will build a number of hypotheses here, which will look (to us from the outside– what the brain actually does looks… different) like “where does the subject in a question go?  Hypothesis: the end” and “why does sus have an -s?  Hypothesis: -s is for plural adjectives.”  The next time data comes in, the U.G. will test its hypotheses, and if they are confirmed, that bit of neural wiring gets reinforced.
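
To make the “loads of data” point concrete, here is one more toy sketch (again mine, purely illustrative– the candidate “rules” and the confirmation threshold are invented, not claims about what the brain literally does).  Each new exposure either supports a tentative hypothesis or leaves it waiting for more evidence:

```python
# Toy sketch of hypothesis testing: partial "rules" harden only after repeated
# confirmation across varied input. The threshold and hypotheses are invented.

from collections import Counter

exposures = [
    "Juan habla con sus amigos",
    "¿Habla o quiere hablar con sus amigos Juan?",
    "¿No habla Juan? Juan no habla",
    "Cuando se pone enojado, ¿habla o grita Juan?",
    "¿Quién habla con Martina, Juan o Antonio?",
]

confirmations = Counter()
for sentence in exposures:
    s = sentence.lower()
    # Each exposure either confirms a candidate hypothesis or stays silent on it.
    if "habla" in s:
        confirmations["habla goes with a third-person subject"] += 1
    if s.rstrip("?").endswith("juan"):
        confirmations["in a question, the subject can come last"] += 1

for rule, count in confirmations.items():
    status = "reinforced" if count >= 3 else "still tentative"
    print(f"{rule}: confirmed {count} times ({status})")
```

Even in this cartoon version, one exposure settles nothing; the system needs many varied encounters before anything hardens.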

This– among other reasons– is why output, grammar instruction and drills simply do not develop linguistic competence, or mental representation.  There are too many rules which are too complex and subtle for the conscious mind, and acquisition can only happen through meaningful, varied input over time.  Grammar instruction– like grapefruits, music and pictures– cannot be processed by the input processor; output is not hypothesis formation (though it may generate input on which the processor and U.G. can operate); and drills of any kind at best offer dull, impoverished input.

The upshot?  VanPatten’s metaphor flat out tells us

  • there will be no meaningful language development without oceans of comprehensible input
  • anything other than comprehensible input– grammar rules and practice, output, ambiguity– does not help develop mental representation
  • if there is a place in the classroom for grammar talk, it is this: we should discuss grammar ONLY insofar as such discussions support accurate meaning.  Anything other than, say, “-aste or -iste mean you did ___ in the past” is useless.
