Roy Lyster

Good science meets questionable usefulness: Lyster (2004a) on prompting feedback

McGill University professor Roy Lyster gave the British Columbia Language Coordinators’ Association annual conference talk in 2015 about best practices in the French Immersion classroom. He specifically claimed that form-focused instruction and feedback are essential for second-language acquisition.  Well, THAT got me wondering, so I did what a sane guy does on a fine Sunday: I went climbing, and then I read his paper.

Lyster has done a very good job in terms of his research design, controls and so on.  Unlike Orlut and Bowles (2008), Lyster did very good science.  But, as we shall see, there are a lot of problems with his conclusions.  Let’s have a look.

To sum it up, Lyster, following Ellis, DeKeyser et al., argues that a language classroom needs some “focus on form” (explanations about language, as well as activities that make learners process that language) in addition to meaningful language itself, because without some “focus on form,” acquisition of some items fossilises or goes wrong.

Lyster noted that English-speaking kids in French immersion were not picking up French noun gender very well.  There are a bunch of reasons for this: noun gender has almost zero communicative significance, so acquirers’ brains pay it little attention, and Immersion students are typically exposed to materials generated by (and targeted at) native speakers, which do not foreground grammatical features.  Noun gender acquisition is a classic study question because French has it and English does not. Lyster’s question was, “Can form-focused instruction (FFI) centered on noun gender improve noun gender acquisition?”  FFI involved a bunch of instruction about noun gender (basically, how to figure out a noun’s gender from its ending, which in French is fairly regular), plus various decoding practice activities.  Lyster set up four groups:

  1. a control group, which got regular content teaching
  2. a group that got (1) plus FFI (“focus on form” explanations and practice activities) only
  3. a group that got (1) plus FFI plus recasts (errors being “properly resaid” by the teacher)
  4. a group that got (1) plus FFI plus prompts (e.g. the teacher asking un maison ou une maison? after hearing a student make a noun gender error); these prompts were designed to get students to reflect on, and then output, the targeted form

The reasoning for prompts is to “force” the learner to bring “less used” (and improperly acquired or not-yet-acquired) material into the mental processing loop.  Note that this is a technique for advanced learners, those who have a ton of language skill already built up, and would, as Bill VanPatten has noted, overload any kind of beginner.

The results, basically, were that the FFI + prompt group did way better than the others on both the immediate and the two-month delayed post-tests.  Post-tests included both choosing the proper form and producing it.

So, prima facie, Lyster can make the following argument:

“The present study thus contributes to theoretical arguments underpinning FFI by demonstrating its effectiveness when implemented in the context of subject-matter instruction within an iterative process comprising three inter-related pedagogical components:

  1. Learners are led to notice frequent co-occurrences of appropriate gender attribution with selected noun endings, contrived to appear salient by means of typographical enhancement
  2. Learners’ metalinguistic awareness of orthographic and phonological rules governing gender attribution is activated through inductive rule-discovery tasks and metalinguistic explanation
  3. Learners engage in complementary processes of analysis and synthesis (Klein, 1986; Skehan, 1998) through opportunities for practice in associating gender attribution with noun endings.”

Lyster claims that his results contribute to the “theoretical arguments underpinning FFI.”  He is right.  And here is the crux: while he can make theoretical puppets dance on experimental strings, what Lyster does in this paper will never work in a classroom.  Here are the problems:

First, the bandwidth problem: for every acquisitional problem a teacher focuses on “solving,” another problem will receive less attention, because the time and energy we have are limited, so tradeoffs have to be made.  In this case, Lyster decided that noun gender acquisition was a worthy problem.  So materials were made for that, time was spent practising that, and teachers focused recasts or prompts on that.  The students got 8-10 hours of FFI.

The question is, what did they “de-emphasise” in order to focus on noun gender?  Lyster does not address this.  Was Lyster’s testing instrument designed to catch changes in other errors that students made?  No: it looked specifically at noun gender. It is possible, indeed almost certain, that the FFI resulted in other grammar or vocab content being downplayed.  Lyster’s testing instrument, in other words, was not holistic: he looked only at one specific aspect of language.

An analogy may be useful here.  A triathlete needs to excel in three sports (swimming, cycling and running) to win.  She may work on the bike until she is a drug-free version of Lance Armstrong, but if she ignores, or undertrains, the swimsuit and the runners, she’ll never podium.  An economist would say there is an opportunity cost: if you invest your money in stocks, you cannot buy the Ferrari, and vice versa.

Second is what Krashen called the constraint-on-interest problem.  By focusing instruction (or vocab) around a grammar device, we have much less room as teachers to deliver either an interesting variety of traditional “present, practice, produce” lessons or T.P.R.S.- or A.I.M.-style stories.   Imagine deciding that, since the kids have not acquired the French être avec le passé composé, you must build every activity around it.  How quickly will the kids get bored?  Je suis allé aux toilettes.  Est-ce que tu es allé à l’école? Etc. In T.P.R.S. (and in A.I.M.), stuff like this is in every story, but as background, because it’s boring.   It’s like saying, “Paint, but you only have red and blue.”

Third is the rule-choice problem.  Since, as noted above, we can’t deal with every not-yet-acquired rule, we have to choose some items and rules over others. Which will they be? How will we decide?  Suppose teachers came up with a list of a hundred common errors that 6th-grade French immersion kids make.  Which errors should they focus on?  How should materials be built, and paid for, to deal with them?  What if Professeur Stolz couldn’t give a rat’s ass about French noun gender, but Professeur Lyster foams at the mouth on hearing “une garçon”?

Fourth, Lyster’s study does not take individual learning needs into account.  OK, all of the subjects in the fourth group got better with noun gender (temporarily, and with prompting).  But was this the most pressing issue for each person?  What if Max hasn’t acquired the passé composé?  What if Samba is OK with noun gender but terrible with pronouns?  When you use a grammar hammer, everything looks like the same nail.  And noun gender is not very important.  It’s like stripping a car: with no brakes, the whole thing crashes; with no hood ornament, it only looks bad.  Noun gender is the hood ornament of French: it looks good, but it is hardly essential.

The problem with a study like Lyster’s, or with a legacy-methods program that tries to systematically do what Lyster did, is that it reduces the multidimensionality of classroom language, activities and teacher feedback, with the effect of impoverishing input.  If Max needs passé composé input and Samba needs pronoun input, and the experiment focuses activities, learning-strategy instruction and teacher feedback on noun gender, the experiment’s focus inevitably cuts down on the input they need while it plays up noun gender.  As Susan Gross has argued, a comprehensible input classroom solves that problem: by presenting “unsheltered” language (language with no verb tenses, pronouns or other grammatical features edited out), everything learners need is always in the mix.

Fifth, and most seriously, Lyster’s results do not (and could not) pass Krashen’s “litmus test” for whether instructional interventions produce legitimate acquisition.  Krashen has said that if you really want to prove that your experimental treatment aimed at getting language learners to acquire __________ has worked, your results must meet the following criteria:

  • they must be statistically significant not just right after treatment, but three months later
  • they must occur unprompted (what Krashen calls not involving the Monitor)

The three-month delayed post-test is there to show that the intervention was “sticky.”   If something has been acquired, it will be around for a long time; if it has merely been consciously learned, it will slowly disappear.  You can check the reasonableness of this against your own experience, or your students’: how well does language teaching stick in my head, or in my kids’ heads? (Teachers who use T.P.R.S. know how sticky the results are: we do not need to review.  Legacy-methods teachers have to do review units.)  So what are the two most serious problems with Lyster’s study?

First, Lyster did a two-month delayed post-test, so we don’t really know how “sticky” the FFI results were.

Second, Lyster’s assessment of results is largely Monitor-dependent. That is, he tested the students’ acquisition of noun gender when they had time to think about it, and under conditions where the experimenters (or test questions) often explicitly asked whether the noun in question was masculine or feminine. Given that the experimental kids had had explicit treatment, explanations and so on about what they were learning (noun gender), it is not surprising that they were able to summon conscious knowledge to answer questions when assessment time came.

At one point in the study, Lyster’s investigators found that the students being tested had figured out what the investigators were after (noun gender) and had developed a word that sounded like a mix of “un” and “une” specifically to try to “get it right” on the tests. This is not acquisition; it is conscious learning.

Indeed, Lyster notes that “it might be argued therefore that […] prompting affects online oral production skills only minimally, serving instead to increase students’ metalinguistic awareness and their ability to draw upon declarative, rule-based representations on tasks where they have sufficient time to monitor their performance” (425).

Now, why does this matter? Why do Krashen and VanPatten insist that tests of true acquisition be Monitor-free? Simple: because real-world language use happens in real time, without time to think and self-Monitor.  What VanPatten calls “mental representation of language” (an instinctive, unthinking, accurate grasp of the language) kicks in without the speaker being aware of it.  Real acquisition, i.e. knowing a language, as opposed to learning, a.k.a. knowing about a language (being able to consciously manipulate vocab and grammar on tests and in various kinds of performance), is what we want students to have.

The marvellous Terry Waltz has called kids who are full of grammar rules, mnemonics, games, vocab lists, etc. “sloshers”: all that stuff has been “put in there” by well-meaning teachers, and the kids have probably “practiced” it through games, role-plays or communicative pair activities, but because it hasn’t been presented in meaning-focused, memorable chunks (stories), it just sloshes around.

We also want to avoid teaching with rules, lists, etc., because, as Krashen and VanPatten note, there is only so much room in the conscious mind to “hold and focus on” rules, and because the brain cannot build mental representation (wired-in competence) of a language without oceans of input.  If we teach with rules and prompts, and then assess rules and prompts, we are teaching conscious-mind (read: limited) stuff.  We’re teaching to the grammar test.

So, to sum up Lyster’s experiment: he

  • took a bunch of time away from meaningful (and linguistically multidimensional) activities and input, and, in so doing,
  • focused on a low-importance grammar rule, and his results
  • do not show that the learners still had it three months post-treatment,
  • do not show that learners could recognise or produce the form without conscious reminders, and
  • did not measure the opportunity cost of the intervention (the question of what the students lost out on while working on noun gender)

Does this matter?  YES.  Lyster, to the best of my knowledge, is giving bad advice when he recommends “focus on form” interventions.  If you teach Immersion (or a regular language class), doing grammar practice and noticing-style activities is probably a waste of time.   Or, to put it another way: we know that input does a ton of good work, but Lyster has not shown that conscious grammar interventions build cost-free, wired-in, long-term, unprompted skill.

My questions to Lyster are these: on what functionally useful evidence do you base your claim that focus on form is essential for SLA, and how would you suggest dealing with the rule-choice, bandwidth, opportunity-cost and individualisation problems?