Author: cstolztprs

Teaching Spanish (using T.P.R.S.), senior English and Social Justice 12. I make beer, play Irish music and bluegrass on the mandolin, climb, and take care of my adopted kids and my Samoyed, Zoe. Twitter @srstolz

Story Listening? Oh yea!

Dr Beniko Mason.

This post encourages you to try Story Listening, and responds to objections to it.

Story Listening— SL— is a comprehensible input teaching technique developed by Beniko Mason, who taught English to Japanese Uni students, many of whom had failed first-year Uni English. Mason’s students— the “bad” ones— consistently outperformed their traditionally-taught peers, in many cases acquiring twice as quickly as other students.

SL is very simple. The teacher tells a story (ideally, a folktale or something from literature) in the target language and illustrates it on the board by drawing pictures, writing key words, using arrows etc. The teacher can translate and answer any student questions. When this is done, students read the story. Some teachers have students write a summary of the story in their L1. The SL program is supplemented with as much free-choice reading as students have time for. There is no “accountability piece”: the work is done in class, there are few or no quizzes, and students’ homework— should they choose to do it— is just…reading!

SL does not involve homework, output, grammar (or other) “practice,” or grammar instruction (beyond the teacher answering student questions). The instructional sequence moves from shorter, simpler stories to longer and more complex ones.

SL is a “pure input” technique, and it works. Read the research here.

Story Listening has many advantages:

  • In my experience, it’s effective, easy and fun, and I regularly use it.
  • It’s low-prep, and you can use the stories on the https://storiesfirst.org website (you need an email to sign up). This is about the least expensive SL method there is.
  • It generally avoids controversy, because it focuses on folktales and literature, rather than news or teachers’ experiences. Teachers whose students have religious parents will very much appreciate this.
  • It is a way for teachers to maintain their target-language skills. SL uses real folktales, or abbreviated literary works, so teachers are being exposed to non-learner-focused language.

Here in North America, lots of us want to use SL in our classes. But there are some biiig differences between Mason’s teaching and research context, and those of eg most North American teachers. These differences will (in my experience— your mileage may vary) pose challenges. The differences between Japan and North America— and objections to SL— include:

  • Mason’s research does not look at pure beginners.
  • Mason’s students tend to be 19 and up.
  • The Japanese school system is very big on “sit, listen and learn.” In Canada and the US, uh, not so much 😜
  • English is a fairly phonetic language (unlike say Chinese).
  • Neither Mason nor her students have to be “accountable” to anything stupid, such as a set of textbook exercises, or a set of dumb and scheduled exams, etc. They get one big comprehension & writing test at the end of the course.

There have also been other comments. Here are some.

Today’s question: how do we deal with these problems and objections? Answers follow.

1. SL hasn’t been studied/tried with pure beginners. Sure. So, I don’t start with story listening until kids have had about 40 hrs of L2 input. This is enough time for them to implicitly understand sweet 16 verbs, basic sentence & question structure, and some high-freq vocab. This is the platform onto which SL builds a bigger language stack.

The idea is that a basic gut feel for the language will make adding new words easier by reducing the processing load. To illustrate processing load challenges, here are two German sentences:

1. Mark hat einen Fisch.

2. Mark ist gestern nach Hamburg mit seinem Kumpel gegangen.

You could probably figure that the first sentence means Mark has a fish. The only really new word is einen. So it’s 25% unfamiliar.

In the second— which has the two obvious words Mark and Hamburg— you have 6 totally new words, and you might have guessed that ist means “is.” So this is 66% unfamiliar words. We also have some weird word order. That sentence literally translates as “Mark is yesterday to Hamburg with his buddy gone.”

Sooo…when the new-word ratio is low, we have much easier processing.
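If you like numbers, here is a minimal sketch of that arithmetic (Python; the “known word” sets are just my assumptions about what an English speaker would recognize, not data):

```python
# Rough sketch: the "processing load" of a sentence, estimated as the share of
# words a learner does not already recognize. The known-word sets below are
# guesses about what an English L1 beginner might recognize; adjust as needed.

def unfamiliar_ratio(sentence: str, known: set) -> float:
    words = sentence.lower().rstrip(".").split()
    new_words = [w for w in words if w not in known]
    return len(new_words) / len(words)

s1 = "Mark hat einen Fisch."
s2 = "Mark ist gestern nach Hamburg mit seinem Kumpel gegangen."

known_1 = {"mark", "hat", "fisch"}      # Mark, has (close to English), fish
known_2 = {"mark", "ist", "hamburg"}    # Mark, is (guessed), Hamburg

print(f"Sentence 1: {unfamiliar_ratio(s1, known_1):.0%} unfamiliar")  # 25%
print(f"Sentence 2: {unfamiliar_ratio(s2, known_2):.0%} unfamiliar")  # ~67%
```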

There are teachers who start SL with beginners. You can talk to them (and to Beniko Mason) on Facebook here.

2. Mason’s students are older, and have been trained to sit, listen and be quiet. Sure! So, we do a few shorter stories instead of one long one in a class. Or, we do SL for part of a class only. We have brain breaks! We do some PQA when a story is done (point to board, and ask basic questions). We can add PQA to the story. No, these modifications of Mason’s method are not ideal, but we do what works in our context.

Mason has correctly commented that anything other than C.I. isn’t helping acquisition nearly as much as pure C.I. does. However, our objectives may well include generating output (for admin/observation & teacher-eval purposes), and they will certainly include classroom management. So we might well have to mix other things into SL.

3. English is fairly phonetic, so SL won’t work for non-phonetic languages. True. For F.P.I.G.S. teachers, SL works (in part) because literate L1 learners can read (and there are cognates). SL will not work for eg English L1s acquiring say Chinese. You can’t read a Chinese character, sound it out, and map that sound onto your understanding of spoken Chinese.

If you taught Thai, Hebrew or Hindi— non-Roman alphabetic languages— to English L1s, you would want to ensure a massive amount of vocab-limited input (aural and written) before you started SL, and people would have to be able to read. If people cannot read the board, they have problems.

4. There is no “visible accountability”— i.e. there’s no evidence the students are “doing anything” with the language— in a SL class. This is a problem for teachers being observed/tied to a specific curriculum.

If you are tied to a stupid textbook sequence, and/or have dumb grammar-focused exams, SL is not going to work that well.

If you are being observed, and your observer doesn’t understand SLA, I would do something other than SL (unless the observer has an open mind 🤣🤣). If they do understand SLA, we tell them this is C.I. delivery, followed up with reading, and we could— during the reading phase— ask some questions to keep kids visibly focused.

If you must occasionally have kids show output, I would do some TPRS-style stories, and make (and write up) OWI stories. Especially in Levels 1 and 2, these will give kids the simple language chunks they need to throw down some stories or descriptions.

5. There is “no assessment of any kind.” This is not true. Although Mason, with her college students, can avoid tests etc until the final, we can easily do tests to assess comprehension. You can do a dictation to assess listening. You can also have students either summarise or translate the written version of the story. This can generate two marks/week.

My experience with Story Listening was at a demo with Mason herself, who told a very short story in Japanese to us (she wrote the Japanese words with Roman letters); none of us knew any Japanese. I was lost within two minutes: I saw pictures, and heard and read words, but they didn’t go together. I got the gist of the story but found the language hard to follow.

When she was done, I counted about 25 words. I could follow the story via pictures and I knew a couple of words— ojo (princess) and shinrin (forest)— but I would not have been able to read a Roman-alphabet version. Japanese has weird word-order and question “rules” and few cognates.

From this I concluded that SL would work best if students had some base knowledge. This would focus mental energy on new stuff, rather than having to focus on everything new all at once.

Anyway, overall, Story Listening is fun, effective, low-prep, and low-cost, and is therefore well worth learning and using. 😁😁

Oct 25th Languages Pro-D in Surrey, B.C.

Hello languages & E.L.L. teachers!

For Mainland teachers who aren’t going to the Island, there will be some C.I. Pro-D on Oct 25th in Surrey!

😁

What:  

1. A demo & explanation of Beniko Mason’s Story Listening method. You will leave with an awesome method AND all the free resources you need! We will do the demo in German (no experience necessary) so you feel how easy acquisition is.

2.  Zero-Prep C.I. No lesson plan? Want to add to a story? No problem! Four very simple strategies to deliver interesting C.I. content without planning.


3.  C.I. Opening Routine. A simple, easy and student-centered way to teach boring necessities (numbers, time etc), as well as early introduction of past-tense forms.

4. C.I. Round Table. Bring your C.I.-related questions and ideas, and the group will discuss them. I also have some good short discussion items about SLA science to feed our brains.

When:

Fri, Oct 25th, 9-3. One hour for lunch.

Where:
Room S208, Tamanawis Secondary, 12600 66th Ave, Surrey, BC V3W-2A8

Who:

Teachers of all languages, E.L.L. teachers. Everyone welcome!

How much:

Free! Bring your own lunch, or contribute $10 for an amazing lunch of acha dhesi khanna—good Indian food!  (veg options too!)

RSVP—I will only have room for 30 people. Bonus: French and Spanish teachers get a free copy of my kids’ novel Berto y sus Buenos Amigos/Jean-Paul et ses Bons Amis.  

Feel free to forward this email.

chris stolz
stolz_c(att)surreyschools(dott)ca

tamanawis secondary

What is my C.I. Workload?

Recently somebody asked: how do I reduce my marking load? This is a crucial question. Anyone who is overloaded/tired makes poorer, short-term choices (and is functionally less intelligent) than the non-overloaded.

Blaine Ray once joked that TPRS was developed partly to improve his golf game. There is a solid kernel of truth here: when teachers have family lives, hobbies and rest, they are much better focused in class. Same for kids!

So…here is a look at the workload in Spanish 1 to offer people perspective.

Background: my language teaching is about 50% classical TPRS. On top of that, we have Movietalk, Picturetalk, opening routine, Story Listening, zero-prep activities and of course reading (and I use OWIs in TPRS-style stories). I more or less base everything around a story cycle.

I also want to spend as little time as possible testing and marking (these take away input time, and are boring).

The Marking Workload

I deliver C.I. for 75 min/day for a total of about 6 hrs/week for five months per year. My testing includes:

• one or two 5-7 sentence story-listening quizzes per story cycle (about two weeks). I read a 5-7 sentence story aloud, sentence by sentence; the kids copy the sentences down, then translate them into English.

• one reading assessment per story cycle. Here, kids translate any of the following: a short story (using recent story vocab), a Wooly story, sentences from the novel we are reading, or I upload a class story to Textivate and use that.

• At the end of every story cycle, we test writing. First, kids have 5 min. to describe a picture. Second, they have between 15 and 50 min. (depending on grade & time during semester) to write a story.

It takes me about 15 min/block to mark & enter quizzes, so 30-45 min every two weeks for quizzes (faster if it’s Textivate or I’m using Wooly for listening).

5-min writes = 15 min/class to read & enter.

Stories take about 40 min (you don’t have to read all of each story— reading 5 random sentences will give you a very accurate picture of their writing).

So 95 min biweekly of work.

Marking load per block per week: 45 min.

The Preparation Workload

I have a vague idea pre-story what vocab (usually verbs, adverbs and prepositions) I want in each story. So prep is zero.

Once a story is asked, I type it up (15 min), look for & cue up some Movietalks (5 min), look for & load pics for Picturetalks (5 min) and type up the most recent bits from the Soap Opera (10 min). So the prep takes 35 min/two weeks = about 20 min/week.

So Spanish workload = 1 hr 5 min per block per week outside of class time.
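For anyone who wants to check my math, here is a quick sketch of that arithmetic (Python; the numbers are the ones from this post, with a midpoint used where I gave a range):

```python
# Back-of-the-envelope check of the biweekly (one story cycle) workload above.
# All figures are from the post; 37.5 is the midpoint of the 30-45 min range.

marking_per_cycle = 37.5 + 15 + 40   # quizzes + 5-min writes + story writes
prep_per_cycle = 15 + 5 + 5 + 10     # type story + Movietalks + Picturetalks + soap opera

# These land close to the post's rounded figures of 45 / 20 / 65 min per week.
print(f"marking: ~{marking_per_cycle:.0f} min per cycle, ~{marking_per_cycle / 2:.0f} min/week")
print(f"prep:    {prep_per_cycle} min per cycle, ~{prep_per_cycle / 2:.0f} min/week")
print(f"total:   ~{(marking_per_cycle + prep_per_cycle) / 2:.0f} min per block per week")
```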

This seems to be a bit lower than the workload in English, about comparable to Philosophy, and much higher than for guitar. One thing is for sure: I get waaaaaay better results, have much more fun (as do the kids), and work a lot less than I did when I used the textbook.

In other words, as research shows, C.I. is not just fun and effective, it’s efficient! 😁😁

So…what did I do to reduce marking?

  1. I stopped giving out & marking stupid cahier/cuaderno homework.
  2. I stopped planning “activities.” C.I. stuff delivers everything we need to acquire language.
  3. I ditched huge projects. Output and translate-into-TL don’t do much for acquisition.
  4. I got rid of most electronics. I’m too lazy to plan QR-code this and Quizlet that. A flashcard on a smartphone is still a flashcard.
  5. I stopped giving stupid “unit exams” complete with multiple-guess questions which took forever to mark.

Getting Rid of a Big Buuuuut

(image: a bottle of wine)

This is a decent bottle of wine. It’s also a bet. I bet you this bottle of wine that nobody can refute what follows. Go on…take  the bet! (Mormons and other teetotalers, we can do a bottle of Portland’s finest kombucha, how’s that?)

We all know that C.I. works for language acquisition. Actually, we know that C.I. is the only thing that works. As linguist Bill VanPatten put it on his show, “the benefits of grammar-focused teaching are purely incidental.” That is, when we give students worksheets, or force them to talk/memorise scripts, or memorise lists of words or grammar rules, or whatever silly thing the textbook preaches, they pick up (a wee bit of) language not because of these activities, but despite them.

We have research to support these claims.  Yet, we still have colleagues, Headz, Adminz, Faculty Adjunctz, Evaluatorz, some Parents, and even some students, who say a version of “buuuuut…C.I. doesn’t work, because _______.”

That’s a biiiiiig buuuuut, and nobody’s pedagogical self wants to walk around dealing with THAT, sooooo…TPRS Questions And Answers is proud to present, Getting Rid of a Big Buuuut, aka “the short n sweet for the haters.” Some people don’t like, can’t or won’t read, or don’t “believe in” science. This is for them. Here goes. Thank you: BVP, Robert Harrell, Terry Waltz, Blaine Ray, Bob Patrick, Tina Hargaden, Eric Herman and others for many of these ideas. Note: these work. But you have to tailor them to audience, etc.  As always, YMMV.

  1. “…buuuuut people need to talk to acquire language.” 
    Robert Harrell: OK, so you need to talk to learn to talk. Right. What language would you like to learn?
    — Uhh, Urdu.
    OK, let’s start by speaking Urdu.
    — But I don’t know any Urdu


  2. “…buuuuut we need to [consciously] know grammar rules to speak a language.”

    Me: Which sounds better, I like to run, or I enjoy to run?
    — I like to run.
    Why?
    — …
    Who taught you that “rule”? Did you practice that “rule”?
    — …

    Terry Waltz: *takes out phone and turns stopwatch on*
    Terry: Say three sentences about what you did last night.
    Uh, I cooked dinner and ate with my kids.  Then I watched the news. Later my husband put the kids to bed.
    OK, now, say three sentences about what you did last night, but don’t use the letter “s”.
    I, uhh, cooked dinner and I ate with my uhhh children. Then I watched uhhh TV. And my hu– er, partner– put our ki– err, children– to bed.
    Terry: Your first took you 4 1/2 seconds. Your second took you 16. How easy is it to speak when you have to think about your own language?

  3. …buuuuut if your kids don’t know how to conjugate verbs and fill in the blanks, how are they going to be ready for [high school/middle school/Uni]?”

    Vice-Principal in a Portland school: Riiiiight, good point.  Let’s have a look at State/provincial standards. Hmmm. I don’t see anything here about our curriculum preparing students for any specific subsequent classes.  Could you show me that?
    Colleague: …

  4. …buuuuut they still NEED those skills.”

    Tina Hargaden: Suuure. Let’s have a look at State standards.  There is going to be something in there that says, “students will be able to conjugate verbs and fill in worksheets.”
    *looks up the Oregon World Languages Standards re: what Novice High students should be able to do.*
    Tina and colleague: read that students at this level “understand, exchange, and present information about familiar topics in everyday contexts using a variety of rehearsed or memorized words and phrases with attempts at creating simple, original sentences and questions.”
    Tina *shows colleague examples of how students can read and write stories in, and understand spoken Blablabian*
    Tina: sooooo those verb conjugations.  Where do the Standards mention them?
    Colleague: …

    Note: if you can find ONE State or Provincial language curriculum that includes verb chart filling out, pronoun-placing etc work as an objective, that bottle of wine is on me cos you, uh, “win.” Go on, get your Google on!
  5. …buuuuut students need to know all the words for food if they will ever survive in France.”
    Terry Waltz: I’m a certified, professional Mandarin-English translator and I  lived and worked in Taiwan for years. I still don’t know all the words for the food I typically eat there. Neither do the people who live there. And when we don’t know, we just point, and say I’ll have that.

  6. …buuuuut students must know all the numbers from 1-3,998,231.6, all the location words, all the colours, the alphabet, all the basic body parts, and the words for clothing.

    You (in your head): ya right cos when I go to Taiwan, I’m gonna need to say “I need 87 pairs of blue pants to wear on my legs A and B under the raincoat.”
    You (actually; thanks Eric Herman): Why?
    — Well, these are the basics of language.
    You: What do you mean?
    — They are used a lot. Basic. Also they are in our textbook as the first units and they are on the exam I have been giving for the last 45 years.
    You: I wonder.  How about we look at frequency lists to see what’s most used?
    — Sure.
    You: *show them the Wiktionary Frequency Lists*
    You: *press CTRL F to search the list.*
    You: OK, let’s see whether or not “yellow” is in the top-1000 most-used words in Spanish.
    You: *type in amarillo. Nothing comes up. Type in sea (“is” in the subjunctive form, typically taught in Level 4 or 5 in textbook programs). Sea is the 150th most-often-used word in Spanish.*
    You: Hmm that’s weird, well I guess we better ditch colours in Level One and start teaching the subjunctive.
    — …

  7. …buuuuut when *I* was in school, WE learned Latin by memorising verbs and lists of other words.

    Bob Patrick: You took Latin in high school?
    — Yeah, and I got 91.358%.
    Bob: Quid agis hodie?
    — …
    Bob (s.l.o.w.l.y.): Quid agis hodie?
    — …
    You: It’s normal for any student to forget some language over time. But you had trouble understanding me asking you how are you today? in Latin.
    — …

    Note: Kids, don’t try this in parent-creature int– err, I mean, student learning reflection conferences. And if you do, let me know how you did it politely.


  8. …buuuuut that input stuff doesn’t work, because students aren’t learning grammar.”

    Blaine Ray: I believe the best thing a department can do to show who is learning the language and who is not is to share timed writings. If departments required teachers to bring all timed writings from their classes, then it would show who is teaching well and who is not. Teachers wouldn’t be able to pronounce that their students are learning. They would show what their students have learned by bringing in writing samples of all of their students.”

    You: That’s possible.  Why don’t we see? I propose this: let’s you and I choose a picture of, I dunno, a boy walking his dog. We’ll each project that in front of our classes. Students will have five minutes to write about the picture.  They can’t use phones, notes, dictionaries, etc.  Then, we’ll compare.
    — ….

  9. …buuuuut [C.I. instruction, using stories and other interesting materials] is too teacher-centered.”
    Bill VanPatten: The [C.I.] classroom is NOT teacher-centered. It is teacher-led.
    …buuuuut [C.I. classes are] too much about fun, and not enough about real communication.”
    BVP: Entertainment is a valid form of communication.
    …buuuuut [C.I. classes are] too much about stories and characters, and not enough about exchanging information.”
    BVP: [C.I.] is communicative, since it involves the expression, interpretation, and negotiation of meaning in a given context.
    …buuuuut teachers who use TPRS [and other comprehensible input strategies] do not teach enough explicit grammar.”
    BVP: What’s on page 32 in the textbook will not be the language that winds up in a student’s head.
    …buuuuut in a C.I. class, there is very little interaction with input, because students are listening to stories and questions, not engaging in conversations.”
    BVP: Interaction with input simply means indicating comprehension. Students can do this in many ways.

10. Colleague/Head: “We all have to use the textbook, common assessments, etc, because we need to make sure everybody’s students have covered the same material, so if one teacher’s kids go to another teacher next year, they will be prepared.”

(Ideally, read Mike Peto’s response and try that).

You: such as?
— well in level 1 students learn food vocab, to eat, regular present tense verbs, pedir, etc
You: does it say that in the State standards? Is that a level 1 outcome?
— well no, but, we need some kind of framework
You: I agree. Let’s base it on State/ACTFL standards, and not textbook units.

K folks, have at it.  Refutations = you get a bottle of wine!

Gianfranco Conti’s Claim and the Evidence

Dr. Gianfranco Conti just joined CI LIFTOFF. Yay! Now, along with Bill VanPatten, this group has two experts. VanPatten has a PhD in linguistics (where he focused on second-language processing strategies) and has written a few hundred articles in scholarly journals and books, a whack of books, some textbooks, and some novels (his 2011 CV is here). Conti is a French and Spanish teacher with a PhD in applied linguistics (his CV is here).

Conti and I have divergent views about SLA. Mine is pretty standard: during a discussion, I wrote that “the only way language acquisition occurs is through processing comprehended input.” Whatever else may be going on with a learner and their class/learning environment (e.g. forced output, grammar teaching & practice, grammar-“rule” feedback, etc), it is the C.I. that the learner is getting that drives the acquisition bus. That’s my claim.

Conti countered with this: “Chris Stolz you are welcome to your viewpoint, but the weight of research is solidly against you. Explicit instruction appears time and again to be superior to implicit instruction and there is an argument that it demonstrates to the learners that they can approach language empirically, just like biology or chemistry, and thus makes it more interesting to a wider range of learners.”

Note that Conti’s claim has one giant problem: he doesn’t define what “superior” means, or to what it applies. I give him the benefit of the doubt and guess that “superior” means “generates more durable and accurate mental representation of the target language in the learner’s brain.”

Being the data-slave that I am, I asked for evidence. Here is Conti’s support for his position. My comments follow each work he cites.

Chan, A. Y. W. & D. C. S. Li. 2002. ‘Form‐focused remedial instruction: an empirical study’. International Journal of Applied Linguistics 12/1: 24-53. This study of Cantonese L1s studying L2 English lacks a real control group. When it was begun, the control group instructional materials were so boring that the researchers were forced to treat them with almost the same instruction as the treatment group. Both groups experienced equal gains on post-treatment and delayed post-treatment assessment. The absence of a real control group– eg one which got regular English instruction, or just comprehensible input aurally and/or in writing– means we have no idea what the intervention did relative to other interventions. The assessments also focus on conscious learning. This study therefore does not appear to support Conti’s claim.

Craik, F.I.M. & R. S. Lockhart. 1972. ‘Levels of processing: a framework for memory research’. Journal of Verbal Learning and Verbal Behavior 11: 671-684. This article does not look at the role that grammar instruction plays in language acquisition, and therefore does not support Conti’s claim.

DeKeyser, R. 1994. ‘Implicit and explicit learning of L2 grammar: a pilot study’. TESOL Quarterly 28/1: 188-194. DeKeyser’s study used an artificial language (Implexan) and examined whether people who were told grammar rules (and then given input) acquired a better “feel” for the meaning of the language, and better production, than did those who received merely input. He showed subjects pictures with accompanying sentences in the synthetic language. DeKeyser used a production test and a grammaticality judgment test to assess learning.

This study has a number of problems.  These include:
1. n = 6
2. He doesn’t provide his data. Am I missing something? I have a PDF of the article and I don’t see his data.
3. DeKeyser doesn’t explain what he means when he says that the subjects who received explicit grammar instruction “learned categorical rules.” Does this mean they were able to consciously formulate them? Does this mean they could apply them?
4. No delayed post-test was conducted.
5. DeKeyser does not say whether or not the subjects were told the meaning of the sentences. If they were not, this experiment is looking at pattern recognition (general cognitive processing) and not language acquisition. This leads us to…
6. …the probability that the grammar-rule instruction included references to meaning, and therefore made the input more comprehensible. Eg, a student sees a picture of a man on a horse, and the sentence “flerb guf dibble.” They are then told that the word order is subject-object-verb. This means the sentence probably reads “man horse rides/is on.” This person has a leg up on the person who just sees the picture and the sentence and has to guess its meaning.

For these reasons, DeKeyser’s study does not support Conti’s claim.

DeKeyser, R. 1995. ‘Learning second language grammar rules: an experiment with a miniature linguistic system’. Studies in Second Language Acquisition 17/3: 379-410. This synthetic-language (Implexan) study is the only one which appears to support Conti’s hypothesis that explicit grammar teaching is more effective than delivering comprehensible input when building proficiency in production and comprehension. In this study, which is basically a much bigger version of his 1994 study (and in this one, subjects were told sentence meanings in English), DeKeyser found that the explicit and implicit learning groups were able to recognise the same amount of vocab by the end of the study. However, the explicit-instruction group significantly outperformed the other group in production accuracy. So far, so good, but…
1. There was no delayed post-test.
2. The use of artificial languages– done to simplify testing– is problematic. One must note that nobody has ever gotten DeKeyser’s results with a real language.

 

DeKeyser, R. 1998. ‘Beyond focus on form: cognitive perspectives on learning and practicing second language grammar’ in C. Doughty & J. Williams (eds.). Focus on Form in Classroom Second Language Acquisition. Cambridge: Cambridge University Press. This is not an empirical study. It therefore does not support Conti’s claim.

Erlam, R. (2003). ‘The effects of deductive and inductive instruction on the acquisition of direct object pronouns in French as a second language’. The Modern Language Journal 87/2:242-260.  This study– by the author’s acknowledgement– shows that people who get explicit “grammar” instruction do well on tests where explicit (declarative) knowledge of “grammar” can be accessed. The post-treatment measures of speaking and writing, by Erlam’s admission, did not prevent students from “thinking about” answers. In other words, Erlam tested for conscious learning and not implicit acquisition. This does not therefore appear to address Conti’s claim.

Fotos, S. & R. Ellis. 1991. ‘Communicating about grammar: a task-based approach’. TESOL Quarterly 25/4: 605-628. This study, in the authors’ words, measured and “encouraged communication about grammar.” The authors tried to see if conscious-grammar problem-solving helped students learn grammar rules (learn = consciously understand and explain). This study does not focus on acquisition, and therefore does not appear to support Conti’s claim.

Gass, S. & L. Selinker. 2008. Second Language Acquisition: an Introductory Course (Third Edition). New York: Routledge/Taylor. This is not a book of empirical studies, and therefore does not support Conti’s claim.

Genesee, F. 1987. Learning through Two Languages. New York: Newbury House.  This book does not contain empirical data, and so does not support Conti’s claim.

Hulstijn, J. 1995. ‘Not all grammar rules are equal: giving grammar instruction its proper place in foreign language teaching’ in R. Schmidt (ed.), Attention and Awareness in Foreign Language Learning (Technical Report Nº 9). Honolulu, Hawai’i: University of Hawai’i, Second Language Teaching and Curriculum Center, 359-386. There may be a mis-citation in the list Conti provides; I could only find a 1994 collection from R. Schmidt (ed.), and that volume does not contain any empirical studies. This paper– if I have found the right one– outlines theories about consciousness and the “noticing hypothesis” in SLA, but does not provide empirical data. It therefore does not appear to support Conti’s claim.

Johnson, K. 1996. Language Teaching and Skill Learning. Oxford: Blackwell.  This is a teaching handbook.

Klapper, J. & J. Rees. 2003. ‘Reviewing the case for explicit grammar instruction in the university foreign language learning context’. Language Teaching Research 7/3: 285-314. This study of English L1 students of L2 German compared two groups: those taught a basically “hardcore grammar” German class (focus on forms), and those taught a much less grammar-focused “society and culture” class (focus on form).

There are a number of problems with this study:
1. As the researchers themselves note, their study “might be thought to favour explicit over implicit language knowledge and it is certainly possible that slightly different results might have been obtained with fluency measures.” No kidding! The assessment tool was a gap-fill grammar test, on which the hardcore grammar students (FoFs)  predictably beat the others.
2. There was no control group which received “pure” C.I.

This study, because it assesses conscious grammar knowledge and not spontaneous comprehension and/or production, does not support Conti’s claim.

Ming, C. S. & N. Maarof. 2010. ‘The effect of C‐R activities on personal pronoun acquisition’. Procedia – Social and Behavioral Sciences 2/2: 5045-5050.  For the purposes of examining acquisition, this study is flawed. It does not use either a control group or delayed post test. It assesses conscious learning rather than acquisition. It therefore does not support Conti’s claim.

Nation, P. 2007. ‘The four strands’. Innovation in Language Learning and Teaching 1/1: 1-12. This does not provide any data and therefore does not support Conti’s claim.

Norris, J. M. & L. Ortega. 2000. ‘Effectiveness of L2 instruction: a research synthesis and quantitative meta‐analysis’. Language Learning 50/3: 417-528. Ah yes, the bad boy. This study looked at….every other study about language instruction, and concluded that, yes, grammar teaching is necessary. The devil, as always, is in the details: Ortega and Norris do not distinguish between research focused on acquisition (unrehearsed, spontaneous language use) and learning (where explicit awareness, rule-learning etc come into play).

Skehan, P. 2003. ‘Task-based instruction’. Language Teaching 36/ 1:1-14. This article discusses task-based teaching. It does not provide any empirical evidence for Conti’s claim.

Spada, N. & P. M. Lightbown. 2008. ‘Form-focused instruction: isolated or integrated?’ TESOL Quarterly 42: 181‐207. This discusses various types of instruction but does not provide data. It therefore does not support Conti’s claim.

Spada, N. & Y. Tomita. 2010. ‘Interactions between type of instruction and type of language feature: a meta‐analysis’. Language Learning 60/2: 1‐46. In this study, the authors summarise research into instructional practices and conclude that, yes, explicit grammar instruction “works.”  However…in my view, this meta-analysis is deeply flawed, for the following reasons.

1. Only 4 of the sample studies have a 2nd delayed post-test, and the average interval after treatment was only 5 weeks. No delayed P.T. = we have no idea if the gains are durable.

2. As Eric Herman points out, “just because language use is “free” or “spontaneous” does not rule out people using explicit knowledge (especially if the treatment primed them to do so), and especially in an untimed written mode, which was counted as a “free” response!” The authors themselves write “Thus, one cannot be certain that the oral production tasks used in the primary studies for this meta-analysis are indeed measures of implicit knowledge” (p. 287).

3. One “effective” study of VanPatten’s processing instruction (Benati, 2005) was characterized as “explicit.” However, in P.I., as VanPatten has pointed out, students are not “being taught rules.”

4. The single-biggest problem here, however, is the authors’ description of language acquisition as “learning rules.” This is not what happens with language. There are, strictly speaking, no “rules” to be learned.

One can summarise Spada and Tomita by saying they provide evidence that explicit grammar instruction appears to help production only under conditions where conscious language use can occur. This meta-analysis thus appears to contradict Conti’s claims.

Swain, M. 1985. ‘Communicative competence: some roles of comprehensible input and comprehensible output in its development’, in S. Gass & C. Madden (eds.), Input in Second Language Acquisition. Rowley MA: Newbury House, 235-253. Swain does not provide any data here; the essay focuses on possible roles for input and output rather than on specific evidence about language acquisition. In addition, she notes that the main “job” of output in language acquisition is to generate more– and more focused– input for the learner. This article does not support Conti’s claim.

Swan, M. (1994). ‘Design criteria for pedagogic language rules’, in M. Bygate, A. Tonkyn and E. Williams (Eds.), Grammar and the Language Teacher. London: Prentice Hall, pp. 45‐55. The full article is available at https://mikeswan.net. This article does not contain any empirical evidence.

Van Patten, B. & S. Oikkenon. 1996. ‘Explanation versus structured input in processing instruction’. Studies in Second Language Acquisition 18/4: 495-510.
This is a replication of the classic VanPatten & Cadierno (1993) study on the effects of processing instruction. They wanted to see whether exposure to language, direct instruction, or processing activities resulted in better development of the ability to process “non-Englishy” language (Spanish, with its object-pronoun word order). Their conclusion: “[r]esults showed that the beneficial effects of instruction were due to the structured input activities and not to the explicit information (explanation) provided to learners.” In other words, this refutes Conti’s claim: it’s the processing of input, not the explicit explanation, that develops mental representation of language.

Willis, D. & J. Willis. 2007. Doing Task-Based Teaching. Oxford: Oxford University Press. This book does not contain any empirical studies.  It therefore does not support Conti’s claim.

Conclusion: Gianfranco Conti has claimed that explicit instruction generates more language acquisition (fast, accurate and spontaneous comprehension and production of language) than does the provision of comprehensible input. He provided 23 references that allegedly supported his claim. Only one–marginally– does so. The rest either contradict his claim, are irrelevant, or otherwise do not support it. As of this reading, Gianfranco Conti’s claims await substantiation.

(How) Should I Use Questions to Assess Reading?

Yesterday I found a kid in my English class copying this from her neighbour. It is a post-reading assessment– in Q&A form– for the novel Les yeux de Carmen. TPT is full of things like this, as are teachers’ guides, workbooks, etc.

The idea here is, read, then show your understanding of the novel by answering various questions about it. It “works” as a way to get learners to re-read, and as what Adminz like to call “the accountability piece,” ie, “the reason to do it is cos it’s for marks.”

Before I get into today’s post, I should note that I (and every teacher I know) use some kind of post-reading activity.

Q: Should I use questions to assess reading?

A: Probably not. Here’s why.

  1. How do we mark it? What if the answer is right, but the French is poor? Or the reverse? Half a mark each? Do we want complete sentences? What qualifies as acceptable and not for writing purposes? What if there is more than one answer? What’s the rubric we use for marking?
  2. It can (and, basically, should) be copied. This is the kind of thing that a teacher would send home to get kids to re-read the novel. Fine, but…it’s boring, and it takes a long time. It doesn’t use much brain power. If I were a student, I would copy this off my neighbour. If you don’t get caught, you save a bunch of time, and the teacher has no way of noticing.
  3. It would totally suck to mark this. Do you actually want to read 30– or 60!— of these?!? I dunno about you folks, but I have a life. We have to mark, obviously, but these, ugh, I’d fall asleep.
  4. It’s a lot of work for few returns. I asked the kid who’d lent her answers to her friend how long it took (btw, there is one more page I didn’t copy), and she said “about 45 min.” This is a lot of time where very little input is happening.  The activity should either be shorter, or should involve reading another story. As Beniko Mason, Stephen Krashen and Jeff McQuillan (aka The Backseat Linguist) show us, input is more efficient than input plus activities (ie, instead of questions about a story, read another story).  As the great Latinist James Hosler once remarked, “for me, assessment is just another excuse to deliver input.”

So…how should we assess reading? Here are a bunch of ideas, none of them mine, that work.

A. Read the text, and make it into a comic. Easy, fun, useful for your classroom library and requires a bit of creativity.

B. Do some smash doodles. This is basically a comic, but minus any writing. As usual, Martina Bex has killer ideas.

C. Do a discourse scramble activity. For these, take 5-10 sentences from the text, and print them out of order (eg a sentence from the end of the text near the beginning, etc). Students have to sort them into the correct order, then translate them into L1. This is fairly easy– and even easier if a student has done the reading, heh heh– and it requires re-reading without requiring output. (There’s a rough prep sketch after item D, below.)

Another variant on a discourse scramble is, have students copy the sentences down into order and then illustrate them.

For C, they get one mark per correct translation (or accurate pic), and one mark for each sentence in its proper place. Discourse scramble answers can be copied, so I get kids to do them in class.  They are also due day-of, because if kids take them home others will copy.

D. If you have kids with written output issues, you can always just interview them informally: stop at their desk or have them come to you and ask them some questions (L1, or simple L2) about the text.
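Here is the prep sketch promised under C: a minimal example (Python) of turning a handful of sentences into a scrambled handout. The sentences are invented placeholders, not from an actual novel.

```python
import random

# Take 5-10 sentences from the text in their original order, shuffle them,
# and print a numbered, out-of-order list for students to re-sequence and
# translate. The sentences below are made-up placeholders.

sentences = [
    "La chica vive en Vancouver.",
    "Un día, ve un perro en la calle.",
    "El perro no tiene casa.",
    "La chica lleva al perro a su casa.",
    "Ahora el perro vive con la chica.",
]

scrambled = sentences[:]   # copy, so the original order stays as the answer key
random.shuffle(scrambled)

print("Put these sentences in order, then translate each one into English:")
for number, sentence in enumerate(scrambled, start=1):
    print(f"{number}. {sentence}")
```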

Alrighty! Go forth and assess reading mercifully :-).

 

 

 

The Rule of Three: Simpler Evaluation

Teachers are uhhhh obsessive, especially about marking. We write and rewrite assessment instruments, when we could be hitting a bachata class, ripping up the Grand Wall after work, or kicking back with our five-year-old.

(image: an overloaded teacher. Wanna be overloaded like her? 😞😞)

We spend too much time thinking about grading. Luckily for us, I’m gonna make the rest of your teaching career waaaaaay simpler by showing you how to make marking simple.

So…imagine if you got marked on partying. They give you a Number for how well you party.

Q: what would the rubric look like?

A: like this…

1 You are on your way to the party.
2 You are standing in the doorway, chatting with the host, eyeing a nice martini.
3 You are shaking it on the dancefloor with thirty others, with your second drink, and the sexiest person at the party is checking you out.

Works? Sure! It’s simple, quick and accurate. Your Party Mark will be 33%, 67% or 100%. Now, say we also wanted to grade outfits. So we add this:

1 Sweats and slides are kinda basic…but hey, you got out of bed!

2 Business casual? You look good and respectable but no eyeballs/mentions for you.

3 Oh yeah! What’s yr Insta, gorgeous? 😁

As we evaluate our partiers, they can get marks from 2/6, going upward in one-sixth (roughly 17%) intervals, to 6/6.

Various assessment gurus will tell you something fairly similar regarding attaching Numberz to Performancez: there are only three real levels of skill that one can accurately describe.  These are basically, not yet proficient, functionally proficient, and fully proficient. Breaking things down further is complicated, and therefore makes marking slower (and rubrics harder for students to understand). The more you refine descriptors and levels, the harder it is to distinguish between them.

So here is our Rule of Three for Evaluation:

1. We focus on three levels of skill (not yet, just got it, fully proficient).
2. There is a clear difference between each level.

Now, I’ma show y’all how this works for a language class. Here’s our oral interaction rubric (end of year, zero prep, totally 100% spontaneous & unplanned Q&A with a student, Level 2 and up in any language).

What matters? 1. student comprehension of the interlocutor; 2. accurate, comprehensible output; 3. degree of engagement with the interlocutor.

3. I understand everything said. My errors have minimal impact on how understandable I am. I ask and answer questions, and keep conversation going, appropriately.

2. I understand much of what is said with some obvious gaps. My errors occasionally make me impossible to understand. I try to keep conversation going but sometimes have problems adding to/elaborating on what has been said.

1. I don’t understand much of what is said. My errors often make me hard to understand. I have consistent problems keeping the convo going.

This rubric is a 3×3 and generates marks between 3 and 9 out of 9 (ie 33%-100% in roughly 11% intervals). It’s a nice mix of detail, speed, and simplicity. You basically never want a rubric more complex than 3×3 cos it gets too texty for kids to read.
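If you want to see how the marks fall out for any rubric built this way, here is a throwaway sketch (Python; it assumes each criterion is scored 1-3 and the mark is just the total over the maximum):

```python
# Rule-of-Three arithmetic: with N criteria each scored 1 to 3, possible marks
# run from N/(3N) = 33% up to 100%, in steps of 1/(3N).

def possible_marks(num_criteria: int) -> list:
    max_score = 3 * num_criteria
    return [score / max_score for score in range(num_criteria, max_score + 1)]

# Party rubric (2 criteria): 33%, 50%, 67%, 83%, 100% -- steps of ~17%
print([f"{mark:.0%}" for mark in possible_marks(2)])

# Oral interaction rubric (3 criteria): 33% to 100% in steps of ~11%
print([f"{mark:.0%}" for mark in possible_marks(3)])
```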

There you go. Use it if you want it.

Anyway, a few notes to go with this (and with marking writing, or anything else):

A. You can mark via selective sample. Eg, for writing, say your kids pump out 300-word stories (mid Level 1). I’ll bet you dinner and a movie that marking any three sentences will show you their proficiency as accurately as reading the entire thing. Same goes for answering questions about a reading, or listening. Pick a small sample and go. (A tiny sketch of this follows, after note B.)

B. You will generally see marks “clustering.” The kid who understands all the questions/comments in an oral interview will probably also be able to speak well. This is cos most “skills” develop in concert. With our partying rubric, it is likely that Mr Dressed To Kill is also quite sociable, a good and enthusiastic dancer, etc. Yes, there will be the odd kid who understands everything but can’t say much, but this is uncommon.
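Here is the sampling trick from note A as a two-minute sketch (Python; the story is a made-up placeholder):

```python
import random

# Note A above: instead of reading a whole 300-word story, pull a few random
# sentences and mark those. The story below is a made-up placeholder.

story = ("El chico tiene un gato. El gato es muy grande. "
         "Al chico le gusta el gato. El gato come mucho y duerme en la cama.")

sentences = [s.strip() for s in story.split(".") if s.strip()]
sample = random.sample(sentences, k=min(3, len(sentences)))

for sentence in sample:
    print(sentence + ".")
```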

Now would somebody please make rubrics for spontaneous written output and reading comp also? Create & share.

Let’s be DONE with marking questions and focus on what matters: finding cool input for kids, and making our grading quick & simple, so that we can relax after work & show up energised. Remember, one of C.I.’s greatest innovators at one point said that their method was developed partly to improve their golf game. The logic? Well-rested, happy teacher = good teacher 😁😁.

 

 

Can Anyone Teach a Language?

Today somebody asked about a school in Mexico that teaches Spanish via C.I., as the student very much likes her TPRS (etc) class. There were a bunch of responses and suggestions, and then the following.

OK, two points.

First, I haven’t said– or implied– anything like what Karen Rowan says here. No, language teachers are *not* replaceable “by any native speaker.” The teacher (native speaker or not) needs skills. A language teacher needs to be able to go slowly, repeat the words a lot, be interesting and restrict the vocab load. These are important skills.

Sure, everyone is entitled to their opinion. I’m not sure how that is relevant, and my opinion is not what Rowan says here. I generally prefer it when people don’t put words in my mouth. However, in one-on-one situations…

Second, today’s question:
Q: Can anyone teach a language?
A: With basic training, yes!

If you are a native speaker of ____, you need to do the following– in a one-on-one setting; classrooms are different– to teach a language. If you are a student of ____, all you have to do is tell (or get somebody to tell) your teacher to do these.

  1. Base instruction around whatever interests the student. If Johnny gets to China and wants to talk food, have at! When I was in Guatemala, I was interested in politics, food and the lives of migrant workers. Most of the vocab one needs to know– high-frequency– will come up naturally during conversation.
  2.  Limit vocabulary and recycle it. People need to hear the vocab over and over, in slightly varied contexts, to get to the automatic recognition stage of acquisition. Here is an example from a conversation I recall from Xela, Guatemala, in January of 1992:

Teacher: do you like Guatemalan food?
Me: Yes.
T: I like Guatemalan food. I also like Mexican food. I like Salvadorean food a bit. Do you like Mexican food?
M: Yes.  Tacos delicious.
T: Yes, tacos are delicious. In Guatemala, tostadas are like tacos.  They are also delicious.

Later that day, we went to the market and ordered tostadas. You can see here that the teacher used tacos, delicious, are and I/you like over and over.

My teachers were able to circle– repeat in Q&A form– whatever we both wanted to talk about. When we needed new vocab, they said it, I wrote it down, and we used it.  Daily, my understanding of my host family’s Spanish grew. I never in four weeks saw a vocab list, got grammar homework, or got asked to conjugate verbs.

Most of my teachers were Uni students, but not full-time professional teachers.

3. If I had a student going abroad, I would tell them to get the teacher to tell stories, talk with the student about whatever interests them, go slow,  and circle the vocab. I would also suggest they use any apps (eg LingQ, Duolingo) that don’t bore them, and read whatever they find interesting.

This is really all you need to be an effective language teacher in one-on-one situations. If it’s a class, you’ll need a method (TPRS, Movietalk, Picturetalk, Story Listening, reading), plus training in these, plus novels etc, plus general knowledge about assessment, classroom management, etc.

 

 

Oral Assessment Made Easy

If you must assess oral proficiency– which should not be done before the end of Level 2, in my humble opinion– here is the world’s simplest hack. No more interviews, “speaking tasks” and hassles where you sit with one kid and the other 28 are off-ta– err, I mean, doing their Kahoots or whatever.

In a C.I. class, because any feedback other than “pay more attention & ask for help” doesn’t do anything for acquisition, and because testing wastes time by not delivering input, we want to put as little time and effort into Assigning Numberz as possible. We also never want to assign role-plays or “pretend you’re customer and sales associate”-type scenarios, which test memorisation rather than spontaneous communication.

(image: kids reading from scripts)

This is what we do not want.

This is very simple. Starting in the last 1/4 or so of level 2 Spanish, I randomly check on how kids communicate in class.  Do they ask and answer questions?  Do they use complete sentences? Can they initiate and sustain conversation? To what if any extent do their speaking errors interfere with communication?

Every two weeks, I give each kid a score out of three. At the end of the year, I average these and that’s their oral mark. If the kids disagree, they can come in for a formal oral interview. (If they choose this, it is totally unstructured and unplanned ie they cannot prepare and memorise Qs & As. I will ask them to tell me a story including dialogue, and we will basically ask each other questions.) I have to be conscious: some days kids are sick, exhausted etc, so the score has to reflect their overall (ie best possible) proficiency.

This is the first year I have done this and not a single kid complained about their mark.

Here’s the marking rubric. Go ahead and steal it but please acknowledge authorship.  Note that the rubric will generate a mark between four and twelve.

Oral proficiency is evaluated in the last ¼ of Spanish 11. I will note how you communicate in class. If you disagree with your oral mark, please come in for a one-on-one interview. I am available Fridays after 2:30.

When using Spanish, for a score of ___, I
3 (mastery) 
— ask and answer questions in complete sentences.
— initiate and sustain conversations.
— demonstrate understanding of what is being said to me

— have minor errors that do not interfere with meaning

2 (basic proficiency)
— occasionally ask and answer questions in complete sentences
— sometimes initiate and sustain conversations
— generally demonstrate understanding of what is being said to me
— have errors that noticeably interfere with meaning


1 (not yet proficient) 

— use mostly one-word statements
— don’t sustain or initiate conversations
— often don’t clearly understand what is being said to me

— have errors that consistently block meaning

Communicative Pair Activities

Today somebody posted this:

This is a standard problem with this activity,  recommended by textbooks, ACTFL and methods teachers etc, and known as the “Communicative Pair Activity,” or CPA, wherein students interact in the target language to express and get information about themselves and each other.  These activities are also known as “information gap activities.”

Today’s question: are CPAs worth bothering with?

Answer: not really, but sometimes you have to.

There a bunch of reasons why CPAs are not worth doing.  Here they are, in no particular order.

1) The fake & boring factor. Using a second language to ask questions a. to which one probably already knows the answer, and b. which would be much more easily asked and answered in L1, feels fake and contrived. And the level of questions in a typical language class– do you like skateboarding? do you prefer red dresses or pink ones? — is waaay below the cognitive level of most students. Kids want to feed, not starve, their heads. As an adult, I haaaate those stupid “find somebody who…” mixers at social or professional functions.

Jody Noble, as usual, nails it:

2) The policing factor. Because CPAs feel fake, kids find them silly, and won’t do them, which turns the teacher into a cop who patrols for English usage.  Ugh. The smarter kids will use the TL only when the teacher swings by, to keep the teacher off their backs.

3) The linguistic junk food factor. Kids– even the ones who actually want to do CPAs– are learners.  And learners, despite their best intentions, make mistakes.  And Partner A’s mistakes become poor input for Partner B. And vice-versa. Since we learn by processing linguistic input, there is no point in providing poor input to our students. I used to see stuff like the following all the time when I was a skill-builder:

¿Te gusta ver la tele?
— Sí, te gusta ver la tele.  ¿Te gusta ver la tele?
Sí, te gusta.

Is there meaning being exchanged? Yes. Is the language quality? No: the answer should have been me gusta (“I like…”), not te gusta (“you like…”).

Terry Waltz succinctly sums this up: “communicative pair activities are the McDonalds of language teaching.”  And Bill VanPatten writes that “to the extent that output activities ask learners to produce what they are trying to acquire, they put the cart before the horse.”

4) The inefficiency factor. When we include off-task time (a lot), poor input (frequent), and a lot of time setting up, policing and then debriefing CPAs, there is not actually a whole lot of communication going on per unit of classroom time.

I rough-calculated this.  Years ago, when I used the ¡Juntos! program, there would be a CPA such as ask your partner and have them ask you if they like the following sports, and kids would have to ask eight questions: do you like basketball? Do you like hockey? etc.

For an 8-question Q&A like that, I would give kids 5 minutes. In 5 minutes– if they were actually focused on the activity– they would hear do you like ___? and I like ____ sixteen times.

How many repetitions of I like…do you like…? can I get using a basic C.I. technique such as asking an actor questions and having them ask me (a parallel character in the story) questions? I timed myself asking a student these, and in thirty seconds I got 8 repetitions of do you like…? I like… In one minute, I would get 16 repetitions, and in five minutes, I would get eighty.  Now, obviously, I’m not spending five minutes asking the actor the same question, but the point stands: focused, teacher-provided input is massively more efficient than communicative pair work.
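Putting rough numbers on that comparison (a sketch only; the rates are the estimates from the last two paragraphs, not measured data):

```python
# Rough repetition-rate comparison from the estimates above: a pair activity
# yields ~16 exposures to the target structure in ~5 minutes, while teacher-led
# questioning yielded ~8 repetitions in 30 seconds when I timed myself.

minutes = 5

cpa_reps = 16                          # per ~5-minute pair activity
teacher_reps_per_min = 8 / 0.5         # 8 reps in 30 seconds
teacher_reps = teacher_reps_per_min * minutes

print(f"pair activity:    ~{cpa_reps} reps in {minutes} min")
print(f"teacher-led C.I.: ~{teacher_reps:.0f} reps in {minutes} min")
print(f"roughly {teacher_reps / cpa_reps:.0f}x more target-structure input per minute")
```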

I can also ensure the output is accurate, and that the class is listening (and getting TL input, and not English).  The teacher, especially in a TPRS or other story classroom, can get students to focus by ensuring that the exchange is memorable: instead of asking tedious obvious questions such as do you like hamburgers?, the teacher can ask the actor do you like fighting dragons or knights? or do you like dancing with Ryan Gosling or with Post Malone? Finally, the teacher can ensure that the language used is actually understood– comprehended, as Terry Waltz puts it.

As one person recently posted on the Facebook group CI FIGHT CLUB, “CPAs as a student always made me feel like language class was insulting my intelligence.”

Finally, remember this:  people do not need to speak a language in order to learn to speak it. You do not need to “make kids talk” to teach them to talk.  If they hear enough comprehensible input, and it’s repeated enough, they will first understand, and then later, be able to speak.

So go ahead: model dialogue, interrogate your actors (or your students during PQA, or persona especial), talk about yourself, whatever…pretty much anything is going to provide more, better and more interesting input than communicative pair activities.

CAVEAT MAGISTER:

There are occasionally reasons to do CPAs. As Mike Peto reminds us, if you have ten minutes to spare, reading is a much better use of time than a CPA, but but but…

a. you may have to do CPAs (ie your Defartment Headz might be in a position to dictate what you do in your class, and your job might ride on giving Headz what they want).

b. your Adminz might believe talking is how people acquire languagezzz. If you are getting observed, a CPA or two– after which you call on your two biggest egg-heads to eg. “model successful completion of learning objective”– will satisfy people with boxes to tick.