Bad Ideas

Losing With Word Games

It’s January 2022 and Wordle— also in German, French and Spanish— has become the ninth stage of COVID. And to nobody’s surprise, Wordle has gotten some good Twitter press from language teachers who advocate for its use. This happens every few years: a word game shows up, and people love it.

Word games– Hangman, Wordle, crossword puzzles, word searches, acrostics and so forth– share common threads: they use fine visual perception, logic and target-language knowledge to find words.
So, today’s question: Should I use word games in my language classroom?

My answer: Generally, no. And why not?

Well, first principles: language is acquired only by processing comprehended input in a communicative context. And a communicative context is a situation where meaning is created, negotiated and/or exchanged for a given purpose. Meaning is something non-linguistic: enjoying a story, gathering information, evaluating information, etc.

So, what are the problems with word games?

First, you have to know the word you are looking for. For example, in Hangman or Wordle, we might get to this: _ R _ L L. If you have lots of English, you will make some guesses such as troll, droll, trill, drill and so on. If you are a learner of English, you will be blindly throwing letters in there, hoping for a hit, and if you get it, you probably won’t know the word’s meaning.

Second, you are not processing meaning with these games. You can find words in a word search, Hangman game or Wordle simply by using logic, visual recognition and guesswork. When Wordle tells you that your _ R _ L L guess, DRILL, is correct, yaaay! You won, and you don’t have to know what “drill” means.
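To see how little meaning is involved, here is a minimal sketch (mine, not from any real Wordle solver) of exactly that letter-logic: it “solves” the pattern above with zero knowledge of what any word means. The tiny word list is illustrative; a real solver would load a full dictionary file.

```python
import re

# Illustrative mini-dictionary; a real solver would load a full word list.
WORDS = ["troll", "droll", "trill", "drill", "grill", "dwell", "house"]

def candidates(pattern, words):
    """Return words matching a Hangman/Wordle-style pattern ('_' = unknown letter)."""
    regex = re.compile("^" + pattern.replace("_", ".") + "$")
    return [w for w in words if regex.match(w)]

print(candidates("_r_ll", WORDS))
# ['troll', 'droll', 'trill', 'drill', 'grill'] -- five hits, no meaning processed
```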

Third, Wordle, Hangman and acrostics are hard in additional languages. I can solve any English Wordle in three lines. Spanish, French and German Wordles completely kick my ass…and I have way more of those languages than do most learners in high school or college.

Textbook publishers sell the word-game parts of their books & workbooks by arguing that e.g. “trying to remember French words will help kids acquire them.” Now, there is research from conscious-learning domains which says something like: if you practice recalling something, you will remember it better (this is why e.g. flashcards work). But this is not true for language acquisition. The language version of this is: the more often you process a word in a communicative context (i.e. hear or read it), the more likely you are to remember it.

Word searches are especially stupid: if you can see the word, you circle it. Again, you can do this without attending to meaning. I’m reminded of Sudoku. When I saw my first Sudoku, figuring out what to do (basically, if X is here, then Y cannot be, rinse and repeat) was interesting. Actually doing a Sudoku involves almost zero brain: follow the procedure and you get there. Basically, if a computer can generate it, it’s boring to do.

If you want to play games in the TL, here are two suggestions which require zero prep, are fun, and involve processing meaning.

1. Grab the pen. After you read/create a story, or do anything, get the kids in pairs, put a pen between the members of each pair, and say aloud either a true or a false TL statement about your reading, story, etc. If the statement is true, they race to grab the pen. They get a point for grabbing the pen first, but they lose a point if they grab the pen when the statement is false. This game seems ridiculous, but kids love it.

2. Who Am I Describing? Divide the class into 2-6 teams. Make a TL statement about anyone in the class, or any character in the story, or somebody famous, etc. E.g.: this girl rides a motorcycle, or this boy really likes ballet. The first person who puts up their hand and says you are describing ____ gets a point for their team. You can make this simple– I have played this on Day One after our first story– or complex, by e.g. lying about people.

What If I’m Stuck With the Text?

You went to IFLT or NTPRS, or you got a Terry Waltz or Wade Blevins workshop, where, seemingly magically, in 90 minutes you became able to understand and tell a simple story in Chinese or German or Cherokee.  You ran into Stephen Krashen at the Starbucks at the conference and bought him one of his trademark gigantic lattes. You’re all hopped up on BVP.  And your Blaine Ray workshop included a set of Look, I Can Talk! and man, is September ever looking fun!

And then it’s mid-August and the email comes from your Head, who says at the first defartment meeting, we will be discussing the grammar and vocabulary target piece across grade levels and classrooms, to ensure that all students in all classes have the same learning opportunities for standards-aligned assessment bla bla bla and suddenly you know exactly how Cinderella felt at 12:01 AM.  Because what the Head is saying is, we are all going to follow the textbook and have students write the same exam.  They might have gussied this up into fancier language, by saying “integrated performance assessment” instead of “unit,” “structures” instead of “vocabulary,” and “proficiency” instead of “marks” or whatever.  To you, however, it’s all lipstick on a pig.

Yes, this totally sucks, because as researcher Michael Long reminds us, the idea that “what we teach is what they learn, and when we teach it is when they learn it” is not just simplistic, it is wrong.  Language acquisition cannot be scheduled by teacher or text or testing requirements.  BUT…you are still stuck with District/Adminz/Headz who want everybody on the same team and so you are stuck with the text.  Preterite tense mastered by November!  Clothing unit assessment in January!  Numbers mastered on the 13th day!

Anyway, here in no real order are a few ideas about Dealing With The Text.  There are a few basic things that have to happen (other than you keeping your job): educating colleagues, actually effectively teaching a language, keeping the ignorami off your back, and getting kids through the test.

1. Colleagues have to be educated about what actually happens during S.L.A. and what actually works.  Bill VanPatten said these exact words to Eric Herman in 2016.  So, how?  Well…

a. Results.  Nothing, ever, will trump 5-min timed writes and story writes.  If you show up at a dept meeting with crusher results, especially from “weaker” students, and/or from students who do not use notes or dictionaries during writing, the resident dinosaurs are going to have a very hard time arguing against C.I.  The C.I. kids will write more, and more fluently, and more interestingly.  Blaine Ray says as much.  Kids who get good C.I. in whatever form (targeted, untargeted, OWIs, stories, Movietalk, Picturetalk, reading) will in the long run outperform grammar kids.  Your colleagues who actually care about kids (as opposed to their own comfort, or keeping their workload low) will notice.
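If you want numbers for that meeting, here is a trivial sketch (mine, not anything from the workshops named above) for quantifying timed writes as word counts and words per minute, e.g. to chart growth over a term. The sample sentence is a placeholder.

```python
# A trivial sketch (not from the post): quantify a 5-minute timed write
# as a word count and a words-per-minute rate.
def timed_write_stats(text, minutes=5):
    words = len(text.split())
    return {"words": words, "wpm": round(words / minutes, 1)}

sample = "Hay un chico que se llama Juan. Juan quiere un perro azul."
print(timed_write_stats(sample))  # {'words': 12, 'wpm': 2.4}
```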

b. Bridge building.  The apparent weirdness (to a grammarian and/or textbook teacher) of comprehension-based instruction can be off-putting.  So show them good C.I. that they can do with the text, what I have called the “six bridges.” In my dept., most of my colleagues don’t do or “believe” in C.I.  But my department head likes Movietalk, Picturetalk and novel and story reading.  Some C.I. beats none.

Personal note: you can lead a horse to water, but… It is important to try to show people that (and, later, how) C.I. works, but a best-case scenario is that many listen, a few try, and fewer than that stick with C.I.  In my experience (and I have learned this the hard way), the most important thing is keeping doors open. If you have results, are nice, are open to talk…people will at least listen.

c. Assessment straight-talk.  Sarah Cottrell makes this point: if every teacher has to give the same test at the end of the year (or whatever), the process of deciding on that test (format, material, etc.) should be transparent.  The only things I can say here are that a. the ACTFL guidelines are your friend: they do not say that grammar testing, verb tables etc are valid (or useful) assessments of students’ abilities.  b. whatever testing is done should primarily involve processing of meaningful whole language and spontaneous production of language.  Reading or listening to meaningful things, like stories and situationally-clear dialogues, and writing meaningful things (ditto), are useful; fill-in-the-blanks, verb tables, etc. are not.  And whatever students are tested on should have been taught: no “authentic resource decoding” gotcha!-style surprises. c. State/provincial standards are your friends.  No State or Provincial standard includes “fill in the blanks” as a communicative objective.

If the department/District/whatever decides on (say) a list of nouns and verbs or verb tenses or whatever, best practice will be to not assess these on a schedule.  There is not too much harm being done by asking that, say, all French 2s will know the passé composé, but this should be an end-of-year goal, rather than “by Unit 3 in November, students will ______.” We know acquisition is piecemeal and, as Bill VanPatten says, “unaffected by instructional intervention,” so it is important to provide a lot of varied input of vocab, grammar, “rules” etc over a looong time so kids can maximise their chances of picking it up.

2. For the textbook itself: rearrange the order, ditch low-frequency vocabulary, and build simple routines to master boring stuff.  Here is how:

a. Every text I have ever seen thinks weather, numbers, hellos, goodbyes, colours, location words etc matter.  If you must “cover” these, try this, and let your Dept Head/Admin know: I am doing this, but not in “unit” form, and here is how.  For example, the Spanish textbook Avancemos Uno puts all of this into the Lección preliminar…just spread it out over the year. This is something even a textbook teacher can get behind: less vocab? Yes please!

b. For low-frequency vocab (especially in programs organised around thematic/topical “units”), ditch the non-essential stuff.  Again, in Avancemos Uno Unidad 1 Lección 1, some things are not worth spending time on (e.g. descansar, andar en patineta: to rest, to skateboard), because they are low-frequency vocabulary (not in the top 1,000 most-used words).  We are always better off spending more time on less vocab than less time on more vocab (and, as Susan Gross said, shelter vocabulary, not grammar).

c. The daily opening routine is amazing prep for the kids in languages like Spanish where verb tenses are an issue.  One verb form per day = they will have a solid understanding by the end of the year.

(How) Should I Use Questions to Assess Reading?

Yesterday I found a kid in my English class copying this from her neighbour.  It is a post-reading assessment– in Q&A form– for the novel Les yeux de Carmen. TPT is full of things like this, as are teachers’ guides, workbooks, etc.

The idea here is, read, then show your understanding of the novel by answering various questions about it. It “works” as a way to get learners to re-read, and as what Adminz like to call “the accountability piece,” ie, “the reason to do it is cos it’s for marks.”

Before I get into today’s post, I should note that I (and every teacher I know) use some kind of post-reading activity.

Q: Should I use questions to assess reading?

A: Probably not. Here’s why.

  1. How do we mark it? What if the answer is right, but the French is poor? Or the reverse? Half a mark each? Do we want complete sentences? What qualifies as acceptable and not for writing purposes? What if there is more than one answer? What’s the rubric we use for marking?
  2. It can (and, basically, should) be copied. This is the kind of thing that a teacher would send home to get kids to re-read the novel. Fine, but…it’s boring, it takes a long time, and it doesn’t use much brain power. If I were a student, I would copy this off my neighbour: you save a bunch of time, and the teacher has no way of noticing.
  3. It would totally suck to mark this. Do you actually want to read 30– or 60!— of these?!? I dunno about you folks, but I have a life. We have to mark, obviously, but these, ugh, I’d fall asleep.
  4. It’s a lot of work for few returns. I asked the kid who’d lent her answers to her friend how long it took (btw, there is one more page I didn’t copy), and she said “about 45 min.” This is a lot of time where very little input is happening.  The activity should either be shorter, or should involve reading another story. As Beniko Mason, Stephen Krashen and Jeff McQuillan (aka The Backseat Linguist) show us, input is more efficient than input plus activities (ie, instead of questions about a story, read another story).  As the great Latinist James Hosler once remarked, “for me, assessment is just another excuse to deliver input.”

So…how should we assess reading? Here are a bunch of ideas, none of them mine, that work.

A. Read the text, and make it into a comic. Easy, fun, useful for your classroom library and requires a bit of creativity.

B. Do some smash doodles. This is basically a comic, but minus any writing. As usual, Martina Bex has killer ideas.

C. Do a discourse scramble activity. For these, take 5-10 sentences from the text, and print them out of order (e.g. a sentence from the end of the text near the beginning, etc). Students have to sort them into the correct order, then translate them into L1. This is fairly easy– and even easier if a student has done the reading, heh heh– and it requires re-reading without requiring output. (A quick prep sketch follows below.)

Another variant on a discourse scramble: have students copy the sentences down in order and then illustrate them.

For C, they get one mark per correct translation (or accurate pic), and one mark for each sentence in its proper place. Discourse scramble answers can be copied, so I get kids to do them in class.  They are also due day-of, because if kids take them home others will copy.
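Prep for a discourse scramble is easy to automate. Here is a minimal sketch (mine, with placeholder sentences, not from any published activity): it shuffles the sentences for the handout and keeps an answer key for marking.

```python
import random

# Placeholder sentences; in practice, pull 5-10 sentences from the actual text.
sentences = [
    "Un chico quiere un gato.",
    "Va a la tienda.",
    "No hay gatos en la tienda.",
    "El chico está triste.",
    "Por fin encuentra un gato en la calle.",
]

# Shuffle a copy for the handout; the original order is the answer key.
scrambled = sentences[:]
random.shuffle(scrambled)

for n, s in enumerate(scrambled, 1):
    print(f"{n}. {s}")

print("\nAnswer key:", [sentences.index(s) + 1 for s in scrambled])
```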

D. If you have kids with written output issues, you can always just interview them informally: stop at their desk or have them come to you and ask them some questions (L1, or simple L2) about the text.

Alrighty! Go forth and assess reading mercifully :-).

ACTFL: Almost There!

The American Council on the Teaching of Foreign Languages provides American teachers with a set of recommended “core practices.”  Unfortunately, ACTFL hasn’t done much reading of the science (or talking with successful teachers) in forming these guidelines.

Today’s question:  are ACTFL’s core practices best practice?

Answer: Sometimes.

[image: ACTFL’s list of core practices]

First, ACTFL’s suggestion that teachers “facilitate target language comprehensibility” is solid.  No arguments from science or good language teachers.  Now, the rest…

  1. The use of “authentic resources” is, well, problematic.  As I have discussed, an awful lot of #authres use low-frequency vocabulary, and they don’t repeat it very much.  Yes, you can “scaffold” their “use” by frontloading vocab, removing vocab, etc.  Which raises the question: why bother using #authres at all?  Why not just start with something that is actually comprehensible?  Want to teach culture?  Picturetalk and Movietalk work well.  Music?  Great, because if it’s good, people will listen to it over and over (and maybe focus on the lyrics), but expect a load of slang and other low-frequency vocab.

    In terms of acquisition bang-per-buck, or gains per unit of time, nothing beats a diet of comprehensible input.

  2. That teachers should “design oral communication tasks” for students is not the best idea.  Learner-to-learner communication in the target language

    a. is a difficult thing on which to keep students (especially adolescents) focused.  Kids often think: why use the TL to discuss something when L1 is quicker and easier?  In my experience, for every three minutes of class time students get for “talking practice,” you might get thirty seconds of actual “practice,” and then L1, Snapchat etc take over.  In a full C.I. class, you have a lot more time where students are focusing on interpreting the target language.

    b. will feature poor learner L2 use becoming poor L2 input for other students, which is not optimal practice.  As Terry Waltz has noted, “peer to peer communication is the McDonalds of language teaching.”

    c. lowers the “richness” of input: a teacher (or a good book) provides richer and more complex input than learners can provide for each other.

  3. Planning with a “backward design model”– i.e. having specific plans for specific goals– is something we might have to do in some Districts, where there are imposed exams with vocab lists and so forth.  Much better practice is to simply let student interests– and frequency lists– guide what is taught.  Student interests– self-selected reading; story co-creation and activities using vocabulary from student stories– will by definition be compelling, and high-frequency vocabulary will by definition be most useful.  The only meaningful primary goals in a second-language classroom are 1. that students be able to easily demonstrate comprehension of a LOT of the target language and 2. that students read and listen to a lot of the target language (in comprehended form).  If this is accomplished, everything else– the ability to speak and write– inevitably follows.  Planning anything else– S.W.B.A.T. discuss ______; S.W.B.A.T. write ______– gives instruction an unproductive interest-narrowing and skill-practicing focus.

    It is also well worth thinking about the ideal “end state” or goal of language teaching.  I agree with Krashen: we are here to get people to the point where they can continue to acquire on their own.  If they automatically recognise a ton of high-frequency vocabulary (which will by definition include most grammar “rules”), they will understand a lot and be able to “slot in” new vocab. And most importantly, when they get to France or Mexico or China or Blablabia, input will ramp up so much that spoken French, Spanish, Chinese and Blablabian will emerge on their own.

  4.  “Teach grammar as concept and use in context”– not bad.  ACTFL here notes that meaning comes first, yaaay.  Should we “teach grammar”? Other than explaining meaning, no: conscious knowledge about language does nothing to develop competence with language. Although if students ask why do we _______ in Blablabian, a ten-second “grammar commercial” won’t hurt.
  5. “Provide oral feedback” is a terrible idea. Why?

    a. Anything we address to explicit awareness does not enter into implicit memory.  If Johnny says yo gusto chicas, and we say no, it should be me gustan chicas, he might be able to remember this for the eight-second auditory window, and maybe even repeat it after us. But if Johnny is merely listening and repeating, he is not processing for meaning, which is how language is acquired.

    b. Oral correction makes Johnny embarrassed– it raises his affective filter– and this is both uncomfortable and unproductive for him.


Anyway, we are getting there.  ACTFL puts C.I. front and center; as we C.I. practitioners continue to show just how well C.I. works, hopefully ACTFL will eventually ditch its old-school recommendations.

Don’t Do This

One C.I.-using American colleague recently shared this section from a Spanish test which their defartment head gave their Spanish class, viz:

[image: test section asking students to rearrange scrambled words into sentences]

How dumb is this?  Let us count the ways:

  1. Unclear instructions.  Are we supposed to rearrange the words in the sentences, or the sentences themselves, or both?
  2. Some of these have more than one possible answer (i.e. more than one way to rearrange the words).  E.g. item c. could be vivir juntos no es fácil or no vivir juntos es fácil.
  3. What does this have to do with actual Spanish that people actually speak or write?  Nothing.
  4. I have never seen a language curriculum that says students will be able to take scrambled words and turn them into sentences.
  5. I’m not sure what they are assessing here.  It’s not comprehension of actual Spanish, since nobody speaks or writes like that.  It’s not output, since students aren’t generating language.


This reminds me of those high-school math problems that felt like this:  Suzie is twice as old as Baninder.  When Baninder is twice as old as John, John will be three times as old as Suzie.  How old will Suzie’s dog be on Thursday when Baninder is four? 😉

This is basically a gotcha! question for the grammar geeks.  Yes, you could figure it out, but why bother?


Ben comes out swinging: homework, and some generalisations 

Well, Ben Slavic never pulls his punches (which is what I love about him) as you can see below.  Ben does implicitly raise a question, however: is it worth tarring all one’s colleagues with the same brush?  See Ben’s post, and my comments.

Well, my dear Mr Slavic, I would respectfully suggest that there is waaaaay more to the homework question than this.  So, Ben, what about these points?

What about teachers who have to give homework? Required in some places.  Are all these teachers mean, afraid, in need of approval, boring, or incompetent?  Generalise much?  It is a much better idea to look at a specific practice than something like “homework” which is so vague it could mean almost anything.

What about good homework? Things that I send home with kids– making simplified cartoons from asked stories, or Textivate sequences, or translations of short passages from L2 into L1– all deliver good C.I., are easy, and do not take much time.  I tell my kids, budget 15 min/week for Spanish homework.  Hey Ben, do you think my homework is mean, or coming from fear, boring, or pointless?

What about finishing up class work? My policy– in all classes except English, where there is simply not enough time to read novels in class– is, if you don’t get it done in class, it’s homework.  Would you recommend something else, Ben?

Your kids “don’t do it anyway.” Why?  Was the homework pointless, too much, too hard, infantile, or what?  Does what works (or not) with your kids apply to me and mine? 90% of my kids will do my homework if it’s not unreasonable.

Homework “seems insulting.” I’ve never heard or felt this from kids.  I have heard, it’s too much/hard/boring though. The reality in schools, with languages, is that most students  do not get enough exposure to the language (comprehensibly) in class, even with great teachers, to get anywhere near mastery in 2-4 years.  A bit of enjoyable and not too difficult reading or listening outside of  class is going to do what all comprehensible input does: boost acquisition.  How we mark hwk etc will vary across contexts, but the “insulting” tag seems, well, pointless and unclear.

Homework “is a national sickness.”  It would be much more accurate to say, stupid homework is a national sickness.  And by stupid homework, I mean more or less what Alfie Kohn means: things that do any of “building work habits,” or which unnecessarily repeat what was done in class, or which don’t work (in our world, grammar stuff etc), or which cut into family/leisure or personal interest or sports time, etc.

I don’t make decisions for my kids based on other people’s dumb ideas…I make them based on what’s going to help my kids pick up Spanish.

Anyway, my dear sh*t-disturbing Ben, you haven’t offended me.  But then, I don’t speak for everyone.


Does iPad “talking practice” boost oral fluency? A look at Schenker & Kraemer (2017).


In a 2017 paper, Schenker and Kraemer argue that iPad use helps develop oral fluency. Specifically, they found that after “speaking practice,” iPad users were able to say more in German, and were more fluent– rapid and seamless– in saying it, than controls who had not “practiced” speaking.
So, prima facie, the authors can claim that focused speaking practice helps develop fluency. 

Q: Does this claim hold up?

A: Not according to their evidence. 

Let’s start with the method. Kraemer and Schenker took English L1 students of second-year German, divided them into two groups, and gave one batch iPads. The iPad group had to use Adobe Voice to record three tasks per week, which had to be posted to a group blog. In addition, each iPad user had to respond verbally to some other students’ posted responses to the tasks. 

The tasks included things such as “describe your room” and “recommend a movie to a friend.”

The control group did nothing outside class other than their usual homework, and the iPad group had their other homework (which the authors do not detail, but describe as work involving “vocabulary and grammar knowledge”) slightly reduced in quantity. 

In terms of results, the iPad group during oral testing on average said more, and was more fluent (using language “seamlessly”) than the control.  The authors thereby claim that “practice speaking” boosted oral competence. 

However, there are a number of study design flaws which render the authors’ conclusions problematic.

First, the study compares apples and oranges. The speaking group practised, well, speaking, while the controls did not. The speaking group had more time with German (class, plus speaking, plus doing whatever they did to prepare their recordings, plus listening and responding to others’ posted task responses) than did the controls (class, plus “vocabulary and grammar” hwk). The speaking group had more time doing speaking as well as more total German time than the controls. 

This is akin to studying physical fitness by comparing people who work out with those who are couch potatoes, or by comparing people who do two hours a week of working out with those who do four. 

Second, the study does not compare speaking-development-focused methods. One group “practiced speaking,” while the other did “vocabulary and grammar” homework. This is like comparing strength gains between a group of people who only run two hours a week and another group that runs two hours a week and lifts weights. Yes, both will get fitter, and both will be able to lift more weights and run a bit faster (overall fitness provides some strength gains, and vice-versa).

However, what should have been compared here are different ways of developing oral fluency. (We should note that fluency first requires broad comprehension, because you cannot respond to what you don’t understand). 

We could develop oral fluency by 

• listening to various kinds of target-language input (stories, conversations, news etc). 

• watching target-language, L1-subtitled film. 

• reading (it boosts vocabulary). 

Schenker and Kraemer’s “practice speaking” will help (at least in the short term). One could also in theory mix all of these, as a typical class does.

Schenker and Kraemer, however, compare one approach to developing speaking with an approach that does nothing at all to address speaking. 

A more persuasive study design would have had three groups: a control, and two different “speaking development” groups.  The “speaking development” conditions could have compared Schenker & Kraemer’s “practice talking” with, say, listening to speech, or reading, or watching subtitled film (or a mix).  One group would spend 60 min per week recording German (and listening to 50-75 second German recordings made by their peers). The other would spend 60 min per week, say, listening to German. At the end, control, speakers and listeners would be tested and compared.

Third, the study does not control for the role of aural (or other) input. For one thing, the iPad group had to come up with their ideas somewhere. Since relatively novice learners by definition cannot come up with much on their own, they must have gotten language from somewhere (Kraemer and Schenker do not discuss what the students did before recording their German). My guess is, the speakers used dictionaries, Google Translate, reading, grammar charts, things they heard on Youtube, anything they remembered/wrote down from class, possibly Duolingo etc, to “figure out” what to say and how to say it. If you were recording work, being marked on it, and having it responded to by strangers, you would surely make it sound as good as you could…and that (in a language class) could only mean getting extra input.  So did the speaking group get better at speaking because they “practiced speaking,” because they (probably) got help pre-recording, or both?

Which leads us to the next problem, namely, that the iPad group got aural input which the control group did not. Recall that the iPad group not only had to post their recordings, they also had to listen and respond to these recordings. So, again, did the iPad group get better because they talked, or because they also listened to others’ recordings of German?

Finally, there was no delayed post-test to see if the results “stuck.”  Even if the design had shown the effectiveness of speaking “practice” (which in my view it did not), no delayed post-test = no real results.

The upshot is this: the iPad group got more input, spent more time listening, spent more total time with German, and spent more time preparing, than did the controls. This looks (to me) like a problematic study design. Ideally, both groups would have had the same input, the same amount of listening, etc, with the only difference being that the iPad group recorded their tasks. 

Anyway, the skill-builders’ quest continues for the Holy Grail of evidence that talking, in and of itself, helps us learn to talk. 

The implications for classroom teachers are (in my view) that this is waaaay too much work for too few results. The teacher has to set the tasks (and the blog, iPad apps, etc) up, then check to make sure students are doing the work, and then test them. Sounds like a lot of work! 

Better practice– if one feels one must assign homework– would be to have students listen to a story, or watch a video in the T.L., and answer some basic questions about that. This way people are focused on processing input, which the research clearly says drives acquisition. 

On a personal note, I’m too lazy to plan and assess this sort of thing. My homework is whatever we don’t get done in class, and always involves reading. 

Should I Mark Behaviour? The Great JGR Debate, and a Silver Lining for Behaviour Rubrics.

Some years ago, Jen S. and her colleagues built a behaviour rubric for her C.I. classes, which Ben Slavic named JGR and which was discussed on his blog and then elsewhere.  Here is a version I have played around with: INTERPERSONAL COMMUNICATION rubric.  I initially opposed the use of JGR, then used it, then ditched it, and now I use it (but not for marks). Note: this is a modified version of the original JGR; and I don’t know for how long she used her rubric, or if she still does, or what the original looked like.

JGR was developed because– like all of us, especially me– the creator had some challenges managing her C.I. classes in her initial year with T.P.R.S., which can in (especially my) rookie hands turn into a “woo-hoo no more textbook!” clown show.  JGR basically “marks” classroom behaviour.  JGR specifies that students make eye contact, add story details, ask for help, not blurt, not use cell-phones etc.  Jen used it (and if memory serves Ben also recommended its use) by making part of her class mark a function of behaviour as marked by JGR.  So the kids might get, say, 20% of their mark each for reading, writing, listening, speaking and 20% for their in-class behaviour.  Part of the thinking here was that some behaviours lead to acquisition, while others do not and also wreck the classroom environment, and so “acquisition-rewarding” behaviour should be rewarded.

JGR– for many people, including me– “works.”  Which is why– especially when linked with allegedly “acquisition-promoting” behaviours– lots of people are interested in it.

JGR is a kind of “carrot-and-stick” marking tool:  if the kids engaged in the behaviours JGR specified, their marks went up, partly because (a) they got marks for those behaviours, and partly because (b) the behaviours should– in theory– help them acquire more language.

This can of worms was shaken around a bit on Ben’s blog, and recently, thanks to the always-remarkable Terry Waltz, there have been FB and Yahoo discussions about it.  So, today’s question:

Should we assess in-class behaviour for final marks purposes?

My answer: no, never.  Why?

1. Behaviours typically asked for in JGR– or other such rubrics– are not part of any curricula of which I am aware.  Every language curriculum says something like: students of the Blablabian language will read, write, speak and understand spoken Blablabian, and maybe say something about Blablabian culture.  Nowhere does any curriculum say “students should suggest details for stories” or “students will look the teacher in the eye.”

If it’s going to get a mark, it has to be part of course outcomes.  Any assessment guru (Wormelli, Harlen, etc) will tell you the same thing: we do not mark attitude, behaviour, homework, etc, as these are not part of final outcomes.

To put it another way, how do we judge the New England Patriots football team?  By how well, often and/or enthusiastically they practice and look Bill Belichick in the eye, or by how many games they win?  How should Tom Brady be paid: by how often he shows up for practice, and how nice he is to Belichick, or by how many yards he successfully throws?  That’s right.

We could– and I often do– end up in situations where a “bad” kid does well, or a “good” kid does poorly.  I have had bright-eyed, bushy-tailed teacher’s pet-type kids who were not especially good at Spanish, and I have had giant pains-in-the-butt who were quite good.

My best-ever student in TPRS, Hamid Hamid, never added story details, never looked up, and always faced away from the board.  Yet he CRUSHED on assessments and got 100% in Spanish 2.  Two years later, his younger brother, Fahim (also a great student) told me that Hamid Hamid was both shy and deaf in his left ear, so always “pointed” his right ear at the board (and so appeared to be looking away).  This kid’s mark would have been lowered by assessing his “in-class behaviour,” which– given his epic Spanish skills– would have been absurd.

2. As Terry Waltz points out, neurodivergent kids can– and do– acquire language without engaging in many behaviours typically required by participation and behaviour rubrics. She also points out that forcing neurodivergent kids into the “normal” mold is at best less than productive. If you are autistic, anxious, suffering from PTSD (as my stepdaughter does) or facing any other neuro challenges, “engagement” rubrics can make your life miserable while not meaningfully measuring what you can do with the language.

3. The only thing required for language acquisition is reception of comprehensible input.  While behaviour rubrics are designed to get kids to tune in, it does not follow that the behaviours which make for a good class– e.g. people adding good details to stories, looking at each other– are necessary for acquiring language.

All of us have been there: you have a plan, you did your story warmup or whatever, but the kids aren’t into it.  You bust out a Movietalk but they aren’t into that either.  Dead class. Now, in a C.I. class, we don’t have recourse to worksheets or whatever, and we still have to teach the language. I have a bail-out move here: direct translation, and I always have a novel on the go, so I can read aloud, and Q&A the novel.  If I’m being particularly non-compelling, I’ll throw an exit quiz at them.

The point: if the kids are getting C.I., they are acquiring.  If they are miserable/tired/bored with stories, fine.  They are gonna get C.I. one way or another.

4. Any kind of behaviour rubric plays the awful “rewards” game.  Ask yourself this question:  why do I teach? The answer– other than because I have to make a living— is probably something like, because it’s interesting, I have some measure of control over my work, and I love kids and my subject.  Some will add that teaching, properly done, opens doors for kids.  Teachers do not teach because they want to be evaluated, or because they want to use the latest gizmo, or because they want public praise, etc.  They are, in other words, intrinsically motivated.  They want to work because the work is good and worthy in itself.

When we institute rewards for behaviours, as Alfie Kohn has spent a career arguing, we destroy intrinsic motivation.  We turn something interesting into payment for marks.  The point stops being paying attention to the story– or adding to it cos you actually care about it– and becomes something rote.

5. Using behaviour rubrics can dampen professional self-examination. If my practice is such that I have to use marks as a stick to keep kids in line (the policing metaphor is not an accident), there are two possibilities: tough kids, and/or I am doing a bad job.  The question why are they not tuned in? might be answerable with any of the following:

— I am not being sufficiently comprehensible

— I am ignoring the top or the bottom end of the class– too fast/slow or simple/complex

— my activities are not interesting, varied or meaningful enough

— the kids see no purpose

— accountability: they don’t see tuning in as something that results in real gains

— I lack basic skills (smart circling, control of vocab, etc etc)

— my story sucks 😉

I had better be able to look in the mirror, consider and then deal with these possibilities, rather than merely acting like a cop and demanding obedience.

Now, behaviour does matter.  You cannot run a T.P.R.S. class without rules etc.  My basic rules:

  • no phones or other distractions (including side-talk, blurting etc)
  • no insults of anyone other than oneself or of rich entitled people
  • listen, watch and read with the intent to understand; ask when you don’t
  • do not create or engage in distractions

The tools that we have for dealing with distracting behaviour include

  • warning the offender, standing by their desk, calling Mom and Dad, etc
  • pointing, with a smile, to classroom rules every time there is a problem
  • sending them to admin if necessary
  • taking their phone until 3:15 (most kids would rather die)
  • detention, where we discuss behaviour
  • assigning read & translate (quiet seatwork)
  • taking the kids outside for a walk, or doing some other kind of physical brain-break
  • changing activities
  • doing a quiz
  • talking to kids one on one and asking what do I need to do to get you focused?


The upshot?  We should not, and need not, mark “behaviour” or “participation.”


Addendum:  is there ever a use for classroom behaviour rubrics?

Yes.  I get my kids to self-evaluate using JGR every 2-3 weeks.  My version generates a mark out of 20.

Nineteen out of twenty kids will very honestly self-evaluate their behaviour, provided they understand exactly what is expected.  One kid in twenty will heap false praise on him/herself.  For the false praisers (“I never blurt in class!”), I sit them down and explain what I think, then we agree on a more realistic mark.

I save these JGR “marks” and once in a blue moon, when a helicopter parent or an Admin wants to know, how is Baninder doing in Spanish, I can point to both the spreadsheet with Numberz and JGR.  This frames the inevitable discussion about marks in terms any parent can understand.  Any parent, from any culture, understands that if Johnny screws around and/or does not pay attention in class, his mark will drop.

JGR– in my experience– accurately “predicts” the marks of about 80% of kids.  When I can show a kid (or a parent or admin), look, here are Johnny’s marks AND Johnny’s own description of how he behaves in class, we can have an honest discussion about marks, Spanish, etc.  Win-win.

Tech Done Wrong…and Right.

“Techmology,” as Ali G. says, “is everywhere,” and we feel forced to use it.  E-learning! i-Tech! Online portfoli-oli-olios! Quizkamodo!  Boogle!  Anyway, the litmus test for tech in the language classroom is the same as it is for anything else:  does it deliver compelling, vocab-restricted comprehensible input?

Today, a look at two ways to play with tech.

Here is a recent #langchat tweet from a new-waveish language teacher:

“When my students respond to my Twitter [now X] in Spanish, they receive points. The class that gets the most points wins a pizza party.”

What’s the problem here?

1.  As Alfie Kohn has noted, using rewards to encourage ____ behaviour turns teaching & learning into a payment & reward system: kids buy a pizza by doing ____.  But we really want to get kids to acquire languages because the process itself is interesting.  If we have to pizza-bribe kids, we are doing something wrong.

2.  The kids get a pizza party…during class time? Is this a good way to deliver the target language to kids? What about the kids who don’t use Twitter?  Do they not get to be part of the pizza party?  Do they sit there and do worksheets or CPAs or whatever while their peers gleefully yap in English, chat and cram junk food into their mouths?  What if kids are good at Spanish, but can’t be bothered to write “un tuit”?  What if they are working, or lack digital access?

3.  Output, as the research shows, does not improve acquisition…unless it provokes a TON of target-language response which meets all the following criteria:

  • it’s comprehensible
  • it’s quality and not student-made (ie impoverished)
  • it actually gets read/listened to

So if the teacher responds, and if the student reads/listens to the response…it might help.

4. Workload.  Kids don’t benefit from creating output.  The teacher also has to spend time wading through bad voicemails, tweets and what have you.  Do you want to spend another 30 minutes/day looking at well-intentioned– though bad– “homework” that doesn’t do much good?

5. What do kids do when they compete?  They try to win.  So the kid who really wants pizza is going to do the simplest, easiest thing in French every day just so s/he can get the pizza. Hello ChatGPT.

Now, while the “tweet/talk for pizza” idea is a non-starter, there are much better uses for tech out there…here is one, from powerhouse Spanish teacher Meredith McDonald White.

The Señora uses every tech platform I’ve ever heard of, among them Snapchat (a free smartphone app).  You get it, make a user profile, and add people à la Facebook. Once people “follow” you, you can exchange images and short video with text added, and you can do hilarious things with images (eg face swap, add extra eyeballs, etc).

Her idea is simple and awesome:

  1. She sends her followers (students) a sentence from a story or from PQA.
  2. The kids create or find an image for which the sentence becomes a caption.
  3. They send her the captioned image.
  4. She uses these by projecting them and then doing Picturetalk about them.

Here is an example in French. The teacher writes the sentence quand j’ai des devoirs de français (“when I have French homework”) and a kid makes the meme. The kids get to choose the picture.

[image: student-made meme captioned “quand j’ai des devoirs de français”]

This serves for Picturetalk:  Is there a girl/boy?  Does she have a problem?  What problem?  What is her hair like?  Is she happy?  Why is she unhappy?  Where is she?  What is her name? etc…there are a hundred questions you can ask about this.

Not all the kids will chat/email back, and not all images will work, but over a few months they should all come up with some cool stuff.  You can get them illustrating stories (4-6 images) using memes…

This is excellent practice (for outside class). Why?  Because the kids are

  • getting quality comprehensible input
  • personalising the input without having to make or process junky language
  • building a community of their own ideas/images
  • generating kid-interesting stuff which becomes an in-class platform for generating more comprehensible input

And– equally importantly– the teacher can read these things in like 3 seconds each, and they are fun to read.  #eduwin, or what?