Month: August 2017

Frequency Lessons #2: What Really Matters?

Thought experiment, and neat discussion item for Defartment Meetingz, or Headz or Adminz who don’t understand why Textbookz are the devil in disguise. 

First, read the following lists.  These are English equivalents of Spanish words from Wiktionary.com’s frequency list. If you are using this with colleagues, don’t at first tell them where you got the words. 

List A: welcome, together, window, comes, red

List B: went, that he be, world, shit, that she had gone out

First, you could think about what these lists have in common, how they differ, etc. 

Second, answer this question: which words will be the most useful for students in the real world?

The obvious answer is List A. After all, we always “welcome” people, kids need to know words for classroom stuff like “windows,” we set the tone for classes by working peacefully “together,” and common sense suggests that “comes” and colours such as “red” are super-important. 

The List B words are, obviously, either less immediately useful or "advanced" (i.e., textbook level 4 or 5) grammar.

Now here’s the surprise for us and our colleagues: the List B words are all in the 200 most-used Spanish words, while none of the List A words are in the 1000 most-used Spanish words.

What I got from this was, first, that what is obvious isn’t necessarily true, and second that a sequenced plan of instruction (eg from “simple” to “complex” grammar) would majorly short-change students for their real-world Spanish experiences. 
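The ranking claim above can be checked mechanically against any corpus: count the words, sort by count, and see where a given word lands. Here is a minimal Python sketch of how a frequency-rank list is built — the tiny "corpus" below is a made-up illustration, not real Spanish frequency data.

```python
from collections import Counter
import re

def frequency_rank(corpus_text):
    """Map each word in the corpus to its 1-based frequency rank."""
    # Lowercase and pull out word-like tokens (including Spanish accented letters)
    words = re.findall(r"[a-záéíóúüñ]+", corpus_text.lower())
    counts = Counter(words)
    # most_common() sorts by count, descending; enumerate gives the rank
    return {w: rank for rank, (w, _) in enumerate(counts.most_common(), start=1)}

# Toy corpus for illustration only (not a real frequency sample):
text = "que fue que mundo que fue rojo ventana que mundo fue"
ranks = frequency_rank(text)
# "que" occurs most often, so ranks["que"] == 1; "fue" is rank 2
```

Run this over a real corpus (film subtitles, news text) and you get exactly the kind of list Wiktionary publishes — which is how the List A/List B surprise can be verified.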

The textbook, or the doddering grammarian (or even the smiley new school grammarian with their apps, feedback gadgetry, evidence of learning portfolios, self-reflections bla bla bla) will see language acquisition as a set of skills that we master one rule set or vocab set at a time, starting with simplest and going to “more complex.” However, what people need to actually function in México or Spain is, well, high-frequency vocabulary, as much of it as possible. Why is this? Two simple reasons. 

First, high-freq vocab is what one hears most. Knowing it means getting the functional basics and feeling good because you can understand lots. If you easily understand lots of the target language, you can function even if– as is always the case– you can’t speak as much as you understand. When I’m in Mexico and I can’t say blablabla, I can gesture, point, use other words etc. Never yet had a problem with getting my point across, but I’m always wishing I understood more. 

Second, high-freq vocab builds the "acquisitional platform." When our students are finally in a Spanish or Mandarin environment, knowing high-freq vocab reduces the processing load for new input. If students already know a high-frequency sentence such as I wanted that he had been nicer (in Spanish quería que estuviera/fuera más amable), it will be much easier to figure out what I wanted that she had been more engaging means, because we only have to really focus on the word engaging.

This is the acquisition platform: when we have the basics (high-freq words and grammar) wired in, it gets steadily easier to pick up new words. 

Anyway… I'd be curious to see what people and their colleagues think of this. OH WAIT I FORGOT THE DEVIL 😈. Textbooks. Well, the basic problem with texts here is that they don't come even close to introducing words along frequency lines, as I have noted elsewhere.

Does iPad “talking practice” boost oral fluency? A look at Schenker & Kraemer (2017).


In a 2017 paper, Schenker and Kraemer argue that iPad use helps develop oral fluency. Specifically, they found that iPad app users after “speaking practice” were able to say more in German, and were more fluent– rapid and seamless– in saying it than were controls who had not “practiced” speaking. 
So, prima facie, the authors can claim that focused speaking practice helps develop fluency. 

Q: Does this claim hold up?

A: Not according to their evidence. 

Let’s start with the method. Kraemer and Schenker took English L1 students of second-year German, divided them into two groups, and gave one batch iPads. The iPad group had to use Adobe Voice to record three tasks per week, which had to be posted to a group blog. In addition, each iPad user had to respond verbally to some other students’ posted responses to the tasks. 

The tasks included things such as “describe your room” and “recommend a movie to a friend.”

The control group did nothing outside class other than their usual homework, and the iPad group had their other homework (which the authors do not detail, but describe as work involving “vocabulary and grammar knowledge”) slightly reduced in quantity. 

In terms of results, the iPad group during oral testing on average said more, and was more fluent (using language “seamlessly”) than the control.  The authors thereby claim that “practice speaking” boosted oral competence. 

However, there are a number of study design flaws which render the authors' conclusions problematic.

First, the study compares apples and oranges. The speaking group practised, well, speaking, while the controls did not. The speaking group also had more total time with German (class, plus speaking, plus whatever they did to prepare their recordings, plus listening and responding to others' posted task responses) than did the controls (class, plus "vocabulary and grammar" homework). In other words, the speaking group got both more speaking time and more overall German time than the controls.

This is akin to studying physical fitness by comparing people who work out with those who are couch potatoes, or by comparing people who do two hours a week of working out with those who do four. 

Second, the study does not compare speaking development-focused methods. One group “practiced speaking,” while the other did “vocabulary and grammar” homework.
This is like comparing strength gains between a group of people who only run two hours a week and another group that runs two hours a week and lifts weights. Yes, both will get fitter, and both will be able to lift more weights and run a bit faster (overall fitness provides some strength gains, and vice versa).

However, what should have been compared here are different ways of developing oral fluency. (We should note that fluency first requires broad comprehension, because you cannot respond to what you don’t understand). 

We could develop oral fluency by 

• listening to various kinds of target-language input (stories, conversations, news etc). 

• watching target-language, L1-subtitled film. 

• reading (it boosts vocabulary). 

Schenker and Kraemer’s “practice speaking” will help (at least in the short term). One could also in theory mix all of these, as a typical class does.

Schenker and Kraemer, however, compare one approach to developing speaking with an approach that does nothing at all to address speaking. 

A more persuasive study design would have had three groups: a control, and two different "speaking development" groups. The latter could have compared Schenker & Kraemer's "practice talking" with, say, listening to speech, or reading, or watching subtitled film (or a mix). One group would spend 60 minutes per week recording German (and listening to the 50-75 second German recordings made by their peers); the other would spend 60 minutes per week, say, listening to German. At the end, control, speakers and listeners would be tested and compared.

Third, the study does not control for the role of aural (or other) input. The iPad group, for one thing, had to come up with their own ideas. Since no relatively novice learner, by definition, comes up with much on their own, they must have gotten language somewhere (Kraemer and Schenker do not discuss what the students did before recording their German). My guess is that the speakers used dictionaries, Google Translate, reading, grammar charts, things they heard on YouTube, anything they remembered or wrote down from class, possibly Duolingo, etc., to "figure out" what to say and how to say it. If you were recording work, being marked on it, and having it responded to by strangers, you would surely make it sound as good as you could… and that (in a language class) could only mean getting extra input. So did the speaking group get better at speaking because they "practiced speaking," because they (probably) got help pre-recording, or both?

Which leads us to the next problem, namely, that the iPad group got aural input which the control group did not. Recall that the iPad group not only had to post their recordings, they also had to listen and respond to these recordings. So, again, did the iPad group get better because they talked, or because they also listened to others’ recordings of German?

Finally, there was no delayed post-test to see if the results "stuck." Even if the design had shown the effectiveness of speaking "practice" (which in my view it did not), no delayed post-test = no real results.

The upshot is this: the iPad group got more input, spent more time listening, spent more total time with German, and spent more time preparing, than did the controls. This looks (to me) like a problematic study design. Ideally, both groups would have had the same input, the same amount of listening, etc, with the only difference being that the iPad group recorded their tasks. 

Anyway, the skill-builders’ quest continues for the Holy Grail of evidence that talking, in and of itself, helps us learn to talk. 

The implications for classroom teachers are (in my view) that this is waaaay too much work for too few results. The teacher has to set the tasks (and the blog, iPad apps, etc) up, then check to make sure students are doing the work, and then test them. Sounds like a lot of work! 

Better practice– if one feels one must assign homework– would be to have students listen to a story, or watch a video in the T.L., and answer some basic questions about that. This way people are focused on processing input, which the research clearly says drives acquisition. 

On a personal note, I’m too lazy to plan and assess this sort of thing. My homework is whatever we don’t get done in class, and always involves reading.