
In a 2017 paper, Schenker and Kraemer argue that iPad use helps develop oral fluency. Specifically, they found that after “speaking practice,” iPad app users were able to say more in German, and said it more fluently (rapidly and seamlessly), than controls who had not “practiced” speaking.
So, prima facie, the authors can claim that focused speaking practice helps develop fluency.
Q: Does this claim hold up?
A: Not according to their evidence.
Let’s start with the method. Kraemer and Schenker took English-L1 students of second-year German, divided them into two groups, and gave one batch iPads. The iPad group used Adobe Voice to record three tasks per week and posted the recordings to a group blog. In addition, each iPad user had to respond verbally to some of the other students’ posted responses to the tasks.
The tasks included things such as “describe your room” and “recommend a movie to a friend.”
The control group did nothing outside class other than their usual homework, and the iPad group had their other homework (which the authors do not detail, but describe as work involving “vocabulary and grammar knowledge”) slightly reduced in quantity.
In terms of results, the iPad group during oral testing on average said more, and was more fluent (using language “seamlessly”) than the control. The authors thereby claim that “practice speaking” boosted oral competence.
However, there are a number of study design flaws that render the authors’ conclusions problematic.
First, the study compares apples and oranges. The speaking group practised, well, speaking, while the controls did not. The speaking group also spent more total time with German (class, plus speaking, plus whatever they did to prepare their recordings, plus listening and responding to others’ posted task responses) than the controls did (class, plus “vocabulary and grammar” homework). So the speaking group got both more speaking time and more overall German time than the controls.
This is akin to studying physical fitness by comparing people who work out with those who are couch potatoes, or by comparing people who do two hours a week of working out with those who do four.
Second, the study does not compare speaking development-focused methods. One group “practiced speaking,” while the other did “vocabulary and grammar” homework.
This is like comparing strength gains between a group of people who only run two hours a week and another group that runs two hours a week and also lifts weights. Yes, both will get fitter, and both will be able to lift more and run a bit faster (overall fitness provides some strength gains, and vice versa).
However, what should have been compared here are different ways of developing oral fluency. (We should note that fluency first requires broad comprehension, because you cannot respond to what you don’t understand).
We could develop oral fluency by
• listening to various kinds of target-language input (stories, conversations, news, etc.).
• watching target-language, L1-subtitled film.
• reading (it boosts vocabulary).
Schenker and Kraemer’s “practice speaking” will help (at least in the short term). One could also in theory mix all of these, as a typical class does.
Schenker and Kraemer, however, compare one approach to developing speaking with an approach that does nothing at all to address speaking.
A more persuasive study design would have had three groups: a control and two different “speaking development” groups. One “speaking development” group could have done Schenker and Kraemer’s “practice talking,” while the other, say, listened to speech, read, or watched subtitled film (or some mix). For example, one group would spend 60 minutes per week recording German (and listening to the 50-75 second German recordings made by their peers), while the other spent 60 minutes per week listening to German. At the end, control, speakers, and listeners would be tested and compared.
Third, the study does not control for the role of aural (or other) input. For one thing, the iPad group had to come up with their own ideas. Since relatively novice learners by definition cannot generate much language on their own, they must have gotten language somewhere (Kraemer and Schenker do not discuss what the students did before recording their German). My guess is that the speakers used dictionaries, Google Translate, reading, grammar charts, things they heard on YouTube, whatever they remembered or wrote down from class, possibly Duolingo, etc., to “figure out” what to say and how to say it. If you were recording work, being marked on it, and having strangers respond to it, you would surely make it sound as good as you could… and that (in a language class) could only mean getting extra input. So did the speaking group get better at speaking because they “practiced speaking,” because they (probably) got help before recording, or both?
Which leads us to the next problem, namely, that the iPad group got aural input which the control group did not. Recall that the iPad group not only had to post their recordings, they also had to listen and respond to these recordings. So, again, did the iPad group get better because they talked, or because they also listened to others’ recordings of German?
Finally, there was no delayed post-test to see if the results “stuck.” Even if the design had shown the effectiveness of speaking “practice” (which in my view it did not), no delayed post-test = no real results.
The upshot is this: the iPad group got more input, spent more time listening, spent more total time with German, and spent more time preparing, than did the controls. This looks (to me) like a problematic study design. Ideally, both groups would have had the same input, the same amount of listening, etc, with the only difference being that the iPad group recorded their tasks.
Anyway, the skill-builders’ quest continues for the Holy Grail of evidence that talking, in and of itself, helps us learn to talk.
The implications for classroom teachers are (in my view) that this is waaaay too much work for too few results. The teacher has to set the tasks (and the blog, iPad apps, etc) up, then check to make sure students are doing the work, and then test them. Sounds like a lot of work!
Better practice (if one feels one must assign homework) would be to have students listen to a story, or watch a video in the T.L., and answer some basic questions about it. This way, students focus on processing input, which the research clearly says drives acquisition.
On a personal note, I’m too lazy to plan and assess this sort of thing. My homework is whatever we don’t get done in class, and always involves reading.