Should I Mark Behaviour? The Great JGR Debate, and a Silver Lining for Behaviour Rubrics.

Some years ago, Jen S. and her colleagues built a behaviour rubric for her C.I. classes, which Ben Slavic named JGR and which was discussed on his blog and then elsewhere.  Here is a version I have played around with: INTERPERSONAL COMMUNICATION rubric.  I initially opposed the use of JGR, then used it, then ditched it, and now I use it (but not for marks). Note: this is a modified version of the original JGR; and I don’t know for how long she used her rubric, or if she still does, or what the original looked like.

JGR was developed because– like all of us, especially me– the creator had some challenges managing her C.I. classes in her initial year with T.P.R.S., which can in (especially my) rookie hands turn into a “woo-hoo no more textbook!” clown show.  JGR basically “marks” classroom behaviour.  JGR specifies that students make eye contact, add story details, ask for help, not blurt, not use cell-phones etc.  Jen used it (and if memory serves Ben also recommended its use) by making part of her class mark a function of behaviour as marked by JGR.  So the kids might get, say, 20% of their mark each for reading, writing, listening, speaking and 20% for their in-class behaviour.  Part of the thinking here was that some behaviours lead to acquisition, while others do not and also wreck the classroom environment, and so “acquisition-rewarding” behaviour should be rewarded.

JGR– for many people, including me– “works.”  Which is why– especially when linked with allegedly “acquisition-promoting” behaviours– lots of people are interested in it.

JGR is a kind of “carrot-and-stick” marking tool:  if the kids engaged in the behaviours JGR specified, their marks went up, partly because (a) they got marks for those behaviours, and partly because (b) the behaviours should– in theory– help them acquire more language.

This can of worms was shaken around a bit on Ben’s blog, and recently, thanks to the always-remarkable Terry Waltz, there have been FB and Yahoo discussions about it.  So, today’s question:

Should we assess in-class behaviour for final marks purposes?

My answer: no, never.  Why?

1. Behaviours typically asked for in JGR– or other such rubrics– are not part of any curricula of which I am aware.  Every language curriculum says something like, students of the Blablabian language will read, write, speak and understand spoken Blablabian, and maybe say something about Blablabian culture.  Nowhere does any curriculum say “students should suggest details for stories” or “students will look the teacher in the eye.”

If it’s going to get a mark, it has to be part of course outcomes.  Any assessment guru (Wormeli, Harlen, etc.) will tell you the same thing: we do not mark attitude, behaviour, homework, etc., as these are not part of final outcomes.

To put it another way, how do we judge the New England Patriots football team?  By how well, often and/or enthusiastically they practice and look Bill Belichick in the eye, or by how many games they win?  How should Tom Brady be paid: by how often he shows up for practice, and how nice he is to Belichick, or by how many yards he successfully throws?  That’s right.

We could– and I often do– end up in situations where a “bad” kid does well, or a “good” kid does poorly.  I have had bright-eyed, bushy-tailed teacher’s pet-type kids who were not especially good at Spanish, and I have had giant pains-in-the-butt who were quite good.

My best-ever student in TPRS, Hamid Hamid, never added story details, never looked up, and always faced away from the board.  Yet he CRUSHED on assessments and got 100% in Spanish 2.  Two years later, his younger brother, Fahim (also a great student) told me that Hamid Hamid was both shy and deaf in his left ear, so always “pointed” his right ear at the board (and so appeared to be looking away).  This kid’s mark would have been lowered by assessing his “in-class behaviour,” which– given his epic Spanish skills– would have been absurd.

2. As Terry Waltz points out, neurodivergent kids can– and do– acquire language without engaging in many behaviours typically required by participation and behaviour rubrics. She also points out that forcing neurodivergent kids into the “normal” mold is at best less than productive. If you are autistic, anxious, suffering from PTSD (as my stepdaughter does) or facing any other neuro challenges, “engagement” rubrics can make your life miserable while not meaningfully measuring what you can do with the language.

3. The only thing required for language acquisition is reception of comprehensible input.  While behaviour rubrics are designed to get kids to tune in, many behaviours which do make for a good class– e.g. people adding good details to stories, looking at each other– are not actually necessary for acquiring language.

All of us have been there: you have a plan, you did your story warmup or whatever, but the kids aren’t into it.  You bust out a Movietalk but they aren’t into that either.  Dead class. Now, in a C.I. class, we don’t have recourse to worksheets or whatever, and we still have to teach the language. I have a bail-out move here: direct translation, and I always have a novel on the go, so I can read aloud, and Q&A the novel.  If I’m being particularly non-compelling, I’ll throw an exit quiz at them.

The point: if the kids are getting C.I., they are acquiring.  If they are miserable/tired/bored with stories, fine.  They are gonna get C.I. one way or another.

4. Any kind of behaviour rubric plays the awful “rewards” game.  Ask yourself this question:  why do I teach? The answer– other than because I have to make a living— is probably something like, because it’s interesting, I have some measure of control over my work, and I love kids and my subject.  Some will add that teaching, properly done, opens doors for kids.  Teachers do not teach because they want to be evaluated, or because they want to use the latest gizmo, or because they want public praise, etc.  They are, in other words, intrinsically motivated.  They want to work because the work is good and worthy in itself.

When we institute rewards for behaviours, as Alfie Kohn has spent a career arguing, we destroy intrinsic motivation.  We turn something interesting into payment for marks.  The point stops being paying attention to the story– or adding to it cos you actually care about it– and becomes something rote.

5. Using behaviour rubrics can dampen professional self-examination. If my practice is such that I have to use marks as a stick to keep kids in line (the policing metaphor is not an accident), there are two possibilities: tough kids, and/or I am doing a bad job.  The question why are they not tuned in? might be answerable with any of the following:

— I am not being sufficiently comprehensible

— I am ignoring the top or the bottom end of the class– too fast/slow or simple/complex

— my activities are not interesting, varied or meaningful enough

— the kids see no purpose

— accountability: they don’t see tuning in as something that results in real gains

— I lack basic skills (smart circling, control of vocab, etc etc)

— my story sucks 😉

I had better be able to look in the mirror, consider and then deal with these possibilities, rather than merely acting like a cop and demanding obedience.

Now, behaviour does matter.  You cannot run a T.P.R.S. class without rules etc.  My basic rules:

  • no phones or other distractions (including side-talk, blurting etc)
  • no insults of anyone other than oneself or of rich entitled people
  • listen, watch and read with the intent to understand; ask when you don’t
  • do not create or engage in distractions

The tools that we have for dealing with distracting behaviour include

  • warning the offender, standing by their desk, calling Mom and Dad, etc
  • pointing, with a smile, to classroom rules every time there is a problem
  • sending them to admin if necessary
  • taking their phone until 3:15 (most kids would rather die)
  • detention, where we discuss behaviour
  • assigning read & translate (quiet seatwork)
  • taking the kids outside for a walk, or doing some other kind of physical brain-break
  • changing activities
  • doing a quiz
  • talking to kids one on one and asking what do I need to do to get you focused?


The upshot?  We should not, and need not, mark “behaviour” or “participation.”


Addendum:  is there ever a use for classroom behaviour rubrics?

Yes.  I get my kids to self-evaluate using JGR every 2-3 weeks.  My version generates a mark out of 20.

Nineteen out of twenty kids will very honestly self-evaluate their behaviour, provided they understand exactly what is expected.  One kid in twenty will heap false praise on him/herself.  For the false praisers (“I never blurt in class!”), I sit them down and explain what I think, then we agree on a more realistic mark.

I save these JGR “marks” and once in a blue moon, when a helicopter parent or an Admin wants to know, how is Baninder doing in Spanish, I can point to both the spreadsheet with Numberz and JGR.  This frames the inevitable discussion about marks in terms any parent can understand.  Any parent, from any culture, understands that if Johnny screws around and/or does not pay attention in class, his mark will drop.

JGR– in my experience– accurately “predicts” the marks of about 80% of kids.  When I can show a kid (or a parent or admin), look, here are Johnny’s marks AND Johnny’s own description of how he behaves in class, we can have an honest discussion about marks, Spanish, etc.  Win-win.


  1. My building administrators pushed us not to grade behaviors/participation last year. I understand why! However this is a learning process for both teachers and students. If we are not grading their participation and they know it, there should be new rules of the game! I will use JGR as self assessment plus teacher evaluation every TPRS unit (Terry’s Zhongwen Bu Ma Fan). Your post looks promising to me! Thanks!

1. If compelling messages are not a part of your curriculum, they should be. jGR is one tool to help us assess student affect and how engaged children are.

2. Neurodivergent children like my son, who has Asperger’s, do have a hard time making eye contact, and any assessment should accommodate this by simply modifying the rubric for the child. Thanks for bringing up that point; it’s very compassionate of you.

I would love to discuss curriculum. We must constructively align our curriculum to our assessments, so thank you for bringing up that point. Please check out my blog and click “curriculum” or check out Ben Slavic’s blog on the curriculum discussion. “Tracking the speaker” is a WIDA performance descriptor used by 38 states for ESL, and I included it verbatim in my curriculum.

  3. Hilarious truth: “If I’m being particularly non-compelling, I’ll throw an exit quiz at them.”

    I guess what bothers me about the neuro-divergent argument is that it assumes that we teachers are robots applying x rule indiscriminately. Kids understand why I do not force Johnny to make eye contact with me, even when I tell everyone to make eye contact with me. Most of them have grown up with Johnny and know him better than I do. So really, teachers who mis-over-abusively-apply JGR should consider that fairness does not force everyone to do the same thing at the same time. Neuro-divergent argument settled.

The positive side of JGR is that it gives students clear expectations of what to do– and clear expectations handle 80% of classroom management issues. JGR is a valuable tool for most of us. I like the way you apply it in your classes, though. Grades are something else.

    1. “I guess what bothers me about the neuro-divergent argument is that it assumes that we teachers are robots applying x rule indiscriminately. Kids understand why I do not force Johnny to make eye contact with me, even when I tell everyone to make eye contact with me.”

      I agree with this very much Mike.

  4. “When we institute rewards for behaviours, as Alfie Kohn has spent a career arguing, we destroy intrinsic motivation. We turn something interesting into payment for marks.”

    I’m glad you brought this up Chris. And I think Kohn argues that when we reward for anything (behavior, skill, performance) we destroy motivation (and performance).

    I would like to abolish grades in my classroom now. But I keep coming back to my intuition and Kohn saying that the abolition of grades is not something that will best be done by the individual teacher but rather on a school-wide level. Perhaps that’s not what you are arguing for at all, but that’s what I’d like to see for my own kids and students.

    1. Thanks, Jim!

      Grades don’t do much. I always say, the less eval a lang teacher does, the better. Input trumps everything, at least in my experience.

      And “weighing the pig doesn’t make it fatter.”

  5. I am frequently not compelling, although compelling input is my aim. Expectations like those on JGR help to keep kids on track when my lessons fail to engage every kid every day. I have never been able to do that, although I come closer to it with TPRS than with other methods I have tried.

My official goal as an employee of the school district is to teach the syllabus–measurable outcomes for each student, and if they don’t get there they fail. But my personal goals are bigger. They are warmer and not as concrete. I do the former without disregarding the latter, which I see as more important on a human level:

    –Language Goals: Regardless of outcomes on state or district tests, I want my students to be confident and comfortable with the language. They almost always are.

–Emotional Goals: I want them to feel good about the language and want to keep on learning it. The way I do that is with lots of personal connections, so they associate good feelings with Spanish. The memory of good feelings lasts.

    –Success Goals: I want to give my students tools to succeed in life in any endeavor they choose and that includes knowing how to behave and how their behavior is perceived by others. Using a rubric like JGR helps give some kids a nudge in the right direction. My grade on behavior is not a make or break thing. In fact, in my interpretation, a kid can get an A (90%) on that one part of their grade and not ever utter a full sentence. That is not harsh. I am almost embarrassed at how easy it is.

    On grades. They barely matter. I have used different grade calculation schemes over the years and students usually get the grade they feel they deserve no matter how it is calculated. The kids with IEPs often get higher grades than they have ever earned in any class.

    1. I researched the history of grades and grading in the United States, comparing the American system with those of other countries. It was quite revealing and showed how arbitrary the system is.

      Did you know that the 100-point (or percentage) scale has no pedagogical foundation or justification? It came into prominence with the widespread use of computers because programmers found it easier to program than other numbering schemes.

      Did you know that a three- or five-point system provides greater accuracy in sorting students by proficiency? A twenty-point quiz graded on a percentage scale has a ±2 letter grade margin of error. We reduce all grades to a five-point scale (A–F) anyway, so why not assign marks on that basis in the first place?
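The point about coarseness is easy to check with a little arithmetic. Here is a minimal sketch (the exact letter-grade cutoffs below are an assumption– the common US 90/80/70/60 bands) showing that on a twenty-point quiz, each point is worth five percentage points, so a swing of just two points can move a student a full letter grade:

```python
# Map a percentage to a letter grade, assuming the common US bands:
# A >= 90, B >= 80, C >= 70, D >= 60, F below that.
def letter(percent):
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percent >= cutoff:
            return grade
    return "F"

# On a 20-point quiz, one point = 5 percentage points.
# Watch how quickly adjacent raw scores cross letter-grade bands.
for score in (18, 17, 16, 15, 14):
    pct = score / 20 * 100
    print(f"{score}/20 = {pct:.0f}% -> {letter(pct)}")
```

Running this, 18/20 is an A but 16/20 is already a B, and 17/20 (B) drops to a C at 15/20: two raw points, one whole letter. A five-point scale assigned directly would sidestep that false precision.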

      Did you know that in the early days of the 100-point scale, 50% was a C? In some countries, it still is. That’s right, the middle percentage was ‘average.’ Now, ‘average’ is in the 75% range. As one researcher noted, the current system is skewed toward failure and is stacked against students. In France, they use a 20-point scale, and 10 is average. My brother took a course at a school in Albertville, France. The professor told them on the first day, “None of you will ever receive a grade above 18 in this class. That will be extremely rare and only if there are no errors. I as the professor would receive a 19. Twenty is reserved for God.”

      Did you know that standardized tests are, for the most part, norm referenced rather than standards referenced? That means you will never know how well you did on the test. You will be informed only of how well you did compared to everyone else who took the test. Thus, you receive notification that you scored ‘in the 99th percentile,’ for example. 99% of the people who took the test scored worse than you did. But how well did you do against an objective standard, how many questions did you get right? You will never know because the test makers and scorers consider it irrelevant. These are sorting mechanisms, and the only thing that matters is where you placed in the pack (or peloton).

Did you know that the cut-off scores for standardized tests are arbitrary as well? When the Common Core State Standards were first introduced, this became obvious to anyone who was paying attention. Administrators, teachers, and parents expressed concern about what student scores would look like with the new tests. They were reassured that the scores would not vary greatly from past years. Why was that? Because 1) they were norm referenced (comparison with other test takers) and 2) the scoring company arbitrarily chose the cut-off scores. Early in my teaching career, I took the Spanish Praxis II Exam. I failed it by one point. When I looked at several years’ results, that same score would have passed with a comfortable margin in any of the previous five years. What had changed? Not the number of correct answers I provided but the number of people who scored better than I did (most of them native speakers).

      Did you know that the most accurate way of grading is through narrative grades? That was standard procedure until the perceived need for portability of grades reached prominence.

      I think that’s enough to show that grades do not accomplish or indicate what we often assume they do.

  6. I was involved in the discussions around JGR and wound up using it very similarly to the way you do, not as a mark or grade to be entered in the final grade. In my weighted gradebook, classwork (JGR primarily) and homework (given rarely) carried zero weight but maintained a record that I could show parents and students. We could then draw cause-and-effect inferences from the data.

    In my experience, one of your statistics is slightly off. You wrote, “Nineteen out of twenty kids will very honestly self-evaluate their behaviour, provided they understand exactly what is expected. One kid in twenty will heap false praise on him/herself.” What I found was that one student heaped false praise on him/herself while two students rated themselves too harshly and failed to recognize behaviors that fell into the guidelines. Sometimes you have to help students see when they are doing well.

    As for students who are not neurotypical, I agree that the argument assumes that teachers robotically (not that teachers are robots) and undifferentiatedly apply classroom rules and policies. I had a student on the autism spectrum who never wrote anything down unless it was something he had to turn in. At one of our first IEP meetings, I mentioned this to his parents. They stiffened and braced themselves for the predictable harangue. I continued, “But he aces all of his assessments, so I see no need to make him do something that will make no difference in his acquisition of German.” After that, his parents were two of my strongest supporters. BTW, this student passed the Advanced Placement Exam after three years of German.
