Monday, September 9, 2013

I'm not grading this

I asked my students to turn in a draft of the Cheesecake Task last week. But when I sat down this weekend to write feedback on their drafts, I hit a snag. Simply put, the work was not good, but their self-evaluations were off-the-charts high. How could this be?

Before I started inking comments, I decided to sort the stack into two piles:

Pile one: Almost got it, needs minimal feedback. 
Pile two: Needs a lot of work (and lots of feedback).

Pile one had 5 papers in it. Pile two had 19.

Ugh. 

This semester, I'm continuing my experimentation with standards-based grading. (See my previous posts: Why rubrics?, Show me what you can do, and I can show you more than that.) This means setting up a clear set of expectations (learning targets, written as "I can" statements) and using an evidence-based rubric to evaluate learners' progress toward those targets.
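
To give a feel for the evaluation scheme (the scale below is an illustrative sketch of this kind of rubric, not my actual one, which is shown with the targets further down): each learning target gets its own row, and the rating is supposed to reflect the evidence in the submitted work, along these lines:

  0 = no evidence of the target in the work
  1 = an attempt, but the evidence is incorrect or incomplete
  2 = partial evidence with notable gaps
  3 = solid evidence with minor gaps
  4 = clear, correct, well-explained evidence

The key word is evidence: a 4 shouldn't mean "I can do this"; it should mean "my work shows it."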

Before they turned in their drafts, I asked my students to use a rubric to self-evaluate their work. I was thinking this would help them focus their submissions and help ensure they had "good evidence" of their proficiency. Very clever of me, I thought.

Except it didn't work. 

Here are the targets and rubric I shared with my students before they turned in their drafts.


The curious thing was that many students rated themselves as 3's and 4's on almost everything, but when I looked at their work, there was no supporting evidence. For instance, quite a few students rated themselves highly on the target "I can represent patterns using T-charts..." but their written work showed no use of T-charts at all.

I was stymied.

So I explained it to my wife and asked for her thoughts. And that's when I learned something.

She pointed out that my targets are written as "I can..." statements, and students probably read them and said, "Yeah, I can do that," rating themselves accordingly, regardless of whether their work contained any evidence of it. I was expecting an evidence-based evaluation grounded in the task at hand. The students were looking at it differently.

One student later summed it up this way: "I think the reason we rated ourselves high is because we can do these things, just not with this task." (There was some understandable frustration with the task, which they had been working on for ten days at that point.) It was a good point, and it's why this won't be the only piece of evidence I collect.
 
So what to do? Spend hours writing extensive feedback that would "help them" improve the quality of their evidence? Cross out their self-rated 3's and 4's and replace them with 0's and 1's? I really didn't want to go there. The issue was due to a misunderstanding that, once corrected, would allow students to improve the quality of the evidence without needing my feedback. So I decided: I'm not grading this.

Instead, I sorted the work into piles based on their progress. Four distinct piles emerged (I'll summarize them in a moment). One pile contained 5 student solutions that were either correct or almost correct.

My first thought was to use the "almost correct" group as a sort of expert group and have them help the others. But I worried that might devolve into the experts showing everyone "what to do," taking all the problem solving out of the task. That felt like a good way to ruin a great task.

I decided (after conversing with my wife) to use these piles of similar efforts to form homogeneous groups. This way, I could consult with each group quickly at the beginning of the work time and then they could forge ahead with a united purpose.

To manage this, I wrote a single symbol on each student's paper (star, triangle, phi, and hash mark -- don't ask me how I came up with those, except I didn't want to use anything that felt like a ranking).
 
Then, after a short in-class workshop to get students thinking about evidence-based grading, I invited everyone to form small teams based on their symbols.

Here are the summaries of the four "piles," along with the support and directions I gave them during my brief consultation at the beginning of their work time. (For the mathematically curious, a sketch of where this work leads follows the list.)
  1. Used simpler cases effectively. Obtained correct or nearly correct data. (5 students)
    1. Instructor support: None needed. 
    2. What's next: Work on finding an explicit rule and justifying the recursive pattern.
  2. Used simpler cases superficially. Obtained incorrect data. (8 students, split into two groups of 4)
    1. Instructor support: Recommend they hold themselves in the simpler cases a bit longer. 
    2. What's next: Correct your table of values and look for a pattern.
  3. Did not look at simpler cases. (4 students)
    1. Instructor support: Help them understand what the "simpler cases" strategy looks like for this problem. 
    2. What's next: Work together to explore some simpler cases and build a table of values.
  4. Slicing like a pizza (star pattern or perpendicular grid pattern). (7 students, split into groups of 3 and 4)
    1. Instructor support: Help them notice that these patterns do not maximize the number of regions formed.
    2. What's next: Hold yourselves in the simpler cases until you are convinced you've found the largest possible number of regions. Then start looking for patterns. 
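
Here is that promised sketch of where the "simpler cases" strategy leads. I'm assuming the Cheesecake Task is a version of the classic "maximum number of pieces from n straight cuts" problem (the slicing patterns in pile 4 suggest as much), so treat the details as illustrative rather than as the task itself:

  cuts (n):         0   1   2   3   4   5
  max pieces P(n):  1   2   4   7  11  16

Recursive pattern: the nth cut can cross each of the n - 1 previous cuts at most once, so it passes through at most n regions and splits each one, giving P(n) = P(n-1) + n.

Explicit rule: stacking those additions onto the single starting piece gives P(n) = 1 + (1 + 2 + ... + n) = n(n+1)/2 + 1.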

The groupings let different students receive different levels of support, providing a degree of differentiation across the various stages of the gradual release of responsibility framework.


I also stole an idea from a colleague (and borrowed his materials) and gave each group a stack of three Dixie cups (red, yellow, and green) to use to indicate their status. @delta_dc describes the method in his post How's it going? if you want to learn more.

Green means "We're good; no questions." Yellow means "We have a question, but we're able to keep working." Red means "We're stuck and need help before we can continue."
It proved to be an effective way for me to distinguish between high-priority questions (red cups) and lower-priority ones (yellow). When all the cups were showing green, I circulated, observing individual groups and occasionally interrupting to get a status update or to point out a potential problem I had noticed.

This was the first time in my teaching that I'd used homogeneous groups and a targeted, limited, preplanned intervention as a form of differentiated instruction. In the end, every group made good progress, and with just the right amount of support to get them "unstuck," they found their way forward through their own collaborative efforts.

And to think, I was about ready to wear out my grading pen--and burn through several hours of my weekend--writing individual (and unnecessary) suggestions for what needed to be improved.

