Friday, July 19, 2013

Show me what you can do

I have a new mantra this semester, reflecting a new (for me) way of thinking about assessment and evaluation. The mantra is simply this: "Show me what you can do." It's a big shift from my early assessment systems.
"Show off!"
Back then, students' grades were based on the traditional proportion of correct responses over the semester. Early mistakes were penalized, opening a permanent hole in the gradebook, and whenever the accumulated holes surpassed another 10% of the total points, the student dropped a letter grade. Those grade cut-offs, while very common, felt artificial. This is an example of what's called the deficit model of assessment: if you've ever "taken points off," you are using the deficit model. I've taken a lot of points off in my day. I've taken points off for simple sign errors. I've taken points off for botched arithmetic. And I've taken points off for conceptual errors (e.g., sqrt(x^2 + y^2) = x + y), even when they were unrelated to the learning objectives at hand.
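For anyone who likes to see the arithmetic spelled out, here's a minimal sketch of that deficit-model computation. The function name and the 90/80/70/60 cut-offs are my own illustration of the usual scale, not a quote from any particular gradebook:

    def deficit_model_grade(points_earned: float, points_possible: float) -> str:
        # Deficit model: every 10% of the total points lost costs one
        # letter grade. The 90/80/70/60 cut-offs are the common scale,
        # assumed here for illustration.
        percent = 100 * points_earned / points_possible
        for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
            if percent >= cutoff:
                return letter
        return "F"

Notice what the function never asks: when the points were lost, or whether the student ever fixed the mistake. A sign error in week two counts exactly as much as one in week fifteen, and the hole it opens is permanent.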

But the deficit model directly contradicted my long-held belief in the importance of learning from our mistakes. As I became more aware of the tension, I began shifting toward more of a proficiency model of assessment. This summer, the conditions were right, and I took the plunge.

There's a lot to say about this, but I wanted to at least get something out there. The basic structure is this:
  1. I identify a set of 10 to 15 clear learning targets ("I can..." statements) for each unit of study and align every task I collect with one or more learning targets. 
  2. I view collected tasks as "evidence of proficiency", and evaluate them in light of a single question: How convincing is this piece of evidence? 
  3. I invite (and expect) students to resubmit evidence and to submit additional evidence as needed to show proficiency.
So instead of looking for places where I need to "take points off," I've begun viewing my students' work as evidence of their proficiency. I've shifted my role as assessor from uncovering deficiencies and misconceptions to making sure I give my students plenty of opportunities to show what they can do. Incidentally, these tasks do include opportunities for them to confront and explain potential misconceptions.
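One way to picture the bookkeeping behind those three points is a sketch like the one below. The names are mine and the aggregation rule is an assumption, not something the system prescribes; the point is only that each learning target accumulates evidence, and a resubmission adds newer evidence rather than averaging in an old stumble:

    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        task: str    # e.g. "cups task"
        score: int   # rubric score, 0-5 (rubric below)

    @dataclass
    class LearningTarget:
        statement: str                       # an "I can..." statement
        evidence: list[Evidence] = field(default_factory=list)

        def current_score(self) -> int:
            # One possible aggregation rule (my assumption, not stated
            # in the post): the most recent evidence speaks loudest, so
            # a convincing resubmission supersedes an early miss.
            return self.evidence[-1].score if self.evidence else 0

The last method is the whole shift in miniature: nothing in it subtracts.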

These opportunities are varied in form; they include both timed and untimed tasks, ranging from fairly structured to open-ended. The cups task is an example of a fairly structured, untimed task. I'll share other examples as time permits.

Here's the rubric I use to assign a score on each learning target based on the evidence gleaned from a performance task:
0 No evidence / missed opportunity. There is no evidence of this target available yet.
1 You’re not there yet. The evidence suggests you need additional support, you may have some misconceptions to overcome, or both.
2 I’m not convinced. The evidence is mixed or not yet convincing: sometimes you make good progress, but you also make errors, get stuck, or struggle to complete some tasks.
3 I’m almost convinced. You can probably do this, but I don’t know whether you can do it consistently.
4 I’m convinced. There’s good evidence that you can do this consistently; you explain your reasoning clearly, and any mistakes tend to be minor and easily corrected or explained.
5 I’m sold. You obviously “own this”: you understand it in a deep way, rarely make mistakes, and communicate your understanding clearly and convincingly.
The rubric is adapted from one my colleague Math Hombre used in a previous iteration of this course. [Update 2/27/14: Here's the new and improved version!] Inherent in this rubric is a focus on the evidence, not the student. Students are expected to submit more convincing evidence as it becomes available.
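For the programmatically inclined, the rubric is just a mapping from 0-5 to a judgment about the evidence. Here's a self-contained sketch of a proficiency snapshot built on it; the two sample target statements are made up for illustration:

    RUBRIC = {
        0: "No evidence / missed opportunity",
        1: "You're not there yet",
        2: "I'm not convinced",
        3: "I'm almost convinced",
        4: "I'm convinced",
        5: "I'm sold",
    }

    def proficiency_report(scores: dict[str, int]) -> None:
        # One line per learning target: the current rubric level,
        # not a running points deficit.
        for target, level in scores.items():
            print(f"{target}: {level} ({RUBRIC[level]})")

    proficiency_report({
        "I can solve linear systems graphically": 4,
        "I can interpret slope in context": 2,
    })

A report like this reads as a to-do list of targets still awaiting convincing evidence, not a ledger of lost points.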

The bottom line: this semester, my students' grades are based on the accumulation of evidence of their proficiency, and as a result they appear to be treating the tasks I provide as opportunities, not hurdles. I'll say more about that in a subsequent post.
