We did, along with all of the review that comes along with it. However, there were a couple of important differences:
- We refer to the tests as checkpoints, because they aren't "one and done". After years of it being this way, the word "test" carries the implication that if students don't do well, there's nothing they can do about it. By using the word "checkpoint", we intend to signal that we are just getting an idea of where each student currently is with the material and what we need to do moving forward. In other words, we treated the checkpoints in a formative way rather than a summative one.
- The opportunity to make up for a poor performance on a checkpoint was announced up front. The kids knew from the start of the semester that a single day would not define their grade.
The first objection was that making "that many" different versions of one test would be difficult to impossible, let alone doing so for every test. We found a simple way around this: have the students create their own make-up exercises. This was done topic by topic on the checkpoints, so putting a percentage or letter grade on the checkpoint didn't make sense. Instead, it was about giving the kids feedback on which things they showed they understood and which they didn't - and why. Just as I described in the previous post, the exercises they created needed to be approved by us so that they weren't wasting their time solving an exercise at an inappropriate level (either too easy or too difficult). After the checkpoint, choosing an appropriate exercise was easier for the students, since they had now seen an example. Of course, the difficulty was getting the kids to do more than just change the numbers. On the other hand, the new exercises we would have come up with ourselves would probably have been close to the "change the numbers" type anyway, so while certainly not our favorite kind of exercise for the kids to create, it wasn't the worst thing that could happen. Once an exercise was approved, the feedback process for the portfolios took over, with an emphasis on having the students explain their work. Once it became clear that a student understood the material, the folder in the portfolio was marked "meeting expectations". However, if a subsequent checkpoint made it clear that the student was now struggling with a topic previously marked as "meeting expectations", the student needed to create and complete a new exercise.
The second objection to allowing students to work until they show they understand the material is that "the real world doesn't work that way". With all due respect, yes, it does. To cite a specific example, the evaluation process I go through as a teacher is not a one-time test. Instead, it's an ongoing conversation between me and the administrator doing the evaluations. Yes, it includes in-class observations - "tests" - but the observations don't get an A, B, or whatever. Instead, they give us a common experience on which to base our conversation, looking for strengths and opportunities for improvement. In other words, the evaluation process I go through with an administrator is strikingly similar to the checkpoint-and-redo process I go through with my students. In fact, the only common "real world" setting I know of that works in a one-and-done way after high school is college. Everything else is much more dialogue- and improvement-driven.
What we found through this process was that the students were more comfortable both day-to-day and on the days of the checkpoints. And despite the lack of one-and-done opportunities throughout the semester, the algebra 1 kids did well on the common assessment we give as our final exam. In fact, they did better on the second semester exam, after a semester of portfolios, than they did on the first semester exam, which didn't have them.
So the final reflection on this process will be on how we determined the grade that was placed on the transcript - which I'll do in the next update.