Saturday, December 24, 2016

Food Network

I enjoy watching Food Network.  There are a number of shows that, as I’m flipping through the stations, I will always stop to watch.  Two of my favorites are Good Eats and Cutthroat Kitchen, both hosted by Alton Brown.

Good Eats doesn’t appeal to everyone, as Alton goes beyond the usual format of a cooking show and explains why all of the ingredients interact the way they do, and oftentimes makes suggestions as to what you could use as a substitute for some of the ingredients.  This makes the show sort of a hybrid between the standard PBS cooking shows and chemistry class.

Cutthroat Kitchen, on the other hand, is more of a game show where the contestants are told they will be making a certain type of dish (not a specific recipe), but then Alton throws different sabotages at them that force them to think on their feet without the assistance of any new ingredients or utensils.  Most of the sabotages are ridiculous and would never be experienced in a real kitchen, but the entertainment value is high and the dishes actually turn out well in most cases - at least that’s what the judges on the show say.

Those who watch Good Eats purely for the recipes are usually annoyed by the chemistry part.  They don’t care about why the recipe works. All they want is to have the recipe in hand, watch Alton make the recipe, and then try it themselves.  Substitutions aren’t important. Variations aren’t important. The only thing that matters is to be given step-by-step instructions to create the recipe.

But what happens when you have to change the recipe, either because you forgot to buy one of the ingredients at the store or because you have a friend who is allergic to something in the original recipe?  For those who rely on step-by-step instructions, the answer is to go back to the store or to pick a different recipe.  For those who understand how the recipe works, a quick (or a planned) substitution can be made, and the dish will turn out just fine.

Sound familiar?  All that matters in many math classes, to students and teachers alike, is that the students are able to follow a set of step-by-step instructions to get the right answer. Why it all works isn’t important - just follow the directions.  Variations aren’t important, since there won’t be any variations on the tests, at least not the ones given in class.  So the teacher sees their job as presenting the material in as many ways as necessary for the students to be able to follow the step-by-step algorithm.  Students repeat the refrain “just tell me what to do” because they see their role as memorizing the steps and reproducing them on the test.

Ummm...this isn’t math.  Not really.

Real math involves understanding why the algorithm works, and being able to adjust it to fit a new situation.  This is why both teachers and students struggle with preparing for some standardized tests - namely, the ones that make a habit of throwing in exercises that rely on the appropriate concepts but that are not like the standard exercises found in a textbook.  The AIR test last spring is a good example of this.  At times, AP tests can be as well.  When this happens, the students complain that the test wasn’t fair because they had never seen any problems like these before.  The teachers complain that if they had known “that kind of problem” was going to be on the test, they would have made sure the kids saw some of them.  In other words, both the students and the teachers see memorizing the algorithm for a specific type of exercise as the purpose of math class.  It’s not.

Math class is supposed to be about empowering the students with both the basic concepts and mechanics as well as with the problem-solving skills to flexibly use them in new situations - even if the situation occurs on a standardized test.

The implications for what needs to happen in the classroom are deep.  Teachers are told that they need to differentiate the instruction to meet the needs of every student.  Students come to expect the teacher to “teach the way they learn best”.  Let’s be honest: we all learn differently.  That means a high school teacher would need to teach the same lesson in over a hundred different ways each day, assuming the teacher knows the way each student learns best.  This is not reasonable.  Actually, it’s not even possible.  Twenty-four hours wouldn’t be enough time for this to happen, let alone the seven hours in a school day. And besides that, this would still focus on the basic algorithms and not on the flexible problem solving.

So what is possible?  Does the method of instruction exist that empowers the students to take responsibility for their learning?  Can one method actually be used every day that provides the opportunity for every student to learn how to problem solve the way they learn best?

Yep.  Any of the discussion-based methods - project-based learning, problem-based learning, and Harkness, for example - do exactly this.  It’s not necessary to change the instruction for each student.  What is necessary is to give the kids the freedom to learn the material in the way they are most comfortable, and have a deep enough understanding of the material ourselves to be able to support them when they get stuck without resorting to “here, let me show you how to do this”.  And by learn, I don’t mean memorize the algorithm.  I mean learn the material to the point that they can use the content and skills flexibly.

The kids can’t stop once they are able to make dinner from the recipe.  The standardized tests - and life, for that matter - demand more than the ability to reproduce a recipe they've seen on an episode of Good Eats. They need to be ready to compete on Cutthroat Kitchen.

Saturday, September 24, 2016

Evidence-Based Assessment: The Self-Assessment

At the beginning of each school year, we, as a district, as a school, as a department, and as individuals go through the process of setting goals for the year.  We set metrics against which we will measure ourselves, work throughout the year to make progress on the goals, meet with peers and administrators to get feedback, and make adjustments along the way.  In addition to this, we keep track of our own progress, oftentimes being our own harshest critic.  Occasionally, we will do a short reflection to see what we have accomplished and what we still need to work on.

Now, imagine replacing all of this with a system in which we receive a new set of goals every three to four weeks, and at the end of every three to four weeks we are judged, without regard to anything else that may be going on, on how well we have met the previous set of goals.  If we haven't met the previous goals, we have no chance to revisit them and no chance to improve our performance on them. Instead, we receive a rating that will be used at the end of the year as part of a final judgement we will receive.

I'll take the first system. Thanks.

Especially since I have a lot of control over the process in the first system.  Of course, I also have a lot of responsibility to monitor my own progress, as well as to seek out and respond to feedback from others.

Many doubt that students have the capacity for the responsibility part of the equation. That doesn't mean we should resort to the second system described above, which, sadly, is the standard system we use to give grades to our kids. Instead, it means that we should teach them how to set goals, how to self-assess, and how to respond to feedback.  In other words, we should help them gain the life skills they will need.  You know, the skills we need as part of our professional growth.

This is why we included self-assessment as part of the process of going gradeless.  The structure was simple:

  1. Give the kids a list of the skills we have been working on, asking them to indicate, for each skill, whether they can do any question, a limited set of questions, or essentially no questions.  Emphasis was placed on the evidence the students had provided (on checkpoints or in their portfolios) and not simply on how well they felt they understood the material.
  2. Have the kids look at their responses to the list and, based on the responses, give themselves a letter grade.

This was done through a Google form, which made administration and compilation easy.  The first time we did this, the algebra 1 kids were brutally honest with themselves, with many giving themselves a lower grade than I would have assigned.  Their honesty continued throughout the semester.  The honors precalc kids, on the other hand, all stated they currently deserved an A.  My response was not to tell the kids they were wrong, but rather to tell them that the evidence did not match their assessment, and to please provide the evidence.  While there were a few kids who consistently gave themselves higher marks than the evidence indicated, the overwhelming majority of the precalc kids were as honest as the algebra 1 kids for the remainder of the semester.  At the end of the semester, when we did this self-assessment one last time, we followed it up with a one-on-one conference with each student.  If the student and I agreed on the semester grade, then the significance of the final exam was to confirm the grade.  If the student still believed they deserved a better grade than I believed their evidence had demonstrated, then the final exam became the vehicle through which they could convince me they were correct.

So, looking at the entire process, while we obviously have to set a few of the goals in place - the curriculum is the curriculum - we allow the kids a lot of choice in the way in which they demonstrate their understanding and mastery of the material, discussing their progress with them regularly and responding to their work with feedback and opportunity rather than with judgement.

In other words, we're preparing them for the real world far better than chasing points for a grade ever could.

Friday, July 22, 2016

Evidence-Based Assessments: The Checkpoints

So what about the tests?  Surely you still had tests and didn't just rely on stuff the kids did at home and put into their portfolios?

We did, along with all of the review that comes along with it. However, there were a couple important differences:
  • We refer to the tests as checkpoints, because they aren't "one and done". Because tests have worked this way for years, the word "test" carries the implication that if the student doesn't do well on it, there's nothing they can do about it.  By using the word "checkpoint", we intend to imply that we are just getting an idea of where the student currently is with the material and what we need to do moving forward.  In other words, we treated the checkpoints in a very formative way rather than in a summative way.
  • The opportunity to make up for a poor performance on the checkpoint was mentioned from the beginning.  The kids knew from the beginning of the semester that an individual day would not define their grade.
I have heard two main arguments against allowing students to redo anything in general, and tests in particular.

First, that making "that many" different versions of one test would be difficult to impossible, let alone trying to do so for every test.  We found a simple way around this: have the students create their own make-up exercises. This was done topic by topic on the checkpoints, so putting a percentage or letter grade on the checkpoint didn't make sense.  Instead, it was about giving the kids feedback about which things they showed they understood and which they didn't - and why.  Just as I described in the previous post, the exercises they created needed to be approved by us so that they weren't wasting their time solving an exercise that wasn't going to be at an appropriate level (either too easy or too difficult).  After the checkpoint, choosing an appropriate exercise was easier for the students, since they had now seen an example.  Of course, the difficulty with this was trying to get the kids to do more than just change the numbers.  On the other hand, the new exercises we would have come up with would probably have been at least close to the "change the numbers" type, so while certainly not our favorite type of exercise for the kids to create, it wasn't the worst thing that could happen.  Once the exercise was approved, the feedback process for the portfolios took over, with an emphasis on having the students explain their work.  Once it became clear that the student understood the material, the folder in the portfolio was marked with "meeting expectations". However, if on a subsequent checkpoint it was clear that the student was now struggling with a topic that had been previously marked as "meeting expectations", then the student needed to create and complete a new exercise.

The second objection to allowing students to work until they show they understand the material is that "the real world doesn't work that way".  With all due respect, yes, it does.  To cite a specific example, the evaluation process I go through as a teacher is not about a one-time test. Instead, it's about an ongoing conversation between me and the administrator doing the evaluations. Yes, it includes in-class observations - "tests" - but the observations don't get an A, B, or whatever. Instead, the observations give us a common experience on which to base our conversation, looking for strengths and opportunities for improvement.  In other words, the evaluation process I go through with an administrator is strikingly similar to the checkpoint and redo process I go through with my students.  In fact, the only common "real world" thing I know of that works in a one-and-done way after high school is college. Everything else is much more dialogue and improvement driven.

What we found through this process was that the students were more comfortable both day-to-day and on the days of the checkpoints.  And despite the lack of one-and-done opportunities throughout the semester, the algebra 1 kids did well on the common assessment we give for our final exam.  In fact, they did better on the second semester exam, after a semester of portfolios, than they did on the first semester exam which didn't have them.

So the final reflection on this process will be on how we determined the grade that was placed on the transcript - which I'll do in the next update.

Thursday, June 9, 2016

Evidence-Based Assessment: The Portfolio

One of the first steps in making the transition to evidence-based assessment is letting go of the idea that tests are the gold standard when it comes to determining whether or not the student really understands the material.  As teachers, we know this already.  How many times have you had a student who is able to explain everything in class, answers all of the questions, and clearly understands the material completely...and then bombs the test?  If the tests and quizzes are the main contributors to a semester grade - as they usually are - then this student ends up with a much lower grade than they deserve.  They understand the material better than the grade shows, and you know it.

Traditional assessment focuses on answering the question "Has the student shown that they know the material in a way I have prescribed?" Evidence-based assessment focuses, instead, on answering the question: "Does the student know the material?"  The big difference is that it is up to the teacher and the student - instead of the teacher alone - to come up with ways by which the student can demonstrate their understanding.  Yes, this is intentionally vague.  In removing the restriction of "you can only show me you understand the material on the tests", I didn't want to indirectly place another restriction in the way.  The only requirement was: "Show me you understand."

To that end, here are the directions that were given to the students on the syllabus at the beginning of the semester:
"Your semester grade will strictly reflect your ability to communicate your understanding of the material, both verbally and in writing.  Throughout the semester we will gather evidence of your learning, and based on the evidence we will determine the letter grade that will be placed on your transcript.   The evidence I will gather will be my observations of the daily discussions, the checkpoints, and the final exam.  The evidence you gather and present can take on any form you wish; for example: second-chance projects (similar to those you did on Google Docs after the checkpoints last semester), presentations of discussion exercises in class, leading the discussion of a review exercise as we prepare for a checkpoint, and other pre-approved projects.  Please keep in mind that your grade will depend on your ability to communicate your understanding of the material both verbally and in writing, so the evidence you gather should likewise include both."
The only reason the other projects needed to be pre-approved was to make sure what the student wanted to do would actually demonstrate an understanding of the material. I didn't want them to waste their time on something that from the beginning wasn't going to meet the requirements.

So, what we needed was a way to capture all of this evidence in one place so that we could easily look at all of the evidence at the end of the semester.  The answer we found was an online portfolio and assessment tool called FreshGrade.  The strengths of the site included:
  • students had the ability to contribute to the portfolio anywhere, anytime
  • the folders in the portfolio were labeled with the skills we were working on, rather than with "unit 1", so the students were focused on the material
  • parents had access to the portfolio, and so had the ability to see the progress their child was making in real time
  • parents also had the ability to see whether or not their child had contributed to the portfolio
  • the site provided the ability to report progress without attaching a letter or a number to it ("meeting expectations", "approaching expectations", and "not meeting expectations" was what the students and parents saw regarding the student's progress)
  • apps for Android and Apple, with slightly different apps for students, parents, and teachers, so everyone had access anywhere, anytime
While there was a bit of a learning curve and there were some glitches with the site itself, it did what I needed it to do.  The students placed samples of their written work, links to a Google Doc that contained their work, videos of them explaining their work (both in class and at home), etc., in the portfolio, in the folder specific to the topic they were covering.  If the work did not match the topic, then I directed them to put the work in the correct folder and did not give any other feedback until this had been done.  If it was clear from the work that the student independently understood the material, then that category was finished.  Otherwise, we began the feedback loop, with me pointing out the strengths and weaknesses of the work the student had turned in, continuing until the work was up to par.  If too many loops were needed, then I had the student prepare another piece of evidence to demonstrate that they really did understand the material independently.

By the end of the semester, the students were asking questions such as, "Before I make my video, could you look this over to make sure it will work for demonstrating my understanding of how to solve a linear system by elimination?"  Seriously, freshmen in algebra 1 were asking me this.  And among the questions I didn't hear was, "What do I need to get on my final to get an A?"

The portfolio helped focus the students on the material, and not on the grades - just like I wanted it to.

Next time: Reflecting on the fact that yes, we still gave regular checkpoints (tests) in class.  

Monday, May 30, 2016

Evidence-Based Assessment: A Rough Outline

So it's been a few months.  Turns out transitioning to evidence-based assessment was a bit time-consuming.  Worth it, but time-consuming.

What is evidence-based assessment?  Well, rather than semester grades being based on the average of tests, quizzes, homework, etc., grades are based on the evidence - any evidence - that the students produce throughout the semester to demonstrate that they understand a concept or can perform a skill.  The evidence can be performance on a traditional test, a presentation in class, a mini-project...anything that demonstrates the student has learned the material.  And the letter that gets placed on the report card is the result of a conference between the student and the teacher at the end of the semester.  In short, the entire semester is a conversation between the student and the teacher, and during the final conversation a grade for the semester is discussed and determined.

Simple, right?

OK, not really.  There's a lot to say, so I'm going to break this up into several posts.  For this first installment, here's an outline of what we did:

(1) We set up an electronic portfolio for each student.  There was a "folder" for each standard, into which the students were to place any evidence they thought demonstrated their ability.  If they presented an exercise during an in-class discussion that demonstrated their independent understanding of the material, they took a picture and placed it in the portfolio.  If they made a short video at home in which they were able to show they understood the material, into the portfolio it went.  If they showed their understanding on a checkpoint, I adjusted the progress mark in the portfolio accordingly.

(2) We still gave the regular checkpoints.  If the checkpoint didn't go well, however, then it was up to the student to contribute to their portfolio to show that they had continued to work on the material and now had a solid understanding of it.  This allowed the students to "redo" the checkpoints without us having to come up with alternate versions of the checkpoint.

(3) We had the students do a reflection, which we called a "self-assessment", several times during the semester.  In these, we specifically asked the kids what "letter grade" they would give themselves to represent their current level of understanding of the material.  We did this one final time at the end of the semester, and followed it up with a one-on-one conference.  By the end of the conference, the student and I had decided upon a grade to place on their transcript.  If we were unable to agree upon a grade, the student had one last opportunity - the final exam - to demonstrate their abilities.  If we did agree upon a grade, then the point of the exam was to convince me that the portfolio we had just examined was, in fact, a true representation of their understanding.

In the next few posts I'll expand on each of the above, with information about what went right and what needs to change.  The short version is this: yes, it went well, and yes, we're doing it again next year.  The details are coming soon.

Friday, February 12, 2016

Behold the Power of Video

So let's say you've given the students free rein to demonstrate that they really do understand the material in whatever way they wish, whether it's a project, a pamphlet, a thoroughly-explained sample problem...whatever.  You're not really sure what the kids will come up with, because you haven't given them concrete examples or a whole lot of direction in terms of what the end-product is supposed to look like.  The only requirement is for them to convince you that they "get it".

And they come up with this. Behold the power of video.


Is there any doubt that this kid understands how to create a sine/cosine function to model a situation?  She struggled a bit when this was on the checkpoint in class, but one week later this video shows up.  And it has been obvious in the discussions since that she understands this really well now.

So, how is the whole "focus on the content and not on the grades thing" going?  Do you even have to ask?

Ask the kids to learn the material and to demonstrate that they have.  Then, prepare to be impressed.

Monday, January 18, 2016

This Is What I Meant

In the years before Harkness, I used to have the students write a weekly journal.  It was completely free-form, but was to focus on what we had covered during the week, commenting on what the students felt comfortable with and what they were still struggling with.  The purpose from my standpoint was to obtain a different point of view than I was able to get from my position lecturing in front of the classroom, and in this regard the journals worked really well, as I not only got information about the math but also about other aspects of the students’ lives, such as band practice, volleyball matches, or student government activities.  Most students followed the directions and briefly reflected on the week, but while I was getting additional information from the journals, they never really seemed to have the impact on the students I was intending.

Once I switched over to Harkness, the journals disappeared.  Since I was talking with the kids every day, I was already receiving the extra information the journals had previously provided, and since the impact the journals were having on the students was negligible, I decided not to waste their time in writing them (nor mine in reading them).

However, a lot of what I’ve been reading lately has emphasized student reflection, and one of the aspects of going gradeless is to have the students be part of the assessment process.  So I decided to have the students fill out a weekly form on Google Classroom, asking them to rank their understanding of the skills we are covering in the current unit (based on mastery, not A, B, C, …) and to give either evidence of their learning or their plan for the upcoming week to make progress toward learning the material.  In most cases the kids have been very honest with themselves, seeing in the evidence they have provided how thoroughly they understand a particular topic, and in the lack of evidence where they need to place a little more effort.

This is what I had in mind all along.  This is what I had wanted the journals to be: meaningful reflection that guides student learning.  Maybe it was the fact that I didn’t explicitly name which skills we were working on, taking it for granted that the kids knew.  Maybe it was the more free-form format.  Maybe it was that I didn’t explicitly ask the kids to “grade” themselves.  Whatever it was, I honestly don’t care right now.  The new format is working for the kids that are taking advantage of it.  Since it doesn’t “count for points” (nothing does in the gradeless format), some of the kids are not filling out the form.  However, the kids that are filling out the form seem to be more focused in class and are providing higher quality evidence in their online portfolios than those who aren’t.

I’m going to mention all of this to the kids in class tomorrow, so we’ll see if the participation increases next week.  But for now, I’m just glad to see that for those taking advantage of the opportunity, the reflections are working.