Das Gradeinator

I’ve been thinking about grading a lot lately– well, “lately” meaning the last year or so.  I had a few thoughts about grading in general here last year, and also here, where I reflected on a particularly unpleasant discussion with a student about a grade on a project.  At the end of last term, I had decided I was going to try an experiment in English 328, the “advanced writing” course of sorts I’ve taught at EMU dozens and dozens of times over the years.

Basically, what I did was this:

  • Each student would submit a draft without their name on it for peer review.  (I should mention that (a) I’m teaching online, so these students don’t necessarily know each other that well to begin with, and (b) I combined two online classes into one big group, which I probably wouldn’t do again, but that’s a slightly different story.)
  • For each student’s essay, I created a Google Docs survey with rubric/peer review questions, some very structured, some open-ended.  That’s 28 to 30 or so surveys, each an individual document.
  • Then I figured out who would review whom, striving to make it so that no two students reviewed each other– that is, student 1 reviewed students 2, 3, and 4; student 2 reviewed 5, 6, and 7; and so forth.  This was a math problem for me that took a while to resolve (one way to automate it is sketched just after this list).
  • Then I distributed the still anonymous essays along with corresponding Google Doc links to each member of the class for them to fill out.
  • Then, I collected those responses from the surveys, dumped the results into FileMaker, and then distributed them back to students in a readable fashion.
  • After students received feedback from their reviewers, I then gave them the chance (via a SurveyMonkey survey) to review their reviewers on a simple number scale.
  • And then, I more or less repeated the whole thing, but for the second time around, students assigned 100 out of the 150 points for the project as a grade (the other 50 points were tied to the “review of reviewers” and my own assessment).
  • I reviewed the survey results for each student, read through each essay myself, assigned my portion of the grade, made any other adjustments I thought necessary (for example, when students clearly didn’t follow the assessment instructions), and then passed all of this back to students.

You got all that?
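
About that reviewer-assignment math problem: here’s a minimal sketch in Python– my own illustration, not what I actually used– assuming each student reviews three peers.  A simple offset rotation guarantees no self-reviews and no reciprocal pairs, as long as the class is big enough.

```python
# A sketch of the reviewer-assignment step, assuming each student
# reviews three peers and no two students review each other.  The
# offset rotation (student i reviews i+1, i+2, i+3, wrapping around)
# rules out self-reviews and reciprocal pairs whenever the class has
# at least seven students.

def assign_reviewers(students, reviews_per_student=3):
    """Map each student to the peers they will review."""
    n = len(students)
    if n <= 2 * reviews_per_student:
        raise ValueError("class too small to avoid reciprocal reviews")
    return {
        students[i]: [students[(i + k) % n]
                      for k in range(1, reviews_per_student + 1)]
        for i in range(n)
    }

if __name__ == "__main__":
    roster = [f"student{i}" for i in range(1, 11)]  # ten anonymized students
    for reviewer, reviewees in assign_reviewers(roster).items():
        print(reviewer, "->", ", ".join(reviewees))
```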

That was the “Gradeinator v.1,” or at least what I was calling it.  A better name will need to be in place eventually for marketing purposes, I’m sure.  Right now, I’m working with what I am calling “Gradeinator v.2,” where I attempted to use one very large SurveyMonkey survey (hundreds and hundreds of questions, with lots of skip logic) to handle all of the reviews.  My theory was that this would be easier than Google Docs/Survey, but as I am going through and tabulating things now, I’m not convinced that is the case.  That might be more about my not knowing what I am doing than anything else, though; more on that in a bit.

Now, I suppose the first question any sensible person might have here is why in the heck would I do this in the first place?  It’s certainly something I’ve wondered about myself, believe me.

Basically, I was trying to do three things.  First– and I always knew this was a long-shot– I was trying to see if there was a way to save some time with grading.  Second, I wanted to make peer review a little more systematic, and, by adding the anonymous feature, I wanted to make it a little more “honest.”  My theory was that one of the reasons students sometimes give kind of lame peer review is that they don’t want to hurt people’s feelings, and that making things anonymous might make it more about the “writing” than the “writer.”

And third, I wanted to minimize my role as “the grader” and to empower the students as critics.  This is an advanced writing class after all, one where just about all of the students are hoping to be secondary school teachers or professional writers of some sort, and it seems worthwhile to me to try a different way to highlight the critique process.  What I’m getting at is that I wanted to find a way to emphasize my role as “facilitator,” while at the same time recognizing that the teacherly role of final decider never really goes away.

I’ve learned a lot already and I’m learning more; in no particular order:

  • On the whole, this process has a certain amount of “truthiness” for me.  I do think that the peer review is better through this more systematic “rubric” approach, and also when it’s anonymous, though these peer reviews are far from perfect.  My students tend to cautiously agree: they tend to see the advantages of anonymous peer review, but they also see problems with it, notably that it isn’t really possible to ask “follow-up” questions about particular points of critique.  Though I should also point out that a lot of my students are thrown off by the “weirdness” of all this, and I am sure that the course evaluations at the end of the class will demonstrate that not everyone was on board with this.  I will also be taking a poll for the next big peer-reviewed assignment on the process students want to follow, and the results of that poll might be telling.
  • I like the process of reading students’ assessments and then making my own assessment, because what happens most/much of the time is I am able to build constructively off of my students’ comments.  That makes me feel more “coach” and/or “team leader”-like, rather than the final judge/authority/teacher, if that makes sense.  I have many more opportunities to say “I agree with what your peers said about your essay” with this set-up.
  • But I have to emphasize most of the time and not all of the time, because even well-intentioned and earnest students have given some not-excellent feedback.  Even with a rubric and guidelines and my coaching/instruction and all the rest, much of the feedback has still been– to be blunt about it– irrelevant.  I’m talking about critiques of font choices, how the pages are numbered (or not), really petty (and debatable) comments on grammar, etc.  And then there were also a number of student reviews and assessments that were just lazy: comments along the lines of “good!” and “A”s given without anything remotely helpful.  The kind of reviews that are useless to students are also impossible for me to build off of in any meaningful way.  Though I know this is all part of the learning process, and it appears that the comments from Gradeinator v.2 are better.
  • Anyway, the next time I do this (if there is a “next time”), I’ll be assigning points to students on the quality of their assessments, either instead of or in addition to the quality of their reviews.  That’s the teacherly part of things, I suppose, but I kind of feel like some folks need to be held accountable for their less than stellar reviews.
  • The technical part of the process– setting up the surveys, dumping the data into things like FileMaker, on and on and on– is an enormous pain in the ass, far far more time consuming and tedious than I had anticipated.  It was a little easier to do this the second time around, but just a little easier.  Of course, the thing is grading itself is kind of tedious, so I can’t really say if this is more or less tedious than the more traditional sitting down with a stack of essays with a pen.  More time-consuming, but that’s slightly different.
  • A lot went wrong, and went wrong in unanticipated ways.  Some students filled out the wrong surveys.  Some students supplied letters instead of numbers for grades, and it’s hard to average the letter “B” in with two other numbers (a sketch of one fix for that follows this list).  And when students are late in handing in work or in getting the surveys done, well, that can mess up the works for others– that is, if a student doesn’t do the peer assessment they were supposed to do, then one of her or his peers doesn’t get the feedback/grade.
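
Since I mention it in that last bullet: here is a sketch of how those stray letter grades could be normalized before averaging.  The letter-to-number scale is my own assumption, not the one from the class, and the 100/50 split mirrors the point breakdown described earlier.

```python
# A sketch of normalizing mixed letter/number peer scores before
# averaging.  The letter-to-number mapping below is a hypothetical
# scale chosen for illustration.

LETTER_SCALE = {"A": 95, "B": 85, "C": 75, "D": 65, "E": 55}

def to_points(score):
    """Coerce a peer score to a number, translating letter grades."""
    text = str(score).strip().upper()
    if text in LETTER_SCALE:
        return LETTER_SCALE[text]
    return float(text)  # raises ValueError on anything unrecognizable

def project_grade(peer_scores, instructor_points):
    """Average the peer scores (out of 100), then add the 50-point
    portion tied to the review-of-reviewers and my own assessment."""
    numeric = [to_points(s) for s in peer_scores]
    peer_avg = sum(numeric) / len(numeric)
    return peer_avg + instructor_points

print(project_grade(["92", "B", 88], instructor_points=45))  # -> 133.33...
```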

I think the most useful, interesting, and/or frustrating part of this for me is that I’ve learned a lot about things like Google Docs and Excel, and I know even more clearly now that there is much I do not know about these things, and much I wish I knew about relational databases and programming.

I do not know if this is actually true, but it feels to me like this is a system that could be largely automated.  First, a student goes to a web site and uploads her document for review.  Then, prompted by a notification or some sort of due date, that student goes back to that web site, enters some identifying information, and is taken to the documents she needs to review.  She completes the reviews and submits them back to the system.  Then, when her peers have completed their reviews of her project, the student receives an email alert and a link to review those reviews.  And as an instructor, the idea is you can watch, participate, or otherwise intervene in the process.

If all this were possible– if this could be set up with some kind of content management system or other database program, and if it were easy enough to customize the rubric/review questions based on an assignment and goals and such– then it might both save a teacher some time and make for a richer teaching experience.  Maybe that’s a big “if,” maybe not.
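
To make that a little more concrete, here is a rough sketch of the data model such a system might be built around: tables for submissions, review assignments, and completed reviews.  All of the names here are hypothetical; a real implementation would surely look different.

```python
# A rough, hypothetical data model for an automated peer review
# system: who submitted what, who owes which reviews, and what the
# completed reviews said.

import sqlite3

schema = """
CREATE TABLE submissions (
    id INTEGER PRIMARY KEY,
    author TEXT NOT NULL,        -- hidden from reviewers
    file_path TEXT NOT NULL,
    due_date TEXT
);
CREATE TABLE assignments (
    submission_id INTEGER REFERENCES submissions(id),
    reviewer TEXT NOT NULL,      -- who owes this review
    completed INTEGER DEFAULT 0  -- flips to 1 when the review is filed
);
CREATE TABLE reviews (
    submission_id INTEGER REFERENCES submissions(id),
    reviewer TEXT NOT NULL,
    rubric_answers TEXT,         -- structured rubric responses
    comments TEXT,               -- open-ended feedback
    points INTEGER               -- the peer's share of the grade
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
# When every assignment row for a submission is marked completed, the
# system could email the author a link to read (and rate) the reviews.
```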

Anyone out there interested in a programming project?

3 thoughts on “Das Gradeinator”

  1. In theory, this sounds like a wonderful method of grading, especially for the students: you get multiple perspectives in the feedback, not just that of the professor, who may or may not have a bias toward the individual being graded. In addition, the students benefit from the process of reviewing their colleagues’ papers and seeing a wide variety of writing styles. Brilliant!
    Unfortunately, the whole thing hinges on the specious assumption that all students will follow directions, complete the work on time and participate fully in all aspects of the process.
    Now, if public flogging of errant students were more widely accepted on American campuses, this method would be most beneficial for everyone, with the possible exception of the prof. (due to the additional work required by him/her).

    I am interested in seeing how this all plays out. What is the likelihood of implementing a flogging program at EMU? Despite my lengthy discourses on the potential benefits, IPFW fails to take the issue seriously….

  2. The flogging hasn’t been necessary– yet!– but it is true that empowering students like this means holding them responsible, too. But I’ve been doing that for a while now. I had been having trouble getting students to get a draft in on time for peer review, so I changed the rules slightly and deducted a letter grade for work submitted late for peer review. Suddenly, students turn things in for peer review. It’s more of a stick approach than I would prefer, but it does help.

    And to the extent that students didn’t follow directions for the peer review process, most of that is my fault: my bad explanations of the instructions were as responsible as anything else for students not being able to follow them. But that’s what revision is about.

  3. This peer review process has already been automated in Moodle. I would suggest opening an account at one of the free Moodle hosts and exploring the peer review feature. Nearly all of what you did on your own is already built into Moodle. The moodle.org forums are a good place to discuss the process with other teachers who use peer review.
