More MOOC than you can MOOC at! (or, World Music Week 4 and Some Thoughts on Peer Review)

Jeez, MOOC-mania is busting out all over!  I was going to begin this post by posting a ton of links to other sites and references to MOOCs that have cropped up in the last week, but there are just too many.  If this is the “year of the MOOC,” last week felt like the week of the articles about the year of the MOOC.  But two resources I’ll point to that also point to a bunch of other links:

  • Just this morning from The Chronicle of Higher Education comes “What You Need to Know About MOOCs,” which is both a summary and a timeline of most of the articles they’ve published about MOOCs and MOOC-related stuff all the way back to 2008.
  • Then there was the MOOC MOOC, a massive (though in this case, I think it was fewer than 1,000 people) open online course about MOOCs that lasted a week.  I unfortunately didn’t have time to actually participate– day job, class I’m teaching, World Music class I’m taking, etc.– but if you follow that link and then check out each of the day’s activities, you’ll see lots more info and links.

As is so often the case in education, what’s emerging for me is a simplistic and reductive view of “good MOOCs” versus “bad MOOCs.” And to give credit where credit is due, “Good MOOCs, Bad MOOCs” was the title of a pretty insightful column from Marc Bousquet.  Good MOOCs are characterized by the socialization and openness of learning (learning for the sake of learning is its own reward); they highlight how knowledge is constructed by participants; and they are more or less run by people out of the goodness of their hearts as experiments of one sort or another– like the MOOC MOOC.  I don’t think anyone in the Good MOOC world is thinking “we’re going to make a lot of money at this.”

Bad MOOCs are also social and open, but they present knowledge as a product: something apparently possessed by the elite (why else would Coursera focus on partnering with the most prestigious American universities?) and something that can be delivered from an expert to students, who can then somehow be tested or credentialed as having gained enough mastery to have that learning experience validated by others.  I don’t want to speculate too much about the sincerity of Coursera founders Daphne Koller and Andrew Ng, but it seems a given that if you’re going to raise $20+ million in venture capital, someone somewhere is thinking “we’re going to make a lot of money at this.”

It’s all more complicated than that of course, and I don’t want to rely too heavily on the caricature.  The good MOOC people ain’t all good and the bad MOOC people ain’t all bad.  But as is often the case in education when innovation and corporate values rub up against each other, the conflict is about how teaching ought to take place (and fundamentally the elimination of most faculty from the process) and how (and if!) we can reliably and ethically credential students on their experiences in MOOCs.  Good MOOCs are not (or at least not much of) a threat to the status quo, whereas bad MOOCs are.

Anyway, on to week 4 of World Music after the break.  Last week’s topic was Pygmy music, though the class really is beginning to feel less about music and more about the anthropology/sociology of different peoples and how that’s all tied up in geopolitics.  Professor Muller spent most of her lecturing time discussing the ways in which the Pygmy people have been misused and abused by colonizers up to the present day– even the word we use to describe this group of nomads in central Africa, “Pygmies,” is a slur that the people themselves don’t use.  But there was very little time spent on the musical traditions of these folks, and the only connection to a western tradition (which I think in some ways is what defines “World Music” in the first place) is the appropriation of some Pygmy-styled techniques in Herbie Hancock’s “Watermelon Man” (it’s the kind of whistling sound at the beginning) and in the Madonna song “Sanctuary.”  On the one hand, I totally understand why so much of the discussion and the class is about these non-musical issues, and I’m grateful for it too.  I didn’t know that much about the Pygmies before this.  On the other hand, I kind of thought that in a class called “World Music” there would be more examples and discussion of the music.

In part because the discussion of Pygmy music was itself a little thin, I spent more time last week puttering around the “general discussion” threads about the nature of online courses and MOOCs and about the peer review for the writing assignments.  One interesting anonymous comment in a discussion labeled “Coursera itself”: “if every PhD and grad student currently on Coursera logged off, I think there would only be five people left. And three of those would be Coursera-employed IT workers doing bug fixes.”

That’s an exaggeration to be sure, but I do see a lot of my fellow students identifying themselves as a professor/teacher/graduate student in such-and-such, and as I think about it, almost everyone else who has provided some kind of educational background has identified themselves as someone with some college education and/or professional experience.  In other words, I don’t get a sense of a lot of “college kids” analogous to the freshmen and sophomores who might be more typically enrolled in a “gen ed” lecture hall version of a class like this at U Penn or some other college.

There’s nothing wrong with that of course, and I think it is more than fair to say that there are a lot of “non-traditional” students eager for access to higher education.  That’s the reason why places like the University of Phoenix and Kaplan make so much money.  But I wonder about the appeal of MOOCs to students who are really interested in an alternative to more traditional educational experiences, especially those students who are currently attending community colleges or “opportunity granting” universities like EMU.  Probably not much?  And again, as I mentioned way back here, a lot of MOOC/open access learning that minimizes the role of the teacher and the extrinsic value of the credential seems to assume the all too rare person who is interested in learning for learning’s sake alone.

The other thing I spent more time with this week was the peer review process.  As I mentioned at the beginning of this class, there are short (though how short is debatable, as we’ll see in a second) writing assignments due every week in response to the lectures, music, and other assignments for the week.  In order to evaluate these writing assignments– because it would obviously be impossible for anything short of a large staff of teaching assistants to evaluate tens of thousands of short writing assignments– these are all peer reviewed.  As I complained about before, the peer review process started off pretty rough because there was no stated deadline, no information on how many peers we were supposed to review, etc.

But week four began with some interesting and notable changes to the peer review grading.  It was still the same 0 to 2 point scale and the same questions (more on that in a second too), but now we’ve been told how many we need to review– five– and this time we’re being encouraged to review all the essays before assigning points.  After I completed reviews of my peers, I was then asked to do a self-evaluation, which I thought was a smart idea.  At the end of week three, I got 9 out of 10; I’m still waiting on the results from week four.

Now, on the positive side, my experiences in reading the work of my peers suggest some very earnest writers and researchers.  I’ve read a lot more enthusiasm about the writing assignments than grumbling about them in the discussion forums.  I have yet to come across any writing sample that strikes me as particularly sloppy or as plagiarized– far from it, actually.  Of course, I’ve only read 10 responses so far, which is a pretty insignificant sample size.  I also think that the general theory that peers are capable of giving valuable and important feedback to each other is sound.  I even think this can work as part of assigning grades, and I experimented with this last year and wrote about it in some detail in describing what I called “Das Gradeinator.”

But beyond that, I think these peer assessments are really problematic.

First, I think the grading rubric itself (PDF) is pretty dubious on a number of different levels, and I’m not exactly a scholar of assessment.  For example, all five questions are scored as either 0, 1, or 2.  The problem here, though, is that zero generally means “nothing” rather than “low score,” so practically speaking, the scale is 1 or 2– no or yes.  And there’s no reason for this, no reason why it couldn’t be a typical one-to-five Likert scale.

A lot of the criteria on this rubric aren’t especially clear, or they could at least use a lot more explanation.  For example, one criterion is “Does it (meaning the writing) address the question?”  The possible answers are 0=”No, or barely;” 1=”Sort of;” and 2=”Yes.”  I’m skipping over some of the details here, but they aren’t particularly helpful in my view.  Or consider the criterion “strength of the argument”:

0=Unconvincing. The points made do not advance the argument; or the response is a purely subjective opinion.

1=Convincing, but pedestrian. The argument mostly hangs together, but it might be elementary, or perfunctory.

2=Convincing and nuanced. Points are clear, forceful, and– in the best cases– show creative thinking.

That’s just not enough explanation for me, and I am not sure most of my students have a clear understanding of the meaning of “convincing,” “pedestrian,” or “nuanced” as it applies to writing.  I’m not sure I understand these terms either.

And finally, it’s the same rubric for all of the assignments; again, not a very subtle instrument.

The second problem I have here is that there is no training or norming for the peer review process.  When I use peer review and peer assessment in the classes I teach, my students and I spend a fair amount of time talking about and reflecting on the process.  Granted, I’m teaching writing and not simply assigning writing assignments, but it seems to me that it’s awfully hard to give students a rubric for grading each other, never really talk about that rubric, and then expect it to work.

Because of the vagueness of the rubric and the lack of norming among students, the writings that folks are doing for these assignments are all over the place.  There’s a thread in the general discussion called “Ten rated Essays” where students are sharing the essays that received perfect 10 scores.  Several of the examples here are quite well done, but they are also between about 750 and 1500 words, considerably longer than the “two or three paragraphs” asked for by the prompts.  Besides the fact that it is an important writing skill to answer a prompt in two or three paragraphs (which I take to mean about 500 words at most), the longer writings are probably getting assessed more favorably simply because they are longer.

And last but not least, there’s no accountability for these reviews.  None.  When I am working with a group of 20 or so students and the peer review is not anonymous, we all start to get a sense of who is taking the review process seriously and who is not.  In fact, part of what I tried to do with “Das Gradeinator” (which I might be trying to revisit this fall) is to build in a “review of the reviewers” step:  that is, after the peer review process was completed, I asked students to review (grade) their reviewers on the effectiveness of their responses.  Not to pimp too hard for Eli (because I am good friends with a lot of those people), but that is essentially the promise and possibility of that software at a scale larger than a small class.  Would this work for 20,000 students?  I have no idea.

So I for one have zipped through the “World Music” peer reviews very quickly, with few comments and few concerns, since I’m reviewing people I likely have not come across in any way in this largely anonymous and oddly lonely class.  Why should I do more than that?  And for this week, I tried an experiment: I took a slipshod approach to my writing to see if it gets flagged by anyone in the peer review process.  Stay tuned.

Anyway, as I was quoted as saying in CHE and as I’ve said elsewhere, one thing that is very clear from these grand MOOC experiments is that while content is scalable, instruction is not.  Nowhere is this more clear than in the teaching of writing.  To the best of my knowledge, even large public universities that are used to teaching lecture hall classes with hundreds of students teach classes like first year writing in groups of 25 (or so) students or fewer.  This is because teaching writing– particularly grading writing– is A LOT of work, even when a teacher uses some kind of peer assessment as part of the process.

Frankly, I think they would be better off skipping the writing assignments entirely and instead encouraging students to vote for the most successful comments in the discussion forums.  Those writings are more contextually appropriate, are written in direct response to an audience, and exist for a clear purpose.

This entry was posted in Academia, MOOCs, Scholarship, Teaching, Technology.