A Comcast Customer Service Experience: Screwing Up in Reverse

We had been having problems with our Comcast/Xfinity/Whatever it’s called internet access for a while, and my calls to Comcast to check on the service were pretty futile (“Is your modem plugged in? Is your computer on? You should unplug your modem and then plug it back in. Okay, is your modem plugged in?” and repeat).

I finally got around to doing some “research” with the Google and, according to some web site I found (so obviously it must be true), our modem was no longer supported. And actually, that did have a ring of truth to it because that modem had to be at least six years old, maybe a lot older. So off to Best Buy and then back home with a new modem.

I knew that there was a reactivation process with the modem, so I was prepared for being on the phone with Comcast again. I made it through the electronic screening gauntlet and started talking to a nice human. “I need to set up a new modem,” I said. “I can help you with that,” she said. We were off to the races.

Things started turning bad almost immediately when the “tech” person asked me for the number on the back of the modem. “Which one? There are three of them”– that is, a couple of different device serial numbers of some sort and (just to skip ahead a bit, the one that Comcast actually needed) the Media Access Control (MAC) address. She asked for all of them, which took a while because a) it was a crappy phone connection and b) I’m pretty sure this person was not in the U.S. So there was a lot of me saying “D! I said D!” and her saying “Did you say B? or G?” But fine, eventually we worked it out and she had all the numbers she could ever need.

Then after about twenty minutes of numbers and waiting for something, my increasingly unfriendly and less competent customer service person said something like “oh, no!” in a low voice. “What?” I asked. “The system went down, I… I… I’m sorry this is taking so long,” she said. We were about 40 minutes in at this point. I’d had it.

“You know, this is really stupid. I don’t think you know what you’re doing here,” I said in my testy angry voice. She sighed, and then– click– hung up.

oh no she didn’t….

So I called right back, ran through the Comcast phone tree, got to a human. “How can I help you?” she asked. “I just got hung up on by another customer service person. That’s completely unacceptable and I would like to speak with a supervisor,” I said.

“Oh, I’m so sorry that happened sir, but I’m sure I can help you with–”

“I JUST GOT HUNG UP ON BY ANOTHER CUSTOMER SERVICE PERSON. THAT IS COMPLETELY UNACCEPTABLE AND I WOULD LIKE TO SPEAK WITH A SUPERVISOR!”  I said a bit more forcefully.

That worked. I got on with a supervisor (or at least someone who said he was a supervisor) who got the modem running. But even better: the supervisor dude apologized and completely jacked up our service for all the trouble. So now, we’ve got (for the next year at least) HBO, Showtime, a bunch of channels I’m sure we’ll never watch, and some higher speed of internet access. There must be some kind of checkbox on a service screen at Comcast that he clicked to give us everything.

So the moral of the story:

  • If you get a new modem for your Comcast internet set-up, plan on spending the better part of an afternoon to get it done.
  • Ask for the supervisor, especially if they hang up on you.
  • And hey, Comcast supervisor dude: good job of turning this into a positive.

In defense of machine grading

In defense of machine grading?!?! Well, no, not really. But I thought I’d start a post with a title like that. You know, provocative.

There has been a bit of a ruckus on WPA-L for a while now in support of a petition against machine grading and for human readers at the web site humanreaders.org, and I of course agree with the general premise of what is being presented on that site. Machine grading software can’t recognize things like a sense of humor or irony, it tends to favor text length over conciseness, it is fairly easy to circumvent with gibberish kinds of writing, it doesn’t work in real-world settings, it fuels high-stakes testing, etc., etc., etc. I get all that.

We should keep pushing back against machine grading for all of these reasons and more. Automated testing furthers the interests of the Edu-businesses selling this software and does not help students or teachers, at least not yet. I’m against it, I really am.

However:

  • It seems to me that we’re not really talking about grading per se but about teaching, and the problem is that writing pedagogy probably doesn’t work when the assessment/grading part of things is completely separated from the teaching part of things. This is one of the differences between assigning writing and teaching writing.
  • There’s a bit of a catch-22 going on here. Part of the problem was that writing teachers complained (rightly so, I might add) about big standardized tests of various sorts not having writing components. So writing was added to a lot of these tests. However, the only way to assess the thousands of texts generated through this testing is with specifically trained readers (see my next point) or with computer programs. So we can skip the writing altogether on these tests, or we can accept a far-from-perfect grading mechanism.
  • I’ve participated in various holistic/group grading sessions before (though it’s been a long time), which is how they used to do this sort of thing before the software solutions. The way I recall it working was dozens and dozens of us were trained to assign certain ratings for essays based on a very specific rubric.  We were, in effect, programmed, and there was no leeway to deviate from the guidelines.  So I guess what I’m getting at is in these large group assessment circumstances, what’s the difference if it’s a machine or a person?
  • This software doesn’t work that well yet, especially in uncontrolled circumstances: that is, grading software is about as accurate as humans with these standardized prompt responses written in specific testing situations, but it doesn’t work well at all as an off-the-shelf rating solution for just any chunk of writing that students write for classes or that writers write for some other reason. But the key word in that last sentence is yet, because this software has gotten (and is getting) a lot better. So what happens when it gets as good as a human reader (or at least good enough)? Will we accept the role of this evaluation software much in the same way we now all accept spell checking in word processors? (And by the way, I am old enough to remember resistance among English teacher-types to that, too– not as strong as the resistance to machine grading, but still).
  • As a teacher, my least favorite part of teaching is grading. I do not think that I am alone in that sentiment. So while I would not want to outsource my grading to someone else or to a machine (because again, I teach writing, I don’t just assign writing), I would not be against a machine that helps make grading easier. So what if a computer program provided feedback on a chunk of student writing automatically, and then I as the teacher followed behind those machine comments, deleting ones I thought were wrong or unnecessary, expanding on others I thought were useful? What if a machine printed out a report that a student writer and I could discuss in a conference? And from a WPA point of view, what if this machine helped me provide professional development support to GAs and part-timers in their commenting on students’ work?

A few miscellaneous comments on online teaching, what I learned about eCollege (again), and MOOCs

I’ve been pretty crazy-busy this semester because I took on too much and because there were things I could not refuse. So the blog has been pretty neglected lately, mostly because I’ve been thinking and writing about online stuff and MOOCs. (And now I’m coming back to this blog to procrastinate a bit on getting done with the crazy-busy semester.)

In no super-specific order:

  • I have been working on (and I think it’s done) my contribution to a “symposium” on MOOCs that will be in College Composition and Communication, I think in January. It’s about my experiences specifically with the writing assignments in “Listening to World Music” and the spectacular ways that they failed. If you are someone who has read my entries about MOOCs as of late, you probably have a sense of what’s going to be in that relatively short piece. In any event, I really appreciate the opportunity to participate, and it just goes to show you that sometimes blogging about stuff can pay off.
  • I was in a meeting just the other day where the topic of online teaching came up, and some of the folks complaining about it– literature colleagues (I know that’s shocking!)– said online classes were obviously not as good as face to face classes. “So, are you saying that the classes I teach online aren’t any good?” I asked. No-no-no, we don’t mean you, they quickly said, but yes, that is what they meant. What I find continually most annoying about this critique is that it inevitably comes from people who have had no experience with online teaching. I mean none, and it also usually comes from people who don’t have a whole lot of experience with or connection to this whole new-fangled Internets thing. So part of what I said in this meeting was “Look, before you argue that online teaching can’t be as good as face to face teaching, go out and take an online class. Before that, your pronouncements about what online classes are like are a little like me telling you what Antarctica is like even though I’ve never been there. I mean, I know it’s cold, but so what?”
  • MOOCs are pretty much the same way, which is why I spent the time I did in “Listening to World Music.” I wanted to see first-hand what these things were like, and since I am unlikely to teach one anytime soon, I experienced one as a student and wrote lots and lots about it. A lot of what I’ve been reading lately about MOOCs, though (frankly, including some of what I am linking to/talking about in this post), seems to be coming from folks making educated guesses or having knee-jerk reactions.
  • MOOCs have had the advantage of raising the profile of online teaching as a “real” environment for learning. But even though the likes of Daphne Koller and Peter Norvig think they “invented” online education with their MOOCs, the fact of the matter is students have been taking classes online at real universities– particularly regional ones like EMU– for over a decade now. Something like a third of all college students in the U.S. have taken at least one online class. We know a lot about what works and what doesn’t. Which brings me to my next point….
  • I don’t think the discussion should be about online classes being as “good” as face-to-face classes or even whether or not online classes “work.” (See “Do Online Classes Suck?” by Alex Halavais on this point). We’ve all seen bad teaching in the best of cozy face-to-face classroom settings, so the idea that that format for teaching is inherently “better” than online teaching seems a little dubious to me. Rather, I think the issue is what the trade-offs of these different formats are, how teachers adjust their pedagogy to best fit the situation, and what we know about the best fit for the subject being taught. One of the trade-offs for teaching classes in a large lecture format is there is not a lot of opportunity for discussion or for student assessment in a format other than an easily graded test. One of the trade-offs for teaching first year writing in small discussion sections is it is prohibitively expensive to staff all of those sections with equally great and experienced professors (let alone great and experienced non-tenure-track faculty), so there can be pretty significant differences between different sections of the same course– thus the point of writing program administration.
  • One of the differences between how my writing colleagues think about online teaching and how my literature colleagues think about it is at what level it is most appropriate. Folks in literature have been somewhat okay with online versions of gen-ed classes but not for classes in the major or at the graduate level. We have the opposite take: we have come to believe that online (and hybrid) format classes need to be a part of the mix for our undergraduate and graduate programs, but we want our students in first year writing to take the class in person and on campus. That might change– I can especially imagine a scenario where we offer sections of first year writing in a hybrid format– but it isn’t going to be changing soon, largely because of the nature of those classes.  Students in first year writing typically need to learn some of the habits that will help them succeed in college: showing up, meeting schedules, learning how to become more self-disciplined, etc. Which leads me to my next rambling point:
  • Who thinks that MOOCs will work in “remedial” college courses? I personally find the term “remedial” both problematic and offensive, not unlike a well-intentioned and ill-informed person referring to someone of Chinese descent as “Oriental,” but I don’t want to go into that for now. The Gates foundation has given out a bunch of grants for creating “developmental” MOOCs– including courses in first year writing being developed by folks at Duke, Georgia Tech, Mt. San Jacinto College, and Ohio State. Each of these is using Coursera as a delivery platform. I’ll be very curious to see how this works out, but based on what I know about online teaching, MOOCs, and first year writing, I think this is doomed.
  • In the 24 or so years I’ve been teaching first year writing, I think it’s fair to say that the vast majority of students I’ve had in that class did not want to take it, and some of my students really really didn’t want to take it. Students take first year writing because it is a universal requirement (insert arguments a la Crowley et al as to why that is a bad idea here if you feel so inclined), and this is quite a bit different than the “Edu-tainment” appeal of MOOCs so far. We have decades of evidence on how to best help students who are struggling with subjects like writing, and all of that evidence suggests that these students need a lot of personal attention of the sort not afforded in a class of thousands powered by freeze-dried/pre-recorded videos presented in a “stand and deliver” lecture format. The drop-out rate for Coursera MOOCs is already 90%; how much worse will it be in these courses?
  • Frankly, I think the folks working on MOOCs might have it backwards. Maybe they shouldn’t be replacing introductory or developmental college courses, the kinds of classes populated by young, inexperienced, and not particularly motivated students. Maybe MOOCs should replace upper-level undergraduate or graduate courses, the kinds of classes populated by older, experienced, savvy, and highly motivated students.
  • And once again, I discovered this semester in my own teaching that the content/learning management system matters. I have a chapter called “Blogs as an Alternative to Course Management Systems: Public, Interactive Teaching with a Round Peg in a Square Hole” that is in a book (that’s supposed to be coming out any day now) called Designing Web-Based Applications for 21st Century Writing Classrooms. The basic point of my chapter is to explain the hows/whys/pros/cons of using WordPress as an alternative to institutional CMSs. Despite the fact that I wrote this piece and despite the fact that I’ve used my own installations of WordPress as my primary platform for teaching online for years, I decided for some reason to give EMU’s CMS (eCollege) another try to host the entire class. Not a great idea. The short version is that eCollege works fine to host the grade book and to host content in a series of units where content is delivered, discussed, and tested. It doesn’t work well when a course is an on-going discussion or when it is something that exists in relation with the rest of the world– e.g., not behind a firewall. So what I found most frustrating was there was no narrative to the class, no place where it was easy to post an update about something I just came across that I thought would be useful to share with everyone. Long story short, I’m going back to some kind of blog space for English 516 this winter term, which is also going to be online.
  • Clay Shirky wrote an interesting blog entry, “Napster, Udacity, and the Academy,” and Jeff Rice had an interesting response (and he also pointed to this good Inside Higher Ed rebuttal). Of course, “unbundling” the college degree is not something that is new, though it might appear to be new to Shirky, who went to Yale and who teaches (once in a while, at least) at places like NYU. I have lots and LOTS of students at EMU who have credits from two or three other institutions on their transcripts, and there are lots of EMU students who are simultaneously enrolled at Washtenaw Community College or another school in the area.
  • One place where I agree with Shirky is that the point of comparison for what works (or doesn’t) in higher ed should not be Harvard or Yale; that said, one of the major concerns I have about MOOCs (and actually online education in general) is that they simply reify the already existing (albeit largely unspoken) hierarchy. I think this is basically what Nigel Thrift is saying in The Chronicle of Higher Education and what Ian Bogost is saying here. The analogy in those last two pieces is to restaurants, but there’s no need for an analogy when we can make an actual comparison. There are thousands of colleges and universities on this continent that award bachelor’s degrees in some kind of humanities– English, let’s say. As an initial qualification for some kind of want ad– “bachelor’s degree required”– these thousands of different institutions are all the same. But we all know that a degree from Harvard is worth more in the marketplace than a degree from the University of Michigan, which is worth more than one from Michigan State, which is worth more than one from EMU, which is worth more than one from the University of Phoenix. It’s been that way for a long, long time. What I think MOOCs will do is simply add another, lower rung to that ladder.
  • In any event, I’m probably going to be signing up for “E-Learning and Digital Cultures,” as are (apparently) a lot of people in tech-rhet and the wpa-l mailing list. I’m thinking about making my grad students in English 516: Computers and Writing, Theory and Practice take this course as part of my course. These people seem pretty sharp: I like this manifesto, and I like this article quite a bit. If nothing else, we’ll be reading/thinking a lot more about MOOCs and online teaching while in an online class, which ought to be meta-meta.

Five ways edX can help “the little people:” you know, community colleges, etc.

I still have a “what’s good about MOOCs” and/or “MOOCs are textbooks” post in me, but I wanted to post briefly about an article from The Chronicle of Higher Education, “5 Ways That edX Could Change Education,” that came out a few days ago. It’s mostly in the vein of articles about the great potential of MOOCs, this time from the comparatively conservative and slow-paced edX group.

But this part annoyed me. After discussing the success of bringing a small MIT-like class and lab to Mongolia for three months, there’s this:

EdX is now preparing a bigger experiment that is expected to test the flipped-classroom model at a community college, combining MOOC content with campus instruction. Two-year colleges have struggled with insufficient funds and large demand; they also have “trouble attracting top talent and teachers,” says Anant Agarwal, who taught the circuits class and is president of edX. The question is how MOOC’s might help community colleges, and how the courses would have to change to work for their students.

“MOOC’s have yet to prove their value from an educational perspective,” says Josh Jarrett, of the Bill & Melinda Gates Foundation, which backs the community-college project. “We currently know very little about how much learning is happening within MOOC’s, particularly for novice learners.”

Why does this annoy me?

Let’s just assume for a moment that the initial claims were true, that one of the problems two-year colleges face is insufficient funds– this despite the fact that many community colleges have been better off than four-year schools under both Obama and Bush II. First off, I might have a bit of a chip on my shoulder here since I teach at an “opportunity granting” university to begin with, but I don’t think the troubles CCs and schools like EMU are having stem from an inability to attract top talent and teachers. In fact, I don’t know what evidence there is for that assumption at all, other than “we all just know” that the talent and teachers at places like MIT and Harvard have to be the best and would obviously be welcomed at CCs and lesser colleges and universities.

Second,  if Agarwal really wanted to replicate the success of his Mongolia experiment elsewhere, he (and edX) would send some “talent and teachers” and lab equipment into some inner city school districts in Boston or Philadelphia or Detroit or something and see what happens. Using this experience as a justification for an online class involving thousands of students just doesn’t make any sense.

And third, there’s this point from Josh Jarrett, that “MOOCs have yet to prove their value” and that we don’t know how MOOCs can help “novice learners.” Everything else I’ve read about MOOCs suggests that the drop-out rate is extremely high, which I think is pretty good evidence MOOCs are not well-suited for novice, ill-prepared, and otherwise disenfranchised students. But beyond that, just how many “novice learners” have these MIT and Harvard people ever dealt with in any classroom setting, let alone an online one?

I think it might be useful for edX and their ilk to look at this from the other direction. Instead of claiming that they must know best (since they are “the best”), why not start with the premise that community colleges and regional universities might have something to offer the world about MOOCs and working with novice learners? After all, CCs are the places that actually have taught these students for decades, so they might know a little something about how to do it. Further, it is the CCs and regional universities in this country that (other than proprietary schools) have done most of the online teaching and they’ve been most successful at holding down costs.

I mean, I realize the folks at MIT and Harvard (and Stanford, Princeton, U of M, etc., etc.) are really smart and I presume they’re well-intentioned. But isn’t it more than arrogant for these folks– who have almost no previous experience in online teaching and who are charging students $30,000 to $50,000 a year in tuition– to come into CCs and tell them they can solve CCs’ problems with free online classes?

“What did we learn here and what’s it worth to you?” The end of World Music, part 2

I probably have only two or three more posts in me about MOOCs generally and World Music in particular. This is one that I’ve been working on off and on while the course was wrapping up.  I don’t know if it’s really done, but I decided to go ahead and post it because the World Music/Coursera people sent around a survey about the course.  Here’s the link to it; I’m not sure if just anyone can get to it or not.  Most telling to me was the last question:  “If Penn hosted a World Music Extension experience based on the priorities you selected, what rate (US $) would you be willing to pay to enroll?”  The answer space was a sliding scale going from $0 to $500.  This email also said “we are working on grades; and the certificate system, which comes from Coursera, is currently being revised, so information about certificates will probably come later next week.”  So wish me luck on that.

So, while I wait to find out if I’m “certifiable” from this class or not, I contemplate the basic “what did we learn here” questions of this experience. What exactly was this? What would I pay for this? And is this the future of education? I’m tempted to just point everyone to the last lines from the Coen Brothers’ black comedy Burn After Reading and leave it at that.

But I will add more for now and I am hoping to write more about this and other MOOC-iness later.

The first thing I’m still wrestling with is just what exactly a MOOC is and what is it for.  I can tell you what it’s not: I don’t think this version of a MOOC is “educational” in the sense of having all three of these components:

  • Learning, or the opportunity for people to learn.
  • Teaching, which is the active involvement of a person(s) who is in some sense an expert in the subject and process.
  • Credentialing, which is some sort of evaluation process that others acknowledge and, explicitly or implicitly, has merit and value.

I blogged in more detail about what I mean by all this here back in April.  Some may think I’m setting the “educational” bar too high, but I’m trying to get away from the way the word is casually tossed around, especially in the press reports that have suggested that MOOCs are the next big thing.

Okay, learning:  yes, World Music was clearly a “learning opportunity,” but so what? Learning is really the easiest part of education because learning opportunities are everywhere; almost any content provides it. I learn a lot from watching cooking shows or DIY home improvement shows on TV, not to mention listening to NPR or browsing web sites or even reading those old world content management systems, books.  The World Music MOOC provided learning for learning’s sake/learning as “infotainment,” and it’s a resource for the motivated self-learner/student of life, which is fine. Creating content that people can learn from is easy and it scales well. So yes, there was learning– or at least the potential of learning.

How about teaching? Not really. All the lectures and graduate student discussions were recorded months ago, and there was minimal interaction from Professor Muller and her grad assistants in the class. Teaching, as opposed to content, requires some give-and-take with an expert who has enough experience with that content to, at a minimum, guide the student interaction and make some judgment of student success. This teacher/expert presumably is a human, though, as I was discussing with a colleague the other day, maybe a teacher could theoretically be a machine with enough artificial intelligence to anticipate and respond to questions from the student. And to the extent that students can teach each other (and I think that’s limited), Coursera didn’t work because the discussion forums were almost useless.

And just to head this off at the pass: yes, it is possible for one to “teach one’s self” how to do something just from the content, but as I wrote about back in May playing off of some posts from Aaron Barlow, few of us are “true autodidacts,” self-motivated or self-disciplined enough to do this effectively. Most people who begin something as “self-taught” eventually seek the help of some teacher or other expert. Also, I think there’s a limit to what one can teach oneself: learning something procedural like computer code or how to juggle (I taught myself to juggle when I was in middle school) probably lends itself to self-instruction more than learning about a more abstract concept, like “World Music,” or learning something procedural but with a high degree of difficulty and complexity, like surgery.

So to continue the cooking show analogy: sure, I can learn a lot about how to make Linguine Con le Vongole and Penne Puttanesca by watching Mario Batali on Molto Mario, but that’s different from Mario actually teaching me to make this stuff– answering my questions about measurements, checking on my work, offering me pointers, etc. And by the way, I think this is true even with this segment of the show, where Mario does something closer to teaching than happened in World Music in that he actually interacts with people who are there while he’s cooking, answering some questions and explaining some finer points of technique. But he’s teaching them, not me.

Again, content scales easily and teaching doesn’t, which is why education is still expensive.

And just to repeat a recurring theme: specifically with “World Music,” the production values of the video lectures and graduate student talks in this class were piss-poor. This whole experience could have been dramatically improved if there had been even a tiny bit more time and thought put into how the course should be presented. I think the usefulness of Khan Academy is highly over-rated, but at least this guy can give a presentation. These videos and a few other really well-done instructional sites (like Instructables or Codecademy) can come pretty close to teaching, particularly when dealing with very procedural instruction. But it still ain’t teaching, and Sal Khan has made it clear in a number of places that he sees his materials as a supplement for actual teaching and not a replacement for it.

I think MOOCs could provide something closer to what I mean by teaching if there were less “freeze-dried” lecture content and a lot more interaction between the professor and graduate assistants and students. I think this is possible because even though all the hype around MOOCs has focused on the large number of students who sign up for these courses, the real number to focus on is the number of students who are active in the class. So in the case of World Music, we were never really talking about 30,000 students, but rather 3,000. I suspect that ratio is pretty consistent across these different MOOCs.

Of course, even if the class were “only” 3,000 students, effective teaching would take more than one professor and one or two GAs. In other words, you still have the problem of scale, again bringing us back to the reason why universities have lecture classes in the hundreds and not thousands and why universities employ graduate assistants and other part-time labor to supplement (well, in many ways surpass, but that’s another story) the labor of faculty.

Credentialing? Clearly not there yet. The peer rating/review of short writing assignments failed for lots of reasons I’ve already covered and which I am hoping to write about in more detail later. But if I were to sum it up in a long sentence: because there was no instructor involvement, because there was no reason for students to take the process seriously, and because the peer rating instrument itself was so poor, the results of the process were meaningless. If the Coursera people came to me asking for advice on what to do about this, I’d tell them to abandon the short writing assignments entirely and to focus instead on measuring student writing and involvement in the class through the discussion forums. To me, that’s a much better way of judging engagement with the material, and it would be a lot easier for peers to rate. As it was, there was no accountability in those forums, and as a result the discussion was scattered; so to the extent that there was any assessment process for the class, it was all based on the heavily flawed peer rating of short writing assignments, the quizzes that popped up immediately after videos, and a final that repeated many of those quiz questions.

This missing credentialing piece is critical. Unless you believe that there have been tens of millions of dollars invested in Coursera et al for quasi-charitable reasons, the goal of these corporate MOOCs is to have them be worth something to consumers (both students and other stakeholders, notably employers), to have them “count” in the real world.

My guess is that Coursera is working on two angles to get over the credentialing problem. The long-term/long-con business plan is to convince the world that corporate MOOCs are in and of themselves valuable, thus bypassing entirely the whole college degree process.  Who needs a bachelor's degree–especially in jobs where a college degree might not be needed at all (e.g., sales) or fields where people already succeed without degrees (e.g., computer coding, especially for various web/mobile apps)–when you can take a curriculum of sorts through these different platforms, and/or when you can demonstrate what it is you have learned/know via various MOOC certificates?

Given that a college degree has been the ticket to the white-collar class in the U.S. and beyond for quite some time, and that with higher education we're talking about international institutions that have been around for hundreds of years, I think this is a very long con indeed.  So the more realistic and shorter-term goal seems to be to get these classes to count as transfer credit in some fashion– general education, for example. As CHE reported, Colorado State is going to accept a Udacity Intro to Computer Science course on building a search engine as credit after the faculty reviewed it, and according to this CHE piece, edX is planning on offering proctored exams for some of their MOOCs; the exams will cost $89.  This could be a good deal for students if it worked, much like the CLEP test.  (By the way, one of my other MOOC-oriented posts is going to be on that Intro to Computer Science course, but I need to monkey around with that one a little more first.)

But there are at least three catches.

First, getting these classes to count as real college credit depends on the educational system that MOOCs are supposedly trying to disrupt.  This strikes me as awkward:  “We think we can provide a better education to students than traditional universities, so we want traditional universities to let students take our courses and then have you count them for credit at your university for a degree.” Rrrright.  Second, as every transfer student knows, the portability of credits from one institution to another varies wildly.  It’s all fine and good that Colorado State is contemplating taking that Udacity Intro to Computer Science course, but if there are only a handful of institutions that follow suit, that doesn’t do much good.

Third, I’m afraid that the real impact of corporate MOOCs on higher education– if they are even modestly successful at getting actual college credit granted for their courses at some universities– is that the explicit and implicit differences in the value of degrees from different types of universities will only grow.  As it is, there is a bias against online versus “real” classroom experiences, which is what is so maddening about Coursera and the elite institutions “discovering” online teaching in the first place, as if this hasn’t been going on at places like EMU for years.  Add MOOCs into the mix, and I think the already existing gap between elite and non-elite universities simply widens.

In fact, just to be overly cynical for a moment, this is perhaps the main reason why elite institutions have gotten into the corporate MOOC business in the first place.  It’s a good PR and “branding” move for them to offer up some content for free, recognizing that content in and of itself– especially without teaching and credentialing– isn’t worth anything to their bottom line.  At the same time, if corporate MOOCs take off, then that only rarifies all that much more the elite sort of instruction at the top of the heap, making the rich even richer.  Seems like a win-win for the likes of Stanford, UPenn, and U of M.

But I digress.

Even if I can’t exactly come up with an answer to the “what exactly was this” question (other than to say that it is not “educational” per se), I can come up with an answer to the “how much would you pay for this” question:  $0.  Judging from what I’ve seen in the discussion on the Facebook group for the class, I’d say that my answer is in the general price range of my fellow students (though one went as high as $25).  In fact, now that I think of it, I’d say that Coursera has more or less the same problem as Facebook: I know lots of people who are wildly enthusiastic and near obsessive users of Facebook, but I don’t know anyone who would actually pay for it.  Content (aka learning) in and of itself has very little value on the internet.

That said, if there was a test or certification that you could use to get general education credit transferable to a community college or a university as part of some other more traditional degree program, that’d probably be worth something.  I don’t know if that’s a multimillion dollar business or not, but I am awfully confident that it is not “the future of higher education:”  that is, maybe these kinds of MOOC, CLEP-like tests/classes will be a part of the system, but I find it extremely unlikely that they will replace the system.

And if MOOCs do replace higher education as we know it, well, then I want out– or rather, I suspect the administration will be kicking me out.

The end of the World Music MOOC (part 1)

Well, that’s it:  I’ve reached the end of Coursera’s World Music,  and it seems like over the last seven weeks the MOOC talk in CHE and InsideHigherEd and other places has done nothing but get even more out of hand.  I was going to catalog/index all of the articles I’ve seen one way or the other on MOOC-madness just this past month and then have a “grand statement” on what I think of MOOCs and all MOOC-iness.  But it was all proving too much for one post, so I’ll concentrate here on just the end/last week of World Music.

This last week of class was on the Buena Vista Social Club specifically and Cuban music generally, and I appreciated this as a close-out to the class.  I learned a few new things about Cuban music and it’s fun listening to it– I have several examples in my iTunes.  Of course, all the previous problems of the class were still there: the public-access-quality production of the videos, the unrehearsed lectures, the rambling grad student responses, and the generally thin content.  Largely absent by now were students in the conversation forums about this week’s materials specifically.  I would guesstimate there were a total of about 200 or fewer posts last week, which isn’t a lot for a class that supposedly has thousands of students.  The Facebook page for the class has been a lot more active lately, though it is largely made up of fans and world music enthusiasts more than students, if that makes sense.

There was one more peer review, based on the week six unit on the Kalahari Bushmen.  As I posted here last week, I rushed to complete the assignment and wrote something very short based not on the material (I was supposed to watch and respond to a movie but I skipped it) but rather on what I thought “the teacher would want.”  And guess what?  I got a 9 out of 10!  Here’s what my peer reviewers wrote:

student2 → Interesting general conclusion but lack sufficient information about your argument. You think Voter ID laws are racist, you may be right but “half” the USA politicians claim you are wrong. Give some reason for us to believe you, explain why or link to a website link or two that explains. Are you saying building a border fence was racist? Again, you may be right, but tell why we should believe you instead of the officials who say “no, it was for national security.” Is all use of art for money “selling out/buying in”? The essay is a good outline of points, but your arguments need development and support.
student3 → Well written, although you seem angry about it. Relax! The class is almost over.
student4 → You make a good point about the “fine line!” I’d like to have seen more of your personal reaction added to your essay. Do you think it is right/wrong, good/bad, etc. for each of the examples you gave. You showed how the question is not simple, and yet we still need to evaluate it critically and form an opinion!
student5 → great piece – no reference to the video clip that was required for that question, though.

I’ll return to the problems of Coursera peer review in another post that’s coming, but as a student, I am once again left with the feeling that it just doesn’t matter what I write.  Garbage in, garbage out.

Instead of a writing prompt, this week featured a 100-question multiple choice final.  I got a 73.  I didn’t exactly study for this test, and I am sure some simple review of the stuff we had done before would have helped my score quite a bit.  It was also a test that was fairly easy to cheat on, or at least to take advantage of the open book/open note format.  I had plenty of time to do some Google searches for some of the questions that were stumping me, and if I had thought about it ahead of time, I probably could have opened up parts of the course in another browser to look stuff up as I was taking the test.  Plus, if I’m understanding things right, it appears that I could even retake the exam if I wanted to, which seems like quite the advantage, especially if I had saved the original exam.

Anyway, I’m not quite sure what my grade means yet, though I am hoping I am going to get some kind of certificate I can print out and put up in my office.  Or maybe a t-shirt that says “I survived to the end of World Music” on it.

Just this morning, Coursera (or the folks at UPenn running the class, I’m not sure which) sent around an email with some interesting stats on the course.  Here’s what they sent (with a few comments from me along the way):

Users
Total Registered Users 36295
Active Users Last Week 3859

So, just over 10 percent of the students stuck it out until the end.  I don’t care how much a course does or doesn’t cost: a teacher with that kind of drop-out rate in anything approaching a “normal” setting would likely be looking for a job.

Video Lectures
Total Streaming Views 206621
Total Downloads 67884
# Unique users watching videos 22018

I’m not sure exactly what this means, but I think it means that about 22,000 unique people watched videos at some point, though obviously a lot fewer were still watching by the end.

Quizzes
Total quiz submissions 3503
# Unique users submitted (quiz ) 1671

Total video submissions 353999
# Unique users submitted (video ) 9822

These numbers seem kind of out of whack to me: how do they get 350,000 submissions from just shy of 10,000 submitters?  Maybe that’s 350,000 individual quiz answers?

Peer Assessments Total Submissions 8077
# Unique users who submitted 2731
Total Evaluations 45242
# Unique users who evaluated 2191

Again, less than 10 percent of the students who were “enrolled” in the class participated in the peer review process.

Discussion Forums
Total Threads 8045
Total Posts 17339
Total Comments 5419
Total Views 243711
Total Reputation Points 6947

Given the number of students who started, this isn’t a lot– an average of about 1.5 comments per student, even counting only the students who finished.  Just to give a point of comparison on the opposite side of the spectrum: a couple years ago, I taught an online graduate course called “Rhetoric of Science and Technology.”  It had (I think) 13 students and there were a total of 884 comments in the discussions, or (if you include me in the mix) an average of 60 or so comments per participant.  That’s the difference between an online class where the discussion matters and counts for the grade and one where it doesn’t– not to mention the difference between an online course of a manageable size where students are actually involved in the learning process and a MOOC.
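For what it’s worth, the arithmetic behind that comparison is simple enough to sketch as a quick back-of-the-envelope script (the figures are the ones quoted above; treating the final week’s “active users” as the finishers is my own assumption):

```python
# Back-of-the-envelope engagement math from the stats quoted above.

# World Music MOOC: total forum comments vs. students active in the final week
mooc_comments = 5419
mooc_finishers = 3859
mooc_rate = mooc_comments / mooc_finishers

# My 13-student online grad course: 884 total discussion comments,
# counting me as a 14th participant
grad_comments = 884
grad_participants = 14
grad_rate = grad_comments / grad_participants

print(f"MOOC: {mooc_rate:.1f} comments per finishing student")
print(f"Grad course: {grad_rate:.0f} comments per participant")
```

That works out to roughly 1.4 comments per finishing MOOC student versus 60-some per participant in the small class, which is the gap the paragraph above is pointing at.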

So for now, I’m left with two thoughts.  First, the reporting on the number of students enrolling in these MOOCs is pure hype and nearly meaningless.  As I mentioned last week, what is clearly happening here is that 30,000 (or so) people signed up for World Music the same way people sign up for lots of internet services: just to check it out.  It’s not just that they didn’t stick with it; they never intended to stick with it.

Second, I am just baffled as to why this attrition rate isn’t being described in the media as one of the reasons why MOOCs are a failure as a solution to the educational crisis.  EMU graduates only about 30-40% of its students within five years of starting their degrees, and this low graduation rate is considered a major part of the crisis in higher education; meanwhile, 90% of the students who started World Music dropped out (and there is no reason to believe these results are atypical), and yet Coursera is being trotted out as the solution to the higher education crisis.

WTF?

More MOOC summing up is coming, along with news (I hope) about a certificate or a t-shirt or something.

MOOC week 6, from thousands to hundreds (maybe)

First off, this past week of MOOCs in the news:

There’s “Learning From One Another” from Inside Higher Ed, which looks at Coursera’s MOOCs generally and at the peer review process in particular.  I’ll chat more about my own peer review experiences from this last week, but I think the approach that student J.R. Reddig (who is also a “61-year-old program director for a Virginia-based defense software contractor”) has taken to these peer reviews syncs with my experiences: “Mainly, Reddig said, he learned how to read past the spelling and grammar hiccups of non-English speakers and try to grade them based on their ideas. ‘I said, Well, O.K., you can’t apply an empiric standard to them,’ said Reddig. ‘These people attempted to follow a thought, and so give them a 10.'”  Very much a “shooting from the hip” approach to commenting, reviewing, and grading.

Then there’s this quote:

Daphne Koller, one of the co-founders of Coursera, says that the peer-grading experiment is still very much a work-in-progress. “We will undoubtedly learn a lot from the experiences of our instructors as they encounter this phenomenon, and then have a better sense of where exactly the tensions lie and how one might deal with them,” she says. “We also have some ideas of our own that we’ll throw in the mix and evaluate as we plan the next phase of this experiment.”

Which basically means “we’re making this shit up as we go along and we’ll see what sticks.”  A shame since there are academic fields/disciplines out there that have been working through strategies for peer review and writing instruction for a long long time.

The other thing in this article I found interesting is the information on the drop-out rates in these courses, which I will also get to in a moment in relation to World Music.  The class that Reddig is in, Internet History, Technology and Security, started with 45,000 registered students and, after one of the writing assignments, dropped down to 6,000.  The fantasy and science fiction class that I believe Laura Gibbs is taking dropped from about 39,000 to 8,000.

If these classes are anything like World Music, I don’t think people are dropping out because the class is “too rigorous,” though since there are a lot of people taking these classes who are not native English speakers, I am sure language problems are proving too much for many students.  Rather, I think there are two basic causes for the drop-out rate.  First, I think people are “dropping” these classes in the same way that folks sign up for some kind of service just to see what it’s like: the numbers that Coursera et al are reporting are grossly inflated by the “I’m just curious to see what this looks like so I’ll sign up, look around, and then never do this again” factor.  Which is to say that the majority of people signing up for these classes were never really interested in taking them in the first place.  Remember Second Life?  Tons of people (including me) signed up, played around with it for a while, thought it was kind of dumb, and then never went back.  Much like Second Life was over-hyped based on misleading numbers of users, so are these courses.

Second, I think a lot of people are dropping Coursera courses because they are disappointed in what’s being offered– at least there has been some commentary along those lines in World Music.  I think that’s a different phenomenon than “this course is too hard for me.”

Speaking of Coursera and their ongoing efforts of making it up as they go along:  they have spiffed up their web site a bit.  Students can set up profiles (I set one up and I would link to it here but I don’t know how) and they have a link for jobs at the start-up.  It would appear that most of their hiring is still focused on computer programmers of various sorts, though they are searching for “Course Operations Specialists” (which is an “interface” position between the “world-class instructors” and Coursera engineers) and “Community Managers,” which I think is kind of like people who patrol the class sites to make sure nothing bad is happening.  Following the trends of conventional higher education, it would appear that Coursera is going to continue to keep hiring the people who actually teach and provide the content for these courses on a contract and/or part-time basis.  It’d be interesting to find out how much they are paying people like World Music Professor Carol Muller.

The other article I thought I’d mention was from The Chronicle of Higher Education “U. of South Florida Professors Try ‘University of Reddit’ to Put Courses Online.” Apparently through the University of Reddit, just about anyone can teach or take a class online, and two folks at USF are jumping in to see how it works.  I’m not sure why the CHE focused on these two since there are dozens and dozens of courses on Reddit already, but there it is.

More about World Music:

Continue reading “MOOC week 6, from thousands to hundreds (maybe)”

Ah yes, the new honor code will fix everything

From The Chronicle of Higher Education, “Coursera Adds Honor-Code Prompt in Response to Reports of Plagiarism.”  To quote:

The step is a small one, but it was carried out with the start-up company’s signature swiftness. Students in Coursera’s courses must now renew their commitment to its academic honor code every time they submit an essay assignment for grading by peers.

Specifically, they must check a box next to this sentence: “In accordance with the Honor Code, I certify that my answers here are my own work, and that I have appropriately acknowledged all external sources (if any) that were used in this work.”

I noticed this in the World Music class this last week when I posted my writing about Aboriginal music, but I didn’t exactly give it a whole lot of thought.  Frankly, it reminded me a lot of all those “terms of service” agreements that we all check without reading.  Hopefully I haven’t agreed to some kind of sick HUMANCENTiPAD project.

Anyway, as I wrote before on this, I don’t think plagiarism is actually that big of a problem in these classes so far, and it is frankly low on my list for the problems of the writing assignments and the peer review process.  But hey, if it makes Coursera et al feel better that I check a box, sure.

MOOC Week five, and the peer review turns

I’m wrapping up week five of the World Music MOOC, and I have to say it’s starting to drag a little.  This week was about Australian Aboriginal music, though it was another week that had very little to do with music and more to do with the politics of oppression against indigenous peoples.  I understand the obvious relevance of this being part of a discussion of world music, but it’s all starting to feel more and more like I went to a music class and an anthropology/sociology class decided to barge in and take things over.

I continue to be less than blown away by the quality of the presentation of class materials.  Just a simple example of what I mean about the lectures: in the introduction to this week’s unit, Carol Muller gets the date of when Australia was first “discovered” by Cook mixed up– that is, she says 1788, which was the year the British set up a penal colony in Australia, and not 1770, which is when Cook first landed in Australia.  The video is interrupted and the correction is clumsily inserted, and there was even a quiz question about the error.  Now, it’s not a problem per se that Muller misspoke.  Lord knows I say lots of wrong stuff to my students.  But isn’t that a reason why these ought to be rehearsed and organized for the screen and not just a rehash of an in-class lecture?  Isn’t this one of the benefits of recorded materials in the first place?

So I’m kind of getting bored here.  If I weren’t doing this thing for other academic purposes and future writing projects, I’d probably “drop out.”  This brings me to this Chronicle of Higher Ed commentary from Kevin Carey, “The MOOC-Led Meritocracy.”  Carey argues that the enormous drop-out rate in MOOCs is not only not a problem, but rather it allows MOOCs to operate as a meritocracy.  A quote:

That meritocracy will serve as a powerful mechanism for signaling quality to an uncertain labor market. Traditional colleges rely mostly on generalized institutional reputations and, in a minority of cases, admissions selectivity to demonstrate what graduates know and can do. The opacity of most collegiate learning processes (see again, lack of standards) and the eroding force of grade inflation have left little other useful information.

MOOC credentials, by contrast, will signal achievement selectivity. Instead of running a tournament to decide who gets to take the class and very likely get an A-minus or A, they’re running tournaments to decide who did best in the class. That’s why people are already resorting to plagiarism in MOOC courses. That’s troublesome, although perhaps not distinctly so, given that the antiplagiarism software that will presumably be deployed in defense was developed in response to widespread cheating in traditional higher ed.

In a sense, this was what college was like when I was a student nearly 30 years ago.  Most people who were in college 20 or more years ago can recall some kind of moment where the high drop-out rate was touted as a sign of rigor and that the earned college degree separated you from those drop-outs.  This is the classic “look to your left, look to your right, because one of you won’t be here at the end of this first year” spiel.  I don’t recall hearing that speech directly, but the University of Iowa at the time did have a reputation for being a fairly easy school to get into but not that easy to graduate from.

This has all changed dramatically, and now one of the key markers of a successful and “good” university is its graduation rate.  There are lots of reasons for this change, but one of them is a direct response to cost: if you’re spending $40K plus a year to attend some quasi-fancy school (and I am aware that dollar figure is actually low for many fancy schools), you’d damn well better be able to graduate in a reasonable amount of time.

So in theory, Carey has a point:  who cares if the drop-out rates from MOOCs are super-high if that means that the best and brightest are able to make it through these free courses, separating them from the many drop-outs in their wake?  Maybe that could be something that employers could look at as a sign of success.  But right now in practice, there are no standards governing this meritocracy and I don’t think some crappy plagiarism software is going to make these problems go away.

And that brings me to peer review.

Continue reading “MOOC Week five, and the peer review turns”

More MOOC than you can MOOC at! (or, World Music Week 4 and Some Thoughts on Peer Review)

Jeez, MOOC-mania is busting out all over!  I was going to begin this post by posting a ton of links to other sites and references to MOOCs that have cropped up in the last week, but there are just too many.  If this is the “year of the MOOC,” last week felt like the week of the articles about the year of the MOOC.  But two resources I’ll point to that also point to a bunch of other links:

  • Just this morning from The Chronicle of Higher Education comes “What You Need to Know About MOOCs,” which is both a summary and a timeline of a lot of/most of the articles they’ve had about MOOC and MOOC-related stuff all the way back to 2008.
  • Then there was the MOOC MOOC, a massive (though in this case, I think it was less than 1000 people) open online course about MOOCs that lasted a week.  I unfortunately didn’t have time to actually participate– day job, class I’m teaching, World Music class I’m taking, etc.– but if you follow that link and then check out each of the day’s activities, you’ll see lots more info and links.

As is so often the case in education, what’s emerging for me is a simplistic and reductive view of “good MOOCs” versus “bad MOOCs.” And to give credit where credit is due, “Good MOOCs, Bad MOOCs” was the title of a pretty insightful column from Marc Bousquet.  Good MOOCs are characterized by the socialization and openness of learning (learning for the sake of learning is its own reward), they highlight how knowledge is constructed by participants, and they are more or less run by people out of the goodness of their hearts as experiments of one sort or another– like the MOOC MOOC.  I don’t think anyone in the Good MOOC world is thinking “we’re going to make a lot of money at this.”

Bad MOOCs are also social and open, but they present knowledge as a product apparently possessed by the elite (why else would Coursera focus on partnering with the most prestigious American universities?) but also as something that can be delivered from an expert to students, and ultimately those students can somehow be tested or credentialed as having gained enough mastery to have that learning experience validated by others.  I don’t want to speak too much about the sincerity of Coursera founders Daphne Koller and Andrew Ng, but it seems a given that if you’re going to raise $20+ million in venture capital, someone somewhere is thinking “we’re going to make a lot of money at this.”   

It’s all more complicated than that of course, and I don’t want to rely too heavily on the caricature.  The good MOOC people ain’t all good and the bad MOOC people ain’t all bad.  But as is often the case in education when innovation and corporate values rub up against each other, the conflict is about how teaching ought to take place (and fundamentally the elimination of most faculty from the process) and how (and if!) we can reliably and ethically credential students on their experiences in MOOCs.  Good MOOCs are not (or at least not much of) a threat to the status quo, whereas bad MOOCs are.

Anyway, on to week 4 of World Music after the break.  Last week’s topic was pygmy music, though it really is beginning to feel like the class is less about music and more about the anthropology/sociology of different peoples and how that’s all tied up in geopolitics.  Professor Muller spent most of her lecturing time discussing the ways in which the Pygmy people have been misused and abused by colonizers up until this day– even the word we use to describe this group of nomads in central Africa, “Pygmies,” is a slur that the people themselves don’t use.  But there was very little time spent on the musical traditions of these folks, and the only connection to a western tradition (which I think in some ways is what defines “World Music” in the first place) is the appropriation of some Pygmy-styled techniques in Herbie Hancock’s “Watermelon Man” (it’s the kind of whistling sound at the beginning) and in the Madonna song “Sanctuary.”  On the one hand, I totally understand why so much of the discussion and the class is about these non-musical issues, and I’m grateful for it too.  I didn’t know that much about the Pygmies before this.  On the other hand, I kind of thought that in a class called “World Music” there would be more examples and discussion of the music.

Continue reading “More MOOC than you can MOOC at! (or, World Music Week 4 and Some Thoughts on Peer Review)”