As universities (including my own) announce covid-19 plans that include requiring all classes to finish out their terms online, I’m imagining an increasing number of college instructor- and faculty-types doing a Google search along the lines of “how to teach a course online.”
In case someone who has suddenly been asked to stop what they’ve been doing for decades (not to mention all semester) in order to shift everything online did a Google search and landed here, I thought I’d jot down a few bits of advice based on my experiences and research about online teaching.
This post is both notes on my research so far (for myself and anyone else who cares), and also a “teaser” for Corridors: the 2019 Great Lakes Writing and Rhetoric Conference. I’m looking forward to this year’s event for a couple of different reasons, including the fact that I’ve never been on campus at Oakland University.
Anyway: as I wrote about back in June, I am on leave right now to get started on a brand-new research project officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” but which is more informally known as the “Classroom Tech Bans Are Bullshit” project. I give a little more detail in that June post, but basically, I have been reading a variety of studies about the impact of devices– mostly laptops, but also cellphones– in classrooms (mostly lecture halls) and how they negatively impact students (mostly on tests). I’ve always thought these studies seemed kind of bullshitty, but I don’t know of a lot of research in composition and rhetoric that refutes these arguments. So I wanted to read that scholarship and then try to do something to apply and replicate it in writing classrooms.
So far, I’ve mostly just been reading academic articles in psychology and education journals. It’s always challenging to step just a little outside my comfort zone and do some reading in a field that is not my own. If nothing else, it reminds me why it’s important to be empathetic with undergraduates who complain about reading academic articles: it’s hard to try to figure out what’s going on in that Burkean parlor when pretty much all you can do is look through the window instead of being in the room. For me, that’s most evident in the descriptions of the statistics. I look at the explanations and squiggly lines of various formulas and just mutter “I’m gonna have to trust you on that.” And as a slight but important tangent: one of the reasons why we don’t do this kind of research in writing studies is because most people in the field feel the same about math and stats.
The other thing that has been quite striking for me is the assumptions in these articles on how the whole enterprise of higher education works. Almost all of these studies take it as a completely unproblematic given that education means a lecture hall with a professor delivering knowledge to students who are expected to (and who know how to) pay attention and who also are expected to (and who know how to) take notes on the content delivered by the lecturer. Success is measured by an end of the course (or end of the experiment) test. That’s that. In other words, most of this research assumes an approach to education that is more or less the opposite of what we assume in writing studies.
I have also figured out there are some important and subtle differences in the arguments about why laptops and cell phones ought to be banned (or at least limited) in classrooms. As I wrote back in June, the thing that perhaps motivated me the most to do this research is the argument that laptops ought to be banned from lecture halls because handwritten notes are “better.” This is the argument in the frequently cited Pam Mueller and Daniel Oppenheimer piece “The Pen is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking.” I think this is complete bullshit. This is a version of the question that used to circulate in the computers and writing world, whether it was “better” for students to write by hand or to type, a question that’s been dismissed as irrelevant for a long time. But as someone who is so bad at writing things by hand, I personally resent the implication that people who have good handwriting are somehow “better.” Fortunately, I think Kayla Morehead, John Dunlosky, and Katherine A. Rawson’s replication of that study, “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014),” does an excellent job refuting this “handwriting is better” bullshit.
Then there’s the issue of “distraction” that results when students trying to do things right are disturbed/put off by other students fiddling around with their laptops or cellphones. This is the argument in Faria Sana, Tina Weston, and Nicholas J. Cepeda’s “Laptop multitasking hinders classroom learning for both users and nearby peers.” They outline a clever and complicated methodology that involved arranging students so a laptop was (or wasn’t) in their line of sight and also by having some of those students act as “confederates” in the study by purposefully doing stuff that is distracting. One issue I have with this research is that it is a little dated, having been published in 2013. Maybe it’s just me, but I think laptops in classes were a little more novel (and thus distracting) a few years ago than they are now. Regardless though, one of the concluding points these folks make is that laptops shouldn’t be banned because the benefits outweigh the problems.
There are a lot of studies focusing on the multitasking and divided attention issues: that is, devices and the things students look at on those devices distract them from the class, which again typically means paying attention to the lecture. I find the subtly different degrees of multitasking kind of interesting, and there is a long history in psychology of research about attention, distraction, and multitasking. For example, Arnold L. Glass and Mengxue Kang in “Dividing attention in the classroom reduces exam performance” argue (among other things) that there’s a kind of delayed effect with students multitasking/dividing attention in a lecture hall setting. Students seem to be able to comprehend a lecture or whatever in the midst of their multitasking, but they don’t perform as well on tests at the end of the semester.
Interestingly– and I have a feeling this is more because of what I haven’t read/studied yet– most of these studies I’ve seen on the multitasking/dividing attention angle don’t separate tasks like email or texting from social media apps. That’s something I want to read about/study more because it seems to me that there is a qualitative difference in how applications like Facebook and Twitter distract since these platforms are specifically designed to grab attention from other tasks.
And then there’s the category of research I wasn’t even aware was happening, and I guess I’d describe that as the different perceptions/attitudes about classroom technology. This is mostly based on surveys and interviews, and (maybe not surprisingly) students tend to believe the use of devices is no big deal and/or “a matter of personal autonomy,” while instructors have a more complex view. Interestingly, the recommendation a lot of these studies make is that students and teachers ought to talk about this as a way of addressing the problem.
So, that’s what I “know” so far. Where I’m going next, I think:
I think the first tangible (not just reading) research part of this project is going to be to design a survey of both faculty and instructors– probably just for first year writing, but maybe beyond that– about their attitudes on using these devices. If I dig a bit, I might be able to use some of the same questions that come up in the research I’ve read.
We’ll see what kind of feedback/participation I get from those surveys, but my hope is also to use a survey as a way of recruiting some instructors to participate in something a little more case study/observational in the winter term, maybe even trying to replicate some of the “experimental” research on note taking in a small class setting. That would happen in Winter 2020.
I need to keep reading, especially about the ways in which social media specifically functions here. It’s one thing for a student (or really anyone) to be bored in a badly run lecture hall and thus allow themselves to drift into checking their messages, email, working on homework for other classes, checking sports, etc. I think it’s a different thing for a student/any user to feel the need to check Facebook or Twitter or Instagram or whatever.
I can see a need to dive more deeply into thinking/writing about the ways in which this research circulates in MSM and then back into the classroom. As I wrote in my proposal and back in June, I think there are a lot of studies– done with lecture hall students in very specific experimental settings– that get badly translated into MSM articles about why people should put their laptops and cell phones away in classrooms or meetings. Those MSM articles get read by well-meaning faculty who then apply the MSM’s misunderstanding of the original study as a justification for banning devices even though the original research doesn’t support that. Oh, and perhaps not surprisingly, the vast majority of the MSM pieces I’ve seen on tech bans basically reinforce the very worn theme of “the problem with the kids today.”
I also wonder about this attitude difference and maybe students have a point: maybe these technologies are a matter of personal autonomy and personal choice. This was an idea put into my head while chatting about all this with Derek Mueller over not very good Chinese food this summer, and I still haven’t thought it through yet, but if students have a right to their own language use in writing classrooms, do they also have a right to their own technology use? When and when not?
And even though this is kind of where I began this project (so I guess I’m once again showing my bias here), a lot of the solution to the problems that motivate faculty to ban laptops and devices from their classrooms in the first place really comes back to better pedagogy. Teaching students how to take notes with a laptop immediately comes to mind. I’m also reading (slowly but surely) James M. Lang’s Small Teaching: Everyday Lessons from the Science of Learning right now, and there’s a clear connection between his advice and this project too. So many of the complaints about students being distracted by their devices really come back to bad teaching.
I was away from work stuff this past May– too busy with Will’s graduation from U of M followed quickly by China, plus I’m not teaching or involved in any quasi-administrative work this summer. As I have written about before, I am no longer apologetic for taking the summer off, so mostly that’s what I’ve been doing. But now I need to get back to “the work–” at least a leisurely summer schedule of “the work.”
Along with waiting for the next step in the MOOC book (proofreading and indexing, for example), I’m also getting started on a new project. The proposal I submitted for funding (I have a “faculty research fellowship” for the fall term, which means I’m not teaching though I’m still supposed to do service and go to meetings and such) is officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies.” Unofficially, it’s called “Classroom Tech Bans are Bullshit.”
To paraphrase: there have been a lot of studies (mostly in Education and/or Psychology) on the student use of mobile devices in learning settings (mostly lecture halls– more on that in a moment). Broadly speaking, most of these studies have concluded these technologies are bad because students take worse notes than they would with just paper and pen, and these tools make it difficult for students to pay attention. Many of these studies have been picked up in mainstream media articles, and the conclusions of these studies are inevitably simplified with headlines like “Students are Better Off Without a Laptop In the Classroom.”
I think there are a couple of different problems with this– beyond the fact that MSM misinterprets academic studies all the time. First, these simplifications trickle back into academia when those faculty who do not want these devices in their classrooms use these articles to support laptop/mobile device bans. Second, the methodologies and assumptions behind these studies are very different from the methodologies and assumptions in writing studies. We tend to study writing– particularly pedagogy– with observational, non-experimental, and mixed-method research designs, things like case studies, ethnographies, interviews, observations, etc., and also with text-based work that actually looks at what a writer did.
Now, I think it’s fair to say that those of us in Composition and Rhetoric generally and in the “subfield/specialization” of Computers and Writing (or Digital Humanities, or whatever we’re calling this nowadays) think tech bans are bad pedagogy. At the same time, I’m not aware of any scholarship that directly challenges the premise of the Education/Psychology scholarship calling for bans or restrictions on laptops and mobile devices in classrooms. There is scholarship that’s more descriptive about how students use technologies in their writing process, though not necessarily in classrooms– I’m thinking of the essay by Jessie Moore and a ton of other people called “Revisualizing Composition” and the chapter by Brian McNely and Christa Teston “Tactical and Strategic: Qualitative approaches to the digital humanities” (in Bill Hart-Davidson and Jim Ridolfo’s collection Rhetoric and the Digital Humanities). But I’m not aware of any study that researches why it is better (or worse) for students to use things like laptops and cell phones while actually in the midst of a writing class.
So, my proposal is to spend this fall (or so) developing a study that would attempt to do this– not exactly a replication of one or more of the experimentally-driven studies done about devices and their impact on note taking, retention, and distraction, but a study that is designed to examine similar questions in writing courses using methodologies more appropriate for studying writing. For this summer and fall, my plan is to read up on the studies that have been done so far (particularly in Education and Psych), use those to design a study that’s more qualitative and observational, and recruit subjects and deal with the IRB paperwork. I’ll begin some version of a study in earnest beginning in the winter term, January 2020.
I have no idea how this is going to work out.
For one thing, I feel like I have a lot of reading to do. I think I’m right about the lack of good scholarship within the computers and writing world about this, but maybe not. As I typed that sentence in fact, I recalled a distant memory of a book Mike Palmquist, Kate Kiefer, Jake Hartvigsen, and Barbara Godlew wrote called Transitions: Teaching Writing in Computer-Supported and Traditional Classrooms. It’s been a long time since I read that (it was written in 1998), but I recall it as being a comparison between writing classes taught in a computer lab and not. Beyond reading in my own field of course, I am slowly making my way through these studies in Education and Psych, which present their own kinds of problems. For example, my math ignorance means I have to slip into “I’m just going to have to trust you on that one” mode in the discussions about statistical significance.
Reading these two studies has been a quite useful way for me to start this work. Maybe I should have already known this, but there are actually two fundamentally different issues at stake with these classroom tech bans (setting aside assumptions about the lecture hall format and the value of taking notes as a way of learning). Mueller and Oppenheimer claimed with their study that handwriting was simply “better.” That’s a claim that I have always thought was complete and utter bullshit, and it’s one that I think was debunked a long time ago. Way back in the 1990s when I first got into this work, there were serious people in English and in writing studies pondering what was “better,” a writing class equipped with computers or not, students writing by hand or on computers. We don’t ask that question anymore because it doesn’t really matter which is “better;” writers use computers to write and that’s that. Happily, I think Morehead, Dunlosky, and Rawson counter Mueller and Oppenheimer’s study rather persuasively. It’s worth noting that so far, MSM hasn’t quite gotten the word out on this.
But the other major argument for classroom tech bans– which neither of these studies addresses– is about distraction, and that’s where the “or are they?” part of my post title comes from. I still have a lot more reading to do on this (see above!), but it’s clear to me that the distraction issue deserves more attention since social media applications are specifically designed to distract and demand attention from their users. They’re like slot machines, and it’s clear that “the kids today” are not the only ones easily taken in. When I sit in the back of the room during a faculty meeting and I glance at the screens of my colleagues’ laptops in front of me, it’s pretty typical to see Facebook or Twitter or Instagram open, along with a window for checking email, grading papers– or, on rare occasion, taking notes.
Anyway, it’s a start. And if you’ve read this far and you’ve got any ideas on more research/reading or how to design a study into this, feel free to comment or email or what-have-you.
I’ll say this about Hillary’s email mess: lots of people (some of my colleagues, lots of my students) don’t think it’s important to discuss and teach things like “how to send an email” or the basics of how “the intertubes works” because this is just stuff people don’t need to know. Email and stuff, the argument goes, is like your car– you don’t need to know how it works to drive it. Well, I hope this convinces people that’s wrong.
Maybe this is all obvious, but given what’s happened with this election, maybe not.
I should point out that I’m voting for Clinton and I hope you vote for Clinton too. I don’t think a “President Trump” (geez, it hurts putting those two words together, even hypothetically) would necessarily be the end of democracy as we know it and/or plunge the U.S. into Mad Max-esque dystopia, but I do know it would be a hot hot mess.
I should also point out that I think Hillary Clinton is the most qualified person (based on previous experiences, at least) to run for president in my lifetime. In a lot of ways, this is Clinton’s problem because even though I have “been with her” from the start, she has done/said/supported things over the last 30 years I disagree with, which is inevitable based on being in public life for the last 30 years. And yes, there are other ways in which Hillary and her family (I’m talking about “the big dog” here) have sometimes done stuff that doesn’t seem completely above board– again, almost inevitable for politicians in the public eye for decades.
But this email mess? In my opinion, it’s not a reason to vote against Clinton because I really really doubt there was any criminality there, either intentional or unintentional. (And as a slight but relevant tangent: let’s just set aside the fact that government argues amongst itself all the time about what’s a “secret,” how information should be classified, and the proper procedures for handling this information. The second Bush administration apparently had an email server owned and operated by the RNC that “lost”/deleted 22 million or so emails, lots of other politicians have in the past or currently still operate some version of a private server, etc., etc. In other words, lots of politicians have done a version of what Hillary did, but the difference is Hillary is running for president.)
So vote for Hillary Clinton, okay? But let’s also learn (or really, relearn) some email basics based on these mistakes, both the ones that she has made and the mistakes I know I continue to make all the time.
Mind you, I only skimmed this and all of the economics math is literally a foreign language to me. But there are a couple of passages here that I find interesting, and they aren’t exactly convincing me that my students and I should indeed “leave it in the bag.”
Permitting laptops or computers appears to reduce multiple choice and short answer scores, but has no effect on essay scores, as seen in Panel D. Our finding of a zero effect for essay questions, which are conceptual in nature, stands in contrast to previous research by Mueller and Oppenheimer (2014), who demonstrate that laptop note-taking negatively affects performance on both factual and conceptual questions. One potential explanation for this effect could be the predominant use of graphical and analytical explanations in economics courses, which might dissuade the verbatim note-taking practices that harmed students in Mueller and Oppenheimer’s study. However, considering the substantial impact professors have on essay scores, as discussed above, the results in panel D should be interpreted with considerable caution. (page 17)
The way I’m reading this is that for classes where students are expected to take multiple choice tests as a result of listening to a lecture from a sage on the stage, laptops might be bad. But in classes where students are supposed to write essays (or at least answer more conceptual essay questions), laptops do no harm. So if it’s a course where students are supposed to do more than take multiple choice tests….
After describing the overall effects of students performing worse when computing technology is available, Carter, Greenberg, and Walker write:
It is quite possible that these harmful effects could be magnified in settings outside of West Point. In a learning environment with lower incentives for performance, fewer disciplinary restrictions on distracting behavior, and larger class sizes, the effects of Internet-enabled technology on achievement may be larger due to professors’ decreased ability to monitor and correct irrelevant usage. (page 26)
Hmmm…. nothing self-congratulatory about that passage, is there?
Besides the fact that there is no decent evidence that the students at West Point (or any other elite institution for that matter) are on the whole such special snowflakes that they are more immune to the “harm” of technology/distraction compared to the rest of us simpletons, I think one could just as easily make the exact opposite argument. It seems to me that it is “quite possible” that the harmful effects are more magnified in a setting like West Point because of the strict adherence to “THE RULES” and authority for all involved. I mean, it is the Army after all. Perhaps in settings where students have more freedom and are used to the more “real life” world of distractions, large class sizes, the need to self-regulate, etc., maybe those students are actually better able to control themselves.
And am I the only one who is noticing the extent to which laptop/tablet/technology use really seems to be about a professor’s “ability to monitor and correct” in a classroom? Is that actually “teaching?”
And then there’s this last paragraph in the text of the study:
We want to be clear that we cannot relate our results to a class where the laptop or tablet is used deliberately in classroom instruction, as these exercises may boost a student’s ability to retain the material. Rather, our results relate only to classes where students have the option to use computer devices to take notes. We further cannot test whether the laptop or tablet leads to worse note taking, whether the increased availability of distractions for computer users (email, facebook, twitter, news, other classes, etc.) leads to lower grades, or whether professors teach differently when students are on their computers. Given the magnitude of our results, and the increasing emphasis of using technology in the classroom, additional research aimed at distinguishing between these channels is clearly warranted. (page 28)
First, laptops might or might not be useful for taking notes. This is at odds with a lot of these “laptops are bad” studies. And as a slight tangent, I really don’t know how easy it is to generalize about note taking and knowledge across large groups. Speaking only for myself: I’ve been experimenting lately with taking notes (sometimes) with paper and pen, and I’m not sure it makes much difference. I also have noticed that my ability to take notes on what someone else is saying — that is, as opposed to taking notes on something I want to say in a short speech or something– is now pretty poor. I suppose that’s the difference between being a student and being a teacher, and maybe I need to relearn how to do this from my students.
This paragraph also hints at another issue with all of these “laptops are bad” pieces: “whether professors teach differently when students are on their computers.” Well, maybe that is the problem, isn’t it? Maybe it isn’t so much that students are spending all of this time being distracted by laptops, tablets, and cell-phones– that is, students are NOT giving professors the UNDIVIDED ATTENTION they believe (nay, KNOW) they deserve. Maybe the problem is professors haven’t figured out that the presence of computers in classrooms means we have to indeed “teach differently.”
But the other thing this paragraph got me thinking about is the role of technology in the courses I teach, where laptops/tablets are “used deliberately in classroom instruction.” This paragraph suggests that the opposite of banning laptops might also be true: in other words, what if, instead of banning laptops from a classroom, the professor mandated that students each have a laptop open at all times in order to take notes, to respond to on-the-fly quizzes from the professor, and to look up stuff that comes up in the discussions?
It’s the kind of interesting mini-teaching experiment I might be able to pull off this summer. Of course, if we extend this kind of experiment to the realm of online teaching– and one of my upcoming courses will indeed be online– then we can see that in one sense, this isn’t an experiment at all. We’ve been offering courses where the only way students communicate with the instructor and with other students has been through a computer for a long time now. But the other course I’ll be teaching is a face to face section of first year writing, and thus ripe for this kind of experiment. Complicating things more (or perhaps making this experiment more justifiable?) is the likelihood that a significant percentage of the students I will have in this section are in some fashion “not typical” of first year writing at EMU– that is, almost all of them are transfer students and/or juniors or seniors. Maybe making them have those laptops open all the time could help– and bonus points if they’re able to multitask with both their laptop and their cell phones!
In defense of machine grading?!?! Well, no, not really. But I thought I’d start a post with a title like that. You know, provocative.
There has been a bit of a ruckus on WPA-L for a while now in support of a petition against machine grading and for humans at the web site humanreaders.org, and I of course agree with the general premise of what is being presented on that site. Machine grading software can’t recognize things like a sense of humor or irony, it tends to favor text length over conciseness, it is fairly easy to circumvent with gibberish kinds of writing, it doesn’t work in real world settings, it fuels high stakes testing, etc., etc., etc. I get all that.
We should keep pushing back against machine grading for all of these reasons and more. Automated testing furthers the interests of Edu-businesses selling this software and does not help students or teachers, at least not yet. I’m against it, I really am.
It seems to me that we’re not really talking about grading per se but about teaching, and the problem is writing pedagogy probably doesn’t work when the assessment/ grading part of things is completely separated from the teaching part of things. This is one of the differences between assigning writing and teaching writing.
There’s a bit of a catch-22 going on here. Part of the problem was that writing teachers complained (rightly so, I might add) about big standardized tests of various sorts not having writing components. So writing was added to a lot of these tests. However, the only way to assess thousands of texts generated through this testing is with specifically trained readers (see my next point) or with computer programs. So we can skip the writing altogether with these tests or we can accept a far from perfect grading mechanism.
I’ve participated in various holistic/group grading sessions before (though it’s been a long time), which is how they used to do this sort of thing before the software solutions. The way I recall it working was dozens and dozens of us were trained to assign certain ratings for essays based on a very specific rubric. We were, in effect, programmed, and there was no leeway to deviate from the guidelines. So I guess what I’m getting at is that in these large group assessment circumstances, what’s the difference if it’s a machine or a person?
This software doesn’t work that well yet, especially in uncontrolled circumstances: that is, grading software is about as accurate as humans with these standardized prompt responses written in specific testing situations, but it doesn’t work well at all as an off-the-shelf rating solution for just any chunk of writing that students write for classes or that writers write for some other reason. But the key word in that last sentence is yet, because this software has gotten (and is getting) a lot better. So what happens when it gets as good as a human reader (or at least good enough)? Will we accept the role of this evaluation software much in the same way we now all accept spell checking in word processors? (And by the way, I am old enough to remember resistance among English teacher-types to that, too– not as strong as the resistance to machine grading, but still.)
As a teacher, my least favorite part of teaching is grading. I do not think that I am alone in that sentiment. So while I would not want to outsource my grading to someone else or to a machine (because again, I teach writing, I don’t just assign writing), I would not be against a machine that helps make grading easier. So what if a computer program provided feedback on a chunk of student writing automatically, and then I as the teacher followed behind those machine comments, deleting ones I thought were wrong or unnecessary, expanding on others I thought were useful? What if a machine printed out a report that a student writer and I could discuss in a conference? And from a WPA point of view, what if this machine helped me provide professional development support to GAs and part-timers in their commenting on students’ work?
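To make that “machine drafts, teacher filters” idea a little more concrete, here’s a minimal sketch of what such a workflow could look like. Everything here is hypothetical: the heuristics (flagging filler words and overly long sentences) are crude stand-ins I made up for illustration, not the rules of any real grading product.

```python
import re

def draft_comments(text):
    """Machine pass: generate draft feedback comments for a chunk of writing.

    Returns (sentence_number, comment) pairs. The two checks below are
    toy heuristics standing in for whatever a real system might flag.
    """
    comments = []
    # Naive sentence split on end punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    for i, sentence in enumerate(sentences, start=1):
        if len(sentence.split()) > 30:
            comments.append((i, "Long sentence: consider splitting."))
        if re.search(r'\b(very|really|basically)\b', sentence, re.IGNORECASE):
            comments.append((i, "Filler word: could this be cut?"))
    return comments

def teacher_review(comments, keep):
    """Teacher pass: keep only the machine comments judged useful."""
    return [c for c in comments if keep(c)]

if __name__ == "__main__":
    essay = "This is basically a very short test. It works."
    drafts = draft_comments(essay)
    # The teacher deletes or keeps each draft comment before the
    # student ever sees it.
    final = teacher_review(drafts, keep=lambda c: "Filler" in c[1])
    for sent_no, note in final:
        print(sent_no, note)
```

The interesting part is the `teacher_review` pass: the machine’s only job is to produce a cheap first draft of comments, and the human keeps full editorial control over what actually reaches the student (or, from a WPA angle, what gets discussed with a GA about their own commenting).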
I’ve been pretty crazy-busy this semester because I took on too much and because there were things I could not refuse. So the blog has been pretty neglected lately, mostly because I’ve been thinking and writing about online stuff and MOOCs. (And now I’m coming back to this blog to procrastinate a bit as I finish up the crazy-busy semester.)
In no super-specific order:
I have been working on (and I think it’s done) my contribution to a “symposium” on MOOCs that will be in College Composition and Communication, I think in January. It’s about my experiences specifically with the writing assignments in “Listening to World Music” and the ways that they failed in spectacular ways. If you are someone who has read my entries about MOOCs as of late, you probably have a sense of what’s going to be in that relatively short piece. In any event, I really appreciate the opportunity to participate and it just goes to show you that sometimes blogging about stuff can pay off.
I was in a meeting just the other day where the topic of online teaching came up, and some of the folks complaining about it– literature colleagues (I know that’s shocking!)– said online classes were obviously not as good as face to face classes. “So, are you saying that the classes I teach online aren’t any good?” I asked. No-no-no, we don’t mean you, they quickly said, but yes, that is what they meant. What I find continually most annoying about this critique is that it inevitably comes from people who have had no experience with online teaching. I mean none, and it also usually comes from people who don’t have a whole lot of experience or connection to this whole new-fangled Internets thing. So part of what I said in this meeting was “Look, before you argue that online teaching can’t be as good as face to face teaching, go out and take an online class. Until then, your pronouncements about what online classes are like are a little like me telling you what Antarctica is like even though I’ve never been there. I mean, I know it’s cold, but so what?”
MOOCs are pretty much the same way, which is why I spent the time I did in “Listening to World Music.” I wanted to see first-hand what these things were like, and since I am unlikely to teach one anytime soon, I experienced one as a student and wrote lots and lots about it. A lot of what I’ve been reading lately about MOOCs, though (frankly, including some of what I am linking to/talking about in this post), seems to be coming from folks making educated guesses or knee-jerk reactions.
MOOCs have had the advantage of raising the profile of online teaching as a “real” environment for learning. But even though the likes of Daphne Koller and Peter Norvig think they “invented” online education with their MOOCs, the fact of the matter is students have been taking classes online at real universities– particularly regional ones like EMU– for over a decade now. Something like a third of all college students in the U.S. have taken at least one online class. We know a lot about what works and what doesn’t. Which brings me to my next point….
I don’t think the discussion should be about online classes being as “good” as face-to-face classes or even whether or not online classes “work.” (See “Do Online Classes Suck?” by Alex Halavais on this point). We’ve all seen bad teaching in the best of cozy face-to-face classroom settings, so the idea that that format for teaching is inherently “better” than online teaching seems a little dubious to me. Rather, I think the issue is what the trade-offs of these different formats are, how teachers adjust their pedagogy to best fit the situation, and what we know about the best fit for the subject being taught. One of the trade-offs for teaching classes in a large lecture format is there is not a lot of opportunity for discussion or for student assessment in a format other than an easily graded test. One of the trade-offs for teaching first year writing in small discussion sections is it is prohibitively expensive to staff all of those sections with equally great and experienced professors (let alone great and experienced non-tenure-track faculty), so there can be pretty significant differences between different sections of the same course– thus the point of writing program administration.
One of the differences between how my writing colleagues think about online teaching and how my literature colleagues think about it is at what level it is most appropriate. Folks in literature have been somewhat okay with online versions of gen-ed classes but not for classes in the major or at the graduate level. We have the opposite take: we have come to believe that online (and hybrid) format classes need to be a part of the mix for our undergraduate and graduate programs, but we want our students in first year writing to take the class in person and on campus. That might change– I can especially imagine a scenario where we offer sections of first year writing in a hybrid format– but it isn’t going to be changing soon, largely because of the nature of those classes. Students in first year writing typically need to learn some of the habits that will help them succeed in college: showing up, meeting schedules, learning how to become more self-disciplined, etc. Which leads me to my next rambling point:
Who thinks that MOOCs will work in “remedial” college courses? I personally find the term “remedial” both problematic and offensive, not unlike a well-intentioned and ill-informed person referring to someone of Chinese descent as “Oriental,” but I don’t want to go into that for now. The Gates foundation has given out a bunch of grants for creating “developmental” MOOCs– including courses in first year writing being developed by folks at Duke, Georgia Tech, Mt. San Jacinto College, and Ohio State. Each of these is using Coursera as a delivery platform. I’ll be very curious to see how this works out, but based on what I know about online teaching, MOOCs, and first year writing, I think this is doomed.
In the 24 or so years I’ve been teaching first year writing, I think it’s fair to say that the vast majority of students I’ve had in that class did not want to take it, and some of my students really really didn’t want to take it. Students take first year writing because it is a universal requirement (insert arguments a la Crowley et al as to why that is a bad idea here if you feel so inclined), and this is quite a bit different than the “Edu-tainment” appeal of MOOCs so far. We have decades of evidence on how to best help students who are struggling with subjects like writing, and all of that evidence suggests that these students need a lot of personal attention of the sort not afforded in a class of thousands powered by freeze-dried/pre-recorded videos presented in a “stand and deliver” lecture format. The drop-out rate for Coursera MOOCs is already 90%; how much worse will it be in these courses?
Frankly, I think the folks working on MOOCs might have it backwards. Maybe they shouldn’t be replacing introductory or developmental college courses, the kinds of classes populated by young, inexperienced, and not particularly motivated students. Maybe MOOCs should replace upper-level undergraduate or graduate courses, the kinds of classes populated by older, experienced, savvy, and highly motivated students.
And once again, I discovered this semester in my own teaching that the content/learning management system matters. I have a chapter called “Blogs as an Alternative to Course Management Systems: Public, Interactive Teaching with a Round Peg in a Square Hole” that is in a book (that’s supposed to be coming out any day now) called Designing Web-Based Applications for 21st Century Writing Classrooms. The basic point of my chapter is to explain the hows/whys/pros/cons of using WordPress as an alternative to institutional CMSs. Despite the fact that I wrote this piece and despite the fact that I’ve used my own installations of WordPress as my primary platform for teaching online for years, I decided for some reason to give EMU’s CMS (eCollege) another try to host the entire class. Not a great idea. The short version is that eCollege works fine to host the grade book and to host content in a series of units where content is delivered, discussed, and tested. It doesn’t work well when a course is an on-going discussion or when it is something that exists in relation with the rest of the world– e.g., not behind a firewall. So what I found most frustrating was there was no narrative to the class, no place where it was easy to post an update about something I just came across that I thought would be useful to share with everyone. Long story short, I’m going back to some kind of blog space for English 516 this winter term, which is also going to be online.
Clay Shirky wrote an interesting blog entry, “Napster, Udacity, and the Academy,” and Jeff Rice had an interesting response (and he also pointed to this good Inside Higher Ed rebuttal). Of course, “unbundling” the college degree is not something that is new, though it might appear to be new to Shirky, who went to Yale and who teaches (once in a while, at least) at places like NYU. I have lots and LOTS of students at EMU who have credits from two or three other institutions on their transcripts, and there are lots of EMU students who are simultaneously enrolled at Washtenaw Community College or another school in the area.
One place where I agree with Shirky is that the point of comparison for what works (or doesn’t) in higher ed should not be Harvard or Yale; that said, one of the major concerns I have about MOOCs (and actually online education in general) is that it simply reifies the already existing (albeit largely unspoken) hierarchy. I think this is basically what Nigel Thrift is saying in The Chronicle of Higher Education and what Ian Bogost is saying here. The analogy in those last two pieces is to restaurants, but no need to make that analogy when we can make an actual comparison. There are thousands of colleges and universities on this continent that award bachelor’s degrees in some kind of humanities– English, let’s say. As an initial qualification for some kind of want ad– “bachelor’s degree required”– these thousands of different institutions are all the same. But we all know that a degree from Harvard is worth more in the marketplace than a degree from the University of Michigan, which is worth more than one from Michigan State, which is worth more than one from EMU, which is worth more than one from the University of Phoenix. It’s been that way for a long long time. What I think MOOCs will do is simply add another lower rung to that ladder.
I have a post in mind about what is good about the MOOC thing as far as I can tell and I think I’m going to be proposing something with Bill HD and some others about MOOCs for ATTW. But first, I want to post all the MOOC links I’ve got open in browser windows right now; so in no particular order:
The University of Illinois at Urbana-Champaign is an example of a campus that moved swiftly. As soon as Phyllis M. Wise, the university’s chancellor, heard about Coursera from other administrators who had signed on, she wanted to follow suit. She asked the executive committee of the university’s Academic Senate for a recommendation on whether to work toward a Coursera deal, and a faculty task force quickly issued a report giving a green light for such a partnership.
The task force devised a list of questions about how a Coursera partnership would work, said Nicholas C. Burbules, a former chair of the Academic Senate and a professor of educational-policy studies. For example, how would potential revenues from Coursera be divided within the university, and how would faculty members be compensated for teaching Coursera courses?
“I don’t think anyone knows exactly where this is going,” Mr. Burbules said. “We’re on a very fast train right now, and we’re jumping on board and seeing where it ends up.”
From CHE, “Publishers See Online Mega-Courses as Opportunity to Sell Textbooks.” You get the idea from the headline. First off, this is at odds with the corporate MOOC movement’s public declarations of offering a free education for the world: textbooks are expensive. Second, and this is partly what I want to write more about later, it seems to me that MOOCs could be a replacement for textbooks, or at least a platform for them.
A couple links via Stephen Downes. First, “The Coursera Gift Horse,” from Jonathan Becker’s blog “Educational Insanity.” Basically, he’s saying that there are some problems with Coursera, sure, but why are people complaining about this awesome and free resource? Of course, he’s just started this Social Network Analysis class, the one that Bill is taking too (more on that as I make my way through my links), so let’s see what he thinks in a couple weeks.
The danger of MOOCs (which, by the way, are at the intersection of Wall Street and Silicon Valley, two cultures inordinately obsessed with meritocracy) is that they will return us to a world that sees large levels of failure as validating small levels of success. And they will build a breed of student that is the Jamie Dimon or Bill Gross of tomorrow, someone who knows they are chosen, and becomes oblivious to their own privilege, luck, and detachment.
These are the cultures which have destroyed America over the last 30 years – the idea that our job as a society is to look only at the levelness of the playing field, and ignore how the rules consistently favor the team in power.
If we begin talking about MOOCs as meritocracies, we are doubling down on the flawed ideology that got us into this mess.
Sorta hard to be a movement that’s supposed to empower those who are disenfranchised from higher education and to be a movement of elites at the same time, isn’t it?
Third, there’s mooc.ca, which is “a place to host MOOC news and information,” for all your MOOC-y overload needs.
And last from Downes (for now) is this handy little graphic:
In a nutshell, this is for me the problem with Coursera and other models that are trying to replicate/replace the way education works. For education to work, you’ve got to have some version of the diagram on the left. Education requires an instructor who coordinates what’s going to happen in a given experience (that is, creates the “syllabus” for a “class”), is the expert to whom students turn for a definitive answer (and in my view, this is true in educational settings that are “student-centered” and/or where knowledge is more epistemic, contextual, or contested), and is the person who determines if the student has learned what they were supposed to learn in order to get credit (assessing, grading, credentialing, etc.). On the other hand, the “many to many” and scalable diagram on the right depicts learning, which can happen in lots of situations, including a MOOC.
Elsevier, the academic publishing giant, announced on Tuesday that it will offer a free version of one of its textbooks this fall to students who register for Circuits & Electronics, a massive open online course (MOOC) being offered by edX.
The publisher actually made available a free version of the textbook during the first iteration of that course last fall, with little fanfare. The results are in: Rather than prompting scores of traditional students in similar courses to pass on purchasing the textbook in favor of registering for the MOOC and freeloading, Elsevier found that providing a “static” digital version of the text for free to MOOC students actually galvanized sales elsewhere.
“The version that is online on edX is a static version — a PNG file, which is not downloadable, not manipulable and doesn’t have all the flexibility that a true full e-book does,” said Dan O’Connell, a publicist for Elsevier. “So we found that actually it isn’t cutting into, and in fact it seems to be elevating, sales.”
Many, many moons ago, when I was working on a textbook and the editing people were asking me what I thought would be innovative, I suggested making a version of it available online for free so readers could recognize the “value added” of the actual book. They thought that was pretty funny.
Also from IHE, “Gates, MOOCs, and Remediation.” Given the drop-out rate on MOOCs (which is all part of that meritocracy argument), I’d say this is not the role of MOOCs. But there is apparently some grant money tied to it, so I don’t know, maybe it’s worth checking out.
And last but far FAR from least comes news that my friend and colleague Bill Hart-Davidson is going to be blogging about the Coursera MOOC he’s enrolled in, “Social Network Analysis.” Here’s his first entry; he’s apparently already getting into trouble.
I’ve been teaching at least some of my classes online since 2005 and I’ve been using various other online tools (what I’ve heard described as “blended” learning, whatever that means) for a lot longer than that. But I’ve never taken an online class before, and I haven’t exactly done a lot of studying of online pedagogy, certainly not from the perspective of education scholars. So when I read about Curtis Bonk’s Massively Open Online Course about teaching online, I figured what the heck? I signed up.
It’s very very early, of course. The class technically doesn’t start until Monday. But there are already a couple of things that give me, well, pause.
First, there’s the introductions part of the class, which is basically 1200 or so different people posting a message that says “hi, my name is…” with not much other interaction. How could there be, really?
Second, Bonk posted this introduction that comes across to me as, well, goofy:
I’ve been known to make a few attention-getting and goofy videos for my online classes too, but there sure seems to be a lot of props here. But hey, who knows? Bonk has a fistful of articles and books on online pedagogy, and somebody must think he knows what he’s talking about or he wouldn’t be doing this at all.
Third, I think Bonk signals here a bit as to what Blackboard’s interest in this whole MOOC thing is all about. As Bonk explains in this video (at about the 9 minute mark), week 5 is going to feature the folks from Blackboard coming on the site to more or less explain all the “cool” Blackboard tools we’ve been using. Now, I don’t know if this is what’s going to happen, but it sounds like the angle here is Blackboard is going to try to sell us on Blackboard, sort of like the way that textbook companies try to sell faculty on their textbooks and other products. Which again makes me think that this whole MOOC thing is mostly a marketing stunt.
We’re getting ready to do the family holiday thing around here (first to Iowa, then to Florida– an unusual “dual parental visit” Christmas season trip), and I’ve decided more or less on the spur of the moment to not take my computer. I’ll probably borrow Annette’s to check my email a couple of times in the next two weeks, but that’ll be about it. I dunno, I kind of feel like it might do me some good to have some computer “away time,” and I have a bunch of reading (both school and for fun) that I want to do, and I’m afraid that if I take my computer, then what I’ll do is pick at that instead.
But not to leave my millions of regular readers in the lurch, I thought I’d share with you a very strange movie trailer I came across, Fiend of Dope Island:
It is completely apropos of nothing in my life, but man, this movie appears to have it all: drugs, sex, tropical climes, and Yugoslavian bombshells. Here’s what IMDB had to say. If anyone sees it over the holiday, let me know. See y’all next year.