Classroom Tech Bans Are Bullshit (or are they?): My next/current project

I was away from work stuff this past May– too busy with Will’s graduation from U of M followed quickly by China, plus I’m not teaching or involved in any quasi-administrative work this summer. As I have written about before,  I am no longer apologetic for taking the summer off, so mostly that’s what I’ve been doing. But now I need to get back to “the work–” at least a leisurely summer schedule of “the work.”

Along with waiting for the next step in the MOOC book (proofreading and indexing, for example), I’m also getting started on a new project. The proposal I submitted for funding (I have a “faculty research fellowship” for the fall term, which means I’m not teaching though I’m still supposed to do service and go to meetings and such) is officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies.” Unofficially, it’s called “Classroom Tech Bans are Bullshit.” 

To paraphrase: there have been a lot of studies (mostly in Education and/or Psychology) on the student use of mobile devices in learning settings (mostly lecture halls– more on that in a moment). Broadly speaking, most of these studies have concluded these technologies are bad because students take worse notes than they would with just paper and pen, and these tools make it difficult for students to pay attention.  Many of these studies have been picked up in mainstream media articles, and the conclusions of these studies are inevitably simplified with headlines like “Students are Better Off Without a Laptop In the Classroom.”

I think there are a couple of different problems with this– beyond the fact that MSM misinterprets academic studies all the time. First, these simplifications trickle back into academia when those faculty who do not want these devices in their classrooms use these articles to support laptop/mobile device bans. Second, the methodologies and assumptions behind these studies are very different from the methodologies and assumptions in writing studies. We tend to study writing– particularly pedagogy– with observational, non-experimental, and mixed-method research designs, things like case studies, ethnographies, interviews, observations, etc., and also with text-based work that actually looks at what a writer did.

Now, I think it’s fair to say that those of us in Composition and Rhetoric generally and in the “subfield/specialization” of Computers and Writing (or Digital Humanities, or whatever we’re calling this nowadays) think tech bans are bad pedagogy. At the same time, I’m not aware of any scholarship that directly challenges the premise of the Education/Psychology scholarship calling for bans or restrictions on laptops and mobile devices in classrooms. There is scholarship that’s more descriptive about how students use technologies in their writing process, though not necessarily in classrooms– I’m thinking of the essay by Jessie Moore and a ton of other people called “Revisualizing Composition” and the chapter by Brian McNely and Christa Teston “Tactical and Strategic: Qualitative approaches to the digital humanities” (in Bill Hart-Davidson and Jim Ridolfo’s collection Rhetoric and the Digital Humanities.) But I’m not aware of any study that researches why it is better (or worse) for students to use things like laptops and cell phones while actually in the midst of a writing class.

So, my proposal is to spend this fall (or so) developing a study that would attempt to do this– not exactly a replication of one or more of the experimentally-driven studies done about devices and their impact on note taking, retention, and distraction, but a study that is designed to examine similar questions in writing courses using methodologies more appropriate for studying writing. For this summer and fall, my plan is to read up on the studies that have been done so far (particularly in Education and Psych), use those to design a study that’s more qualitative and observational, and recruit subjects and deal with the IRB paperwork. I’ll begin some version of a study in earnest beginning in the winter term, January 2020.

I have no idea how this is going to work out.

For one thing, I feel like I have a lot of reading to do. I think I’m right about the lack of good scholarship within the computers and writing world about this, but maybe not. As I typed that sentence in fact, I recalled a distant memory of a book Mike Palmquist, Kate Kiefer, Jake Hartvigsen, and Barbara Godlew wrote called Transitions: Teaching Writing in Computer-Supported and Traditional Classrooms. It’s been a long time since I read that (it was written in 1998), but I recall it as being a comparison between writing classes taught in a computer lab and not. Beyond reading in my own field of course, I am slowly making my way through these studies in Education and Psych, which present their own kinds of problems. For example, my math ignorance means I have to slip into  “I’m just going to have to trust you on that one” mode in the discussions about statistical significance.

One article I came across and read (thanks to this post from the Tattooed Prof, Kevin Gannon) was “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014).” As the title suggests, this study by Kayla Morehead, John Dunlosky, and Katherine A. Rawson replicates the 2014 study by Pam Mueller and Daniel Oppenheimer, “The Pen is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking,” which is kind of the “gold standard” of the ban-laptops genre. The gist of these two articles is all in the titles: Mueller and Oppenheimer’s conclusions were that it was much better to take notes by hand, while Morehead, Dunlosky, and Rawson’s conclusions were not so much. Interestingly enough, the more recent study also questioned the premise of the value of note taking generally, since one of their control groups didn’t take notes and did about as well on the study’s post-test.

Reading these two studies has been a quite useful way for me to start this work. Maybe I should have already known this, but there are actually two fundamentally different issues at stake with these classroom tech bans (setting aside assumptions about the lecture hall format and the value of taking notes as a way of learning). Mueller and Oppenheimer claimed with their study that handwriting was simply “better.” That’s a claim that I have always thought was complete and utter bullshit, and it’s one that I think was debunked a long time ago. Way back in the 1990s when I first got into this work, there were serious people in English and in writing studies pondering what was “better,” a writing class equipped with computers or not, students writing by hand or on computers. We don’t ask that question anymore because it doesn’t really matter which is “better;” writers use computers to write and that’s that. Happily, I think Morehead, Dunlosky, and Rawson counter Mueller and Oppenheimer’s study rather persuasively. It’s worth noting that so far, MSM hasn’t quite gotten the word out on this.

But the other major argument for classroom tech bans– which neither of these studies addresses– is about distraction, and that’s where the “or are they?” part of my post title comes from. I still have a lot more reading to do on this (see above!), but it’s clear to me that the distraction issue deserves more attention since social media applications are specifically designed to distract and demand attention from their users. They’re like slot machines, and it’s clear that “the kids today” are not the only ones easily taken in. When I sit in the back of the room during a faculty meeting and I glance at the screens of my colleagues’ laptops in front of me, it’s pretty typical to see Facebook or Twitter or Instagram open, along with a window for checking email, grading papers– or, on rare occasion, taking notes.

Anyway, it’s a start. And if you’ve read this far and you’ve got any ideas on more research/reading or how to design a study into this, feel free to comment or email or what-have-you.

Three thoughts on the “Essay,” assessing, and using “robo-grading” for good

NPR had a story on Weekend Edition last week, “More States Opting to ‘Robo-Grade’ Student Essays By Computer,” that got some attention from other comp/rhet folks, though not as much as I thought it might. Essentially, the story is about the use of computers to “assess” (really “rate,” but I’ll get to that in a second) student writing on standardized tests. Most composition and rhetoric scholars think this software is a bad idea. I think this is not not true, though I do have three thoughts.

First, I agree with what my friend and colleague Bill Hart-Davidson writes here about essays, though this is not what most people think “essay” means. Bill draws on the classic French origins of the word, noting that an essay is supposed to be a “try,” an attempt and often a wandering one at that. Read any of the quite old classics (de Montaigne comes to mind, though I don’t know his work as well as I should) or even the more modern ones (E.B. White or Joan Didion or the very contemporary David Sedaris) and you get more of a sense of this classic meaning. Sure, these writers’ essays are organized and have a point, but they wander to them and they are presented (presumably after much revision) as if the writer was discovering their point along with the reader.

In my own teaching, I tend to use the term project to describe what I assign students to do because I think it’s a term that can include a variety of different kinds of texts (including essays) and other deliverables. I hate the far too common term paper because it suggests writing that is static, boring, routine, uninteresting, and bureaucratic. It’s policing, as in “show me your papers” when trying to pass through a border. No one likes completing “paperwork,” but it is one of those necessary things grown-ups have to do.

Nonetheless, for most people– including most writing teachers– the terms “essay” and “paper” are synonymous. The original meaning of essay has been replaced by the school meaning of essay (or paper– same thing). Thus we have the five paragraph form, or even this comparably enlightened advice from the Bow Valley College Library and Learning Commons, one of the first links that came up in a simple Google search. It’s a list (five steps, too!) for creating an essay (or paper) driven by a thesis and research. For most college students, papers (or essays) are training for white collar careers to learn how to complete required office paperwork.

Second, while it is true that robo-grading standardized tests does not help anyone learn how to write, the most visible aspect of writing pedagogy to people who have no expertise in teaching (beyond experience as a student, of course) is not the teaching but the assessment. So in that sense, it’s not surprising this article focuses on assessment at the expense of teaching.

Besides, composition and rhetoric as a field is very into assessment, sometimes (IMO) at the expense of teaching and learning about writing. Much of the work of Writing Program Administration and scholarship in the field is tied to assessment– and a lot of (most?) comp/rhet specialists end up involved in WPA work at some point in their careers. WPAs have to consider large-scale assessment issues to measure outcomes across many different sections of first year writing, and they usually have to mentor instructors on small-scale assessment– that is, how to grade and comment on all these student papers in a way that is both useful to students and that does not take an enormous amount of time. There is a ton of scholarship on assessment– how to do it, what works or doesn’t, the pros and cons of portfolios, etc. There are books and journals and conferences devoted to assessment. Plenty of comp/rhet types have had very good careers as assessment specialists. Our field loves this stuff.

Don’t get me wrong– I think assessment is important, too. There is stuff to be learned (and to be shown to administrators) from these large scale program assessments, and while the grades we give to students aren’t always an accurate measure of what they learned or how well they can write, grades are critical to making the system of higher education work. Plus students themselves are too often a major part of the problem of over-assessing. I am not one to speak about the “kids today” because I’ve been teaching long enough to know students now are not a whole lot different than they were 30 years ago. But one thing I’ve noticed in recent years– I think because of “No Child Left Behind” and similar efforts– is the extent to which students nowadays seem puzzled about embarking on almost any writing assignment without a detailed rubric to follow.

But again, assessing writing is not the same thing as fostering an environment where students can learn more about writing, and it certainly is not how writing worth reading is created. I have never read an essay which mattered to me written by someone closely following the guidance of a typical  assignment rubric. It’s really easy as a teacher to forget that, especially while trying to make the wheels of a class continue to turn smoothly with the help of tools like rubrics. As a teacher, I have to remind myself about that all the time.

The third thing: as long as writing teachers believe more in essays than in papers and as long as they are more concerned with creating learning opportunities rather than sites for assessment, “robo-grader” technology of the sort described in this NPR story is kind of irrelevant– and it might even be helpful.

I blogged about this several years ago here as well, but it needs to be emphasized again: this software is actually pretty limited. As I understand it, software like this can rate/grade the response to a specific essay question– “in what ways did the cinematic techniques of Citizen Kane revolutionize the way we watch and understand movies today”– but it is not very good at more qualitative questions– “did you think Citizen Kane was a good movie?”– and it is not very good at all at rating/grading pieces of writing with almost no constraints, as in “what’s your favorite movie?”

Furthermore, as the NPR story points out, this software can be tricked. Les Perleman has been demonstrating for years how these robo-graders can be fooled, though I have to say I am a lot more impressed with the ingenuity shown by some students in Utah who found ways to “game” the system: “One year… a student who wrote a whole page of the letter “b” ended up with a good score. Other students have figured out that they could do well writing one really good paragraph and just copying that four times to make a five-paragraph essay that scores well. Others have pulled one over on the computer by padding their essays with long quotes from the text they’re supposed to analyze, or from the question they’re supposed to answer.” The raters keep “tweaking” the code to prevent these tricks, but of course, students will keep trying new tricks.

I have to say I have some sympathy with one of the arguments made in this article that if a student is smart enough to trick the software, then maybe they deserve a high rating anyway. We are living in an age in which it is an increasingly important and useful skill for humans to write texts in a way that can be “understood” both by other people and machines– or maybe just machines. So maybe mastering the robo-grader is worth something, even if it isn’t exactly what most of us mean by “writing.”

Anyway, my point is it really should not be difficult at all for composition and rhetoric folks to push back against the use of tools like this in writing classes because robo-graders can’t replicate what human teachers and students can do as readers: to be an actual audience. In that sense, this technology is not really all that much different than stuff like spell-checkers and grammar-checkers. I have been doing this work long enough to know that there were plenty of writing teachers who thought those tools were the beginning of the end, too.

Or, another way of putting it: I think the kind of teaching (and teachers) that can be replaced by software like this is pretty bad teaching.

Instead of banning laptops, what if we mandated them?

Oy. Laptops are evil. Again.

This time, it comes from “Leave It in the Bag,” an article in Inside Higher Ed, reporting on a study done by Susan Payne Carter, Kyle Greenberg, and Michael S. Walker, all economists at West Point (PDF). This has shown up on the WPA-L mailing list and in my various social medias as yet another example of why technology in the classrooms is bad, but I think it’s more complicated than that.

Mind you, I only skimmed this, and all of the economics math is literally a foreign language to me. But there are a couple of passages here that I find interesting and not exactly convincing to me that my students and I should indeed “leave it in the bag.”

For example:

Permitting laptops or computers appears to reduce multiple choice and short answer scores, but has no effect on essay scores, as seen in Panel D. Our finding of a zero effect for essay questions, which are conceptual in nature, stands in contrast to previous research by Mueller and Oppenheimer (2014), who demonstrate that laptop note-taking negatively affects performance on both factual and conceptual questions. One potential explanation for this effect could be the predominant use of graphical and analytical explanations in economics courses, which might dissuade the verbatim note-taking practices that harmed students in Mueller and Oppenheimer’s study. However, considering the substantial impact professors have on essay scores, as discussed above, the results in panel D should be interpreted with considerable caution. (page 17)

The way I’m reading this is that in classes where students are expected to take multiple choice tests as a result of listening to a lecture from a sage on the stage, laptops might be bad. But in classes where students are supposed to write essays (or at least answer more conceptual essay questions), laptops do no harm. So if it’s a course where students are supposed to do more than take multiple choice tests….

After describing the overall effects of students performing worse when computing technology is available, Carter, Greenberg, and Walker write:

It is quite possible that these harmful effects could be magnified in settings outside of West Point. In a learning environment with lower incentives for performance, fewer disciplinary restrictions on distracting behavior, and larger class sizes, the effects of Internet-enabled technology on achievement may be larger due to professors’ decreased ability to monitor and correct irrelevant usage. (page 26)

Hmmm…. nothing self-congratulatory about that passage, is there?

Besides the fact that there is no decent evidence that the students at West Point (or any other elite institution for that matter) are on the whole such special snowflakes that they are more immune from the “harm” of technology/distraction compared to the rest of us simpletons, I think one could just as easily make the exact opposite argument. It seems to me that it is “quite possible” that the harmful effects are more magnified in a setting like West Point because of the strict adherence to “THE RULES” and authority for all involved. I mean, it is the Army after all. Perhaps in settings where students have more freedom and are used to the more “real life” world of distractions, large class sizes, the need to self-regulate, etc., maybe those students are actually better able to control themselves.

And am I the only one who is noticing the extent to which laptop/tablet/technology use really seems to be about a professor’s “ability to monitor and correct” in a classroom? Is that actually “teaching?”

And then there’s this last paragraph in the text of the study:

We want to be clear that we cannot relate our results to a class where the laptop or tablet is used deliberately in classroom instruction, as these exercises may boost a student’s ability to retain the material. Rather, our results relate only to classes where students have the option to use computer devices to take notes. We further cannot test whether the laptop or tablet leads to worse note taking, whether the increased availability of distractions for computer users (email, facebook, twitter, news, other classes, etc.) leads to lower grades, or whether professors teach differently when students are on their computers. Given the magnitude of our results, and the increasing emphasis of using technology in the classroom, additional research aimed at distinguishing between these channels is clearly warranted. (page 28)

First, laptops might or might not be useful for taking notes. This is at odds with a lot of these “laptops are bad” studies. And as a slight tangent, I really don’t know how easy it is to generalize about note taking and knowledge across large groups. Speaking only for myself: I’ve been experimenting lately with taking notes (sometimes) with paper and pen, and I’m not sure it makes much difference. I also have noticed that my ability to take notes on what someone else is saying — that is, as opposed to taking notes on something I want to say in a short speech or something– is now pretty poor. I suppose that’s the difference between being a student and being a teacher, and maybe I need to relearn how to do this from my students.

This paragraph also hints at another issue with all of these “laptops are bad” pieces: “whether professors teach differently when students are on their computers.” Well, maybe that is the problem, isn’t it? Maybe it isn’t so much that students are spending all of this time being distracted by laptops, tablets, and cell-phones– that is, that students are NOT giving professors the UNDIVIDED ATTENTION they believe (nay, KNOW) they deserve. Maybe the problem is professors haven’t figured out that the presence of computers in classrooms means we have to indeed “teach differently.”

But the other thing this paragraph got me thinking about was the role of technology in the courses I teach, where laptops/tablets are “used deliberately in classroom instruction.” This paragraph suggests that the opposite of banning laptops might be just as true: in other words, what if, instead of banning laptops from a classroom, the professor mandated that students each have a laptop open at all times in order to take notes, to respond to on-the-fly quizzes from the professor, and to look stuff up that comes up in the discussions?

It’s the kind of interesting mini-teaching experiment I might be able to pull off this summer. Of course, if we extend this kind of experiment to the realm of online teaching– and one of my upcoming courses will indeed be online– then we can see that in one sense, this isn’t an experiment at all. We’ve been offering courses where the only way students communicate with the instructor and with other students has been through a computer for a long time now. But the other course I’ll be teaching is a face to face section of first year writing, and thus ripe for this kind of experiment. Complicating things more (or perhaps making this experiment more justifiable?) is the likelihood that a significant percentage of the students I will have in this section are in some fashion “not typical” of first year writing at EMU– that is, almost all of them are transfer students and/or juniors or seniors. Maybe making them have those laptops open all the time could help– and bonus points if they’re able to multitask with both their laptop and their cell phones!

Hmm, I see a course developing….

A “Modest Proposal” Revisited: Adjuncts, First Year Composition, and MOOCs

I’m posting this at 37,000 or so feet, on my way back from Italy from an international conference on MOOCs sponsored by the University of Naples (more accurately, Federica WebLearning). Normally, I wouldn’t pay as much as I’m paying for wifi on a plane, but I wanted to stay awake as much as possible to get back on USA time by Tuesday morning, and because I had some school/teaching work to do. Plus there’s a weird extra seat next to me because my row of three seats has a row of four seats right in front of it.

Anyway, I’ll be blogging about that in the next few days once I go through my notes and collect my thoughts about the conference and about Italy. In the meantime though, I wanted to post this. I was trying to place this as a “thought piece” in something like Inside Higher Ed and/or The Atlantic, which is why there is more “apparatus” explaining the field and the state of adjunct labor in fycomp than is typical of the things I write about here. But nobody else wanted it/wanted to pay me to publish it, so it will find a home here.


“Rhetoric and the Digital Humanities,” Edited by Jim Ridolfo and Bill Hart-Davidson

I’ve blogged about “the Digital Humanities” several times before. Back in 2012, I took some offense at the MLA’s “discovery” of “digital scholarship” because they essentially ignored the work of anyone other than literature scholars– in other words, comp/rhet folks who do things with technology need not apply. Cheryl Ball had an editorial comment in Kairos back then that I thought was pretty accurate– though it’s also worth noting that in the very same issue of Kairos, Ball also praised the MLA conference for its many “digital humanities” presentations.

Almost exactly a year ago, I had a post here called “If you can’t beat ’em and/or embracing my DH overlords and colleagues,” in which I was responding to a critique by Adam Kirsch that Marc Bousquet had written about. Here’s a long quote from myself that I think is all the more relevant now:

I’ve had my issues with the DH movement in the past, especially as it’s been discussed by folks in the MLA– see here and especially here.  I have often thought that a lot of the scholars in digital humanities are really literary period folks trying to make themselves somehow “marketable,” and I’ve seen a lot of DH projects that don’t seem to be a whole lot more complicated than putting stuff up on the web. And I guess I resent and/or am annoyed with the rise of digital humanities in the same way I have to assume the folks who first thought up MOOCs (I’m thinking of the Stephen Downes and George Siemens of the world) way before Coursera and Udacity and EdX came along are annoyed with the rise of MOOCs now. All the stuff that DH-ers talk about as new has been going on in the “computers and writing”/”computers and composition” world for decades and for these folks to come along now and to coin these new terms for old practices– well, it feels like a whole bunch of work of others has been ignored and/or ripped off in this move.

But like I said, if you can’t beat ’em, join ’em. The “computers and writing” world– especially vis a vis its conference and lack of any sort of unifying “organization”– seems to me to be fragmenting and/or drifting into nothingness at the same time that DH is strengthening to the point of eliciting backlash pieces in a middle-brow publication like the New Republic. Plenty of comp/rhet folk have already made the transition, at least in part. Cheryl Ball has been doing DH stuff at MLA lately and had an NEH startup grant on multimedia publication editing; Alex Reid has had a foot in this for a few years now; Collin Brooke taught what was probably a fantastic course this past spring/winter, “Rhetoric, Composition, and Digital Humanities;” and Bill Hart-Davidson and Jim Ridolfo are editing a book of essays that will come out in the fall (I think) called Rhetoric and the Digital Humanities. There’s an obvious trend here.

And this year, I’m going to HASTAC instead of the C&W conference (though this mostly has to do with the geographic reality that HASTAC is being hosted just up the road from me at Michigan State University) and I’ll be serving as the moderator/host of a roundtable session about what the computers and writing crowd can contribute to the DH movement.

In other words, I went into reading Jim and Bill’s edited collection Rhetoric and the Digital Humanities with a realization/understanding that “Digital Humanities” has more or less become the accepted term of art for everyone outside of computers and writing, and if the C&W crowd wants to have any interdisciplinary connection/relevance to the rest of academia, then we’re going to have to make connections with these DH people. In the nutshell, that’s what I think Jim and Bill’s book is about. (BTW and “full disclosure,” as they say: Jim and Bill are both friends of mine, particularly Bill, who I’ve known from courses taken together, conferences, project collaborations, dinners, golf outings, etc., etc., etc. for about 23 or so years).


My iPad and “killer apps” for academics, almost four years later

I was checking out some of the statistics on hits and such to this site a week or so ago, and one thing that surprised me is that the most popular “all time” post I have on the site (at least since the WordPress plugin Jetpack started keeping track of things) is not about MOOCs, academic life, teaching, cooking, etc. Rather, the most popular single post on this site is “iPad “killer apps” for Academics (maybe),” which I posted on April 10, 2010.

Of course, it’s also important to point out that no post on this site is really all that popular. I average about 50 or so views a day, sometimes up to 100 when I post something that people find interesting. The most views this site ever received in a single day was 737, and even this most popular of posts on iPads has only received 4,794 “all time” views (as of this writing). Sure, that’s more people than have ever attended all of the conference presentations I’ve ever given and it’s probably more “views” than any print piece of scholarship I’ve published. But these are still not exactly the kind of traffic numbers that are going to allow me to quit the day job and just blog full-time.

(Oh, and as another thought/tangent: the archives for this site go back eleven years now. I’ve slowed down quite a bit, but damn, that’s a lot of blogging. Another sabbatical project might involve going back to read through all that and/or “mine” it a bit for text/writing I can repurpose.)

Anyway, a few years later and after I bought my first iPad, what do I think now of what I said then?


The Comcast Strikes Back

Complaining about Comcast is sort of like complaining about death or taxes and about as common.  I know that. But because of a Twitter exchange I had, I thought I’d add to the genre generally and specifically to my latest Twitter follower, @ComcastLisa. This is more for her than anything else, so if you have had your fill of internet posts complaining about Comcast, feel free to move along. If you’re a glutton for punishment, read on.


Enough with the “no laptops in classrooms” already

There has been a rash of “turn off the laptop” articles in various places in the educational media, but I think what has pushed me over the edge and motivated this post is Clay Shirky’s “Why I Just Asked My Students To Put Their Laptops Away” on Medium. In the nutshell, Shirky went to the no laptop camp because (he says) students can’t multitask and students are too easily distracted by the technology, particularly with the constant alerts from things like Facebook.

Enough already.

First off, while I am no expert regarding multitasking, it seems to me that there are a lot of different layers to multitasking (or perhaps it would make more sense to say attention on task) and most of us perform some level of multitasking all the time.  Consider driving. I think it’s always a bad idea to be texting while actually moving in traffic because, yes, that’s too much multitasking for most people. But how about texting or checking email or social media while at a long light? I do it all the time. Or how about talking on the phone? For me, it’s easy to talk on the phone while driving if I am using headphones or if I’m driving a familiar route in normal conditions. When I’m driving an unfamiliar route in bad weather or in heavy traffic, not so much.

Second, distraction and not paying attention in class isn’t exactly new. When I was in high school, I sat in the back of the room in that chemistry class I was required to take and read paperbacks “hidden” under the table. Students used to pass these things called “notes” on paper. Students did and still do whisper to each other in distracting ways. As both a college student and a college teacher (certainly as a GA way back when), I’ve been with/had students who were distracted by and multitasking with magazines, newspapers, other people, napping, etc., etc.

I agree with Shirky and some of the articles he cites that what’s interesting and different about contemporary electronic devices generally and social media in particular is that these are designed to distract us, to break our concentration. I routinely experience the sort of instant and satisfying gratification suggested in the abstract of this article. But to suggest that teachers/professors can solve this attention problem by asking students to temporarily turn off their laptops and pay attention to the sage on the stage strikes me as both naive and egotistical.

So here are three tips for Clay and other would-be haters for how to mentally adjust to the inevitability of laptops in their classrooms.

Number one, stop lecturing so much. When professors take the “stand and deliver” approach to “teaching,” the laptops come out. And why shouldn’t they? In an era where anyone can easily record a video and/or audio of a lecture that can be “consumed” by students on their own time, why should they sit and pay attention to you droning on?

I realize this is easy for me to say since I teach small classes with 25 or fewer students, but there are lots of ways to break up the talking head in a large lecture hall class too. Break students into groups to discuss the reading. Ask students to take a moment to write about a question or a reading and then share their responses. Require your students to discuss and respond. Use the time in class to actually do work with the laptops, individually and collaboratively. Just stop thinking that teaching means standing there and talking at them.

Number two, be more interesting. If as a teacher (or really, just a speaker) you notice a large percentage of students not paying attention and turning to laptops or cell phones or magazines or napping, there’s a pretty good chance you’re being boring. I notice this in my own teaching all the time: when my students and I are interested in a conversation or an activity, the laptops stay closed. When I start to drone on or it otherwise starts getting boring, I see the checks on Facebook or Twitter or ESPN or whatever. I use that as a cue to change up the discussion, to get more interesting.

Number three, “Let it Go.” Because here’s the thing: there’s really nothing professors can do (at least in the settings where I teach) to completely eliminate these kinds of distractions and multitasking and generally dumb stuff that students sometimes do. Students are humans and humans are easily distracted. So instead of spending so much time demanding perfect attention, just acknowledge that most of us can get a lot done with a laptop open. If you as the teacher are not the center of the universe, it’ll be okay.

A Comcast Customer Service Experience: Screwing Up in Reverse

We had been having problems with our Comcast/Xfinity/Whatever it’s called internet access for a while, and my calls to Comcast to check on the service were pretty futile (“Is your modem plugged in? Is your computer on? You should unplug your modem and then plug it back in. Okay, is your modem plugged in?” and repeat).

I finally got around to doing some “research” with the Google and, according to some web site I found (so obviously it must be true), our modem was no longer supported. And actually, that did have a ring of truth to it because that modem had to be at least six years old, maybe a lot older. So off to Best Buy and then back home with a new modem.

I knew that there was a reactivation process with the modem, so I was prepared for being on the phone with Comcast again. I made it through the electronic screening gauntlet and started talking to a nice human. “I need to set up a new modem,” I said. “I can help you with that,” she said. We were off to the races.

Things started turning bad almost immediately when the “tech” person asked me for the number on the back of the modem. “Which one? There are three of them”– that is, a couple of different device serial numbers of some sort and (just to skip ahead a bit, the one that Comcast actually needed) the Media Access Control (MAC) address. She asked for all of them, which took a while because a) it was a crappy phone connection and b) I’m pretty sure this person was not in the U.S. So there was a lot of me saying “D! I said D!” and her saying “Did you say B? or G?” But fine, eventually we worked it out and she had all the numbers she could ever need.

Then after about twenty minutes of numbers and waiting for something, my increasingly unfriendly and less competent customer service person said something like “oh, no!” in a low voice. “What?” I asked. “The system went down, I… I… I’m sorry this is taking so long,” she said. We were about 40 minutes in at this point. I’d had it.

“You know, this is really stupid. I don’t think you know what you’re doing here,” I said in my testy angry voice. She sighed, and then– click– hung up.

oh no she didn’t….

So I called right back, ran through the Comcast phone tree, got to a human. “How can I help you?” she asked. “I just got hung up on by another customer service person. That’s completely unacceptable and I would like to speak with a supervisor,” I said.

“Oh, I’m so sorry that happened sir, but I’m sure I can help you with–”

“I JUST GOT HUNG UP ON BY ANOTHER CUSTOMER SERVICE PERSON. THAT IS COMPLETELY UNACCEPTABLE AND I WOULD LIKE TO SPEAK WITH A SUPERVISOR!”  I said a bit more forcefully.

That worked. I got on with a supervisor (or at least someone who said he was a supervisor) who got the modem running. But even better: the supervisor dude apologized and completely jacked up our service for all the trouble. So now, we’ve got (for the next year at least) HBO, Showtime, a bunch of channels I’m sure we’ll never watch, and some higher speed of internet access. There must be some kind of checkbox on a service screen at Comcast that he clicked to give us everything.

So the moral of the story:

  • If you get a new modem for your Comcast internet set-up, plan on spending the better part of an afternoon to get it done.
  • Ask for the supervisor, especially if they hang up on you.
  • And hey, Comcast supervisor dude: good job of turning this into a positive.

In defense of machine grading

In defense of machine grading?!?! Well, no, not really. But I thought I’d start a post with a title like that. You know, provocative.

There has been a bit of a ruckus on WPA-L for a while now in support of a petition against machine grading and for human readers at the web site humanreaders.org, and I of course agree with the general premise of what is being presented on that site. Machine grading software can’t recognize things like a sense of humor or irony, it tends to favor text length over conciseness, it is fairly easy to circumvent with gibberish kinds of writing, it doesn’t work in real world settings, it fuels high stakes testing, etc., etc., etc. I get all that.
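To make the “favors length” and “easy to circumvent with gibberish” complaints concrete, here’s a toy scorer of my own invention– not any real product’s algorithm– that rewards word count and vocabulary size, the kinds of surface proxies critics say these systems lean on:

```python
# Toy illustration (hypothetical, not a real grading product): a naive
# essay scorer that counts only total words and unique words.

def naive_score(essay: str) -> float:
    """Score an essay on word count and unique-word count alone."""
    words = essay.lower().split()
    if not words:
        return 0.0
    length_points = min(len(words) / 100, 5.0)    # longer looks "better"
    vocab_points = min(len(set(words)) / 50, 5.0) # wordier looks "better"
    return length_points + vocab_points

concise = "Brevity is the soul of wit."
# Padded gibberish: long, varied, and meaningless.
padded = " ".join(f"word{i} utilize leverage paradigm" for i in range(100))

# The gibberish outscores the concise sentence.
print(naive_score(concise) < naive_score(padded))
```

Real scoring engines use far more features than this, but the underlying worry is the same: a program rating proxies for quality can be gamed by writing that optimizes the proxies.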

We should keep pushing back against machine grading for all of these reasons and more. Automated testing furthers the interests of Edu-business selling this software and does not help students nor teachers, at least not yet.  I’m against it, I really am.

However:

  • It seems to me that we’re not really talking about grading per se but about teaching, and the problem is that writing pedagogy probably doesn’t work when the assessment/grading part of things is completely separated from the teaching part of things. This is one of the differences between assigning writing and teaching writing.
  • There’s a bit of a catch-22 going on here. Part of the problem was that writing teachers complained (rightly so, I might add) about big standardized tests of various sorts not having writing components. So writing was added to a lot of these tests. However, the only way to assess the thousands of texts generated through this testing is with specifically trained readers (see my next point) or with computer programs. So we can either skip the writing altogether on these tests or accept a far-from-perfect grading mechanism.
  • I’ve participated in various holistic/group grading sessions before (though it’s been a long time), which is how they used to do this sort of thing before the software solutions. The way I recall it working was dozens and dozens of us were trained to assign certain ratings for essays based on a very specific rubric.  We were, in effect, programmed, and there was no leeway to deviate from the guidelines.  So I guess what I’m getting at is in these large group assessment circumstances, what’s the difference if it’s a machine or a person?
  • This software doesn’t work that well yet, especially in uncontrolled circumstances: that is, grading software is about as accurate as humans with these standardized prompt responses written in specific testing situations, but it doesn’t work well at all as an off-the-shelf rating solution for just any chunk of writing that students write for classes or that writers write for some other reason. But the key word in that last sentence is yet, because this software has gotten (and is getting) a lot better. So what happens when it gets as good as a human reader (or at least good enough)? Will we accept the role of this evaluation software much in the same way we now all accept spell checking in word processors? (And by the way, I am old enough to remember resistance among English teacher-types to that, too– not as strong as the resistance to machine grading, but still.)
  • As a teacher, my least favorite part of teaching is grading. I do not think that I am alone in that sentiment. So while I would not want to outsource my grading to someone else or to a machine (because again, I teach writing, I don’t just assign writing), I would not be against a machine that helps make grading easier. So what if a computer program provided feedback on a chunk of student writing automatically, and then I as the teacher followed behind those machine comments, deleting ones I thought were wrong or unnecessary, expanding on others I thought were useful? What if a machine printed out a report that a student writer and I could discuss in a conference? And from a WPA point of view, what if this machine helped me provide professional development support to GAs and part-timers in their commenting on students’ work?