AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is that my students and I have spent the last month or so noodling around with the software and reading about both the potentials and dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and I thought it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And my students and I still had to edit the different pieces together, write transitions between AI-generated chunks of text, and so forth.

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool.  These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true of Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”
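The “predict what words should come next” idea in that quote can be made concrete with a toy sketch. To be clear, this is my own illustration and the tiny “corpus” is made up: a real LLM uses a neural network with billions of parameters trained on huge amounts of web text, not a simple bigram count like this. But the basic logic– tally which words follow which, then predict from the tallies– is the same kind of “statistical regularity” Vincent describes:

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" -- a real LLM is trained on huge reams of
# text scraped from the web, not eighteen words.
corpus = (
    "the student wrote the essay . "
    "the student read the essay . "
    "the student read the book ."
).split()

# Record which word follows which -- the "statistical regularities" in the data.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word):
    """Return the word most likely to come next, based only on the counts."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))      # -> student
print(predict_next("student"))  # -> read
```

Notice that the model “knows” nothing about students or essays; it only knows the counts. That is exactly why these systems can produce fluent sentences that are factually empty or wrong.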

I think this limitation (along with the limitation that GPT-3 and ChatGPT are not capable of searching the internet) makes using GPT-3 as a plagiarism tool in any kind of research writing class kind of a deal-breaker. It certainly would not get students far in most sections of freshman comp where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts by making writing assignments that are specific and that require an investment in things like prewriting and a series of assignments and activities that are “scaffolding” for a larger writing assignment. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Most poorly designed writing assignments are, in effect, good examples of prompts you could enter into GPT-3. The results are definitely impressive, but I don’t think they’re quite useful enough to produce work a would-be cheater can pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000-word requirement, they both kind of read like… well, like a robot wrote them. I think that most instructors who received this essay from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well-researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and I then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing and it changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have only spent so much time experimenting with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” payment plan, I might learn more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

A lot of what Leonhardt said in ‘Not Good for Learning’ is just wrong

I usually agree with David Leonhardt’s analysis in his New York Times newsletter “The Morning” because I think he does a good job of pointing out how both the left and the right have certain beliefs about issues– Covid in particular for the last couple years, of course– that are sometimes at odds with the evidence. But I have to say that this morning’s newsletter and the section “Not Good For Learning” ticks me off.

While just about every K-12 school went online when Covid first hit in spring 2020, a lot of schools/districts resumed in-person classes in fall 2020, and a lot did not. Leonhardt said:

These differences created a huge experiment, testing how well remote learning worked during the pandemic. Academic researchers have since been studying the subject, and they have come to a consistent conclusion: Remote learning was a failure.

Now, perhaps I’m overreacting to this passage because of my research about teaching online at the college-level, but the key issue here is he’s talking about K-12 schools that had never done anything close to online/remote instruction ever before. He is not talking about post-secondary education at all, which is where the bulk of remote learning has worked just fine for 125+ years. Maybe that’s a distinction that most readers will understand anyway, but I kind of doubt it, and not bringing that up at all is inaccurate and just sloppy.

Obviously, remote learning in the vast majority of K-12 schools went poorly during Covid and in completely predictable ways. Few of these teachers had any experience or training to teach online, and few of these school districts had the kinds of technologies and tools (like Canvas and Blackboard and other LMSes) to support these courses. This has been a challenge at the college level too, but besides the fact that I think a lot more college teachers at various levels and various types of institutions had at least some experience teaching online before Covid and most colleges and universities have more tech support, a lot (most?) of college teachers were already making use of an LMS tool and using a lot more electronic tools for essays and tests (as opposed to paper) in their classes.

The students are also obviously different. When students in college take classes online, it’s a given that they will have the basic technology of a laptop and easy access to the internet. It’s also fairly clear from the research (and I’ve seen this in my own experiences teaching online) that the students who do best in these formats are more mature and more self-disciplined. Prior to Covid, online courses were primarily for “non-traditional” students who were typically older, out in the workforce, and with responsibilities like caring for children or others, paying a mortgage, and so forth. These students, who are typically juniors/seniors or grad students, have been going to college for a while, they understand the expectations of a college class, and (at least the students who are most successful) have what I guess I’d describe as the “adulting” skills to succeed in the format. I didn’t have a lot of first and second year students in online classes before Covid, but a lot of the ones I did have during the pandemic really struggled with these things. Oh sure, I did have some unusually mature and “together” first year students who did just fine, but a lot of the students we have at EMU at this level started college underprepared for the expectations, and adding on the additional challenge of the online format was too much.

So it is not even a teeny-weeny surprise that a lot of teenagers/secondary students– many of whom were struggling to learn and succeed in traditional classrooms– did not succeed in hastily thrown together and poorly supported online courses, and do not even get me started on the idea of grade school kids being forced to sit through hours of Zoom calls. I mean honestly, I think these students probably would have done better if teachers had just sent home worksheets and workbooks and other materials to the kids and the parents to study on their own.

I think a different (and perhaps more accurate) way to study the effectiveness of remote learning would be to look at what some K-12 schools were doing before Covid. Lots and lots of kids and their parents use synch and asynch technology to supplement home schooling, and programs like the Michigan Online School have been around for a while now. Obviously, home schooling or online schooling is not right for everyone, but these programs are also not “failures.”

Leonhardt goes on to argue that schools that serve poor students and/or non-white students went remote for longer than other schools. Leonhardt claims there were two reasons for this:

Why? Many of these schools are in major cities, which tend to be run by Democratic officials, and Republicans were generally quicker to reopen schools. High-poverty schools are also more likely to have unionized teachers, and some unions lobbied for remote schooling.

Second, low-income students tended to fare even worse when schools went remote. They may not have had reliable internet access, a quiet room in which to work or a parent who could take time off from work to help solve problems.

First off, what Leonhardt seems to forget is that Covid was most serious in “the major cities” in this country, and also among populations that were non-white and that were poor. So of course school closings were more frequent in these areas because of Covid.

Second, while it is quite easy to complain about the teacher unions, let us all remember it was not nearly as clear in Fall 2020 as Leonhardt is implying that the risks of Covid in the schools were small. It did turn out that those settings weren’t as risky as we thought, but at the same time, that “not as risky” analysis primarily applies to students. A lot of teachers got sick and a few died. I wrote about some of this back in February here. I get the idea that most people who were demanding their K-12 schools open immediately only had their kids in mind (though a lot of these parents were also the same ones adamantly opposed to mask and vaccine mandates), and if I had a kid still in school, I might feel the same way. But most people (and I’d put Leonhardt in this camp in this article) didn’t think for a second about the employees, and at the end of the day, working in a public school setting is not like being in the ministry or some other job where we expect people to make huge personal sacrifices for others. Being a teacher is a white collar job. Teachers love to teach, sure, but we shouldn’t expect them to put their own health and lives at any level of risk– even if it’s small– just because a lot of parents haven’t sorted out their childcare situations.

Third, the idea that low-income students fared worse in remote classes (and I agree, they certainly did) is bad, but that has nothing to do with why they spent more time online in the first place. That just doesn’t make sense.

Leonhardt goes on:

In places where schools reopened that summer and fall, the spread of Covid was not noticeably worse than in places where schools remained closed. Schools also reopened in parts of Europe without seeming to spark outbreaks.

I wrote about this back in February: these schools didn’t reopen because they never closed! They tried the best they could and often failed, but as far as I can tell, no K-12 school in this country, public or private, just closed and told folks “we’ll reopen after Covid is over.” Second, most of the public schools (and universities as well) that went back to at least some f2f instruction in Fall 2020 were in parts of the country where being outside and/or leaving classroom windows open is a lot easier than in Michigan, and/or most of these schools had the resources to do things like create smaller classes for social distancing, to install ventilation equipment, and so forth.

Third– and I cannot believe Leonhardt doesn’t mention this because I know this is an issue he has written about in the past– the comparison to what went on with schools in Europe is completely bogus. In places like Germany and France, they put a much much higher priority on opening schools– especially as compared to things like restaurants and bars and other places where Covid likes to spread. So they kept those kinds of places closed longer so the chances of a Covid outbreak in the schools were smaller. Plus Europeans are much MUCH smarter about things like mask and vaccine mandates too.

No, the pandemic was not good for learning, but it was not good for anything else, either. It wasn’t good for our work/life balances, our mental health, a lot of our household incomes, on and on and on. We have all suffered mightily for it, and I am certain that as educators of all stripes study and reflect on the last two years, we’ll all learn a lot about what worked and what didn’t. But after two years of trying their fucking best to do the right things, there is no reason to throw K-12 teachers under the bus now.

My CCCCs 2022

Here’s a follow-up (of sorts) on my CCCCs 2022 experiences– minus the complaining, critiques, and ideas on how it could have been better. Oh, I have some thoughts, but to be honest, I don’t think anyone is particularly interested in those thoughts. So I’ll keep that to myself and instead focus on the good things, more or less.

When the CCCCs went online for 2022 and I was put in the “on demand” sessions, my travel plans changed. Instead of going to Chicago on my own to enjoy conferencing, my wife and I decided to rent a house on a place called Seabrook Island in South Carolina near Charleston. We both wanted to get out of Michigan to someplace at least kind of warm, and the timing on the rental and other things was such that we were on the road for all the live sessions, so I missed out on all of that. But I did take advantage of looking at some of the other on demand sessions to see what was there.

Now, I have never been a particularly devout conference attendee. Even at the beginning of my career attending that first CCCCs in 1995 in Washington, DC, when everything was new to me, I was not the kind of person who got up at dawn for the WPA breakfast or even for the 9 am keynote address, the kind of conference goer who would then attend panels until the end of the day. More typical for me is to go to about two or three other panels (besides my own, of course), depending on what’s interesting and, especially at this point of my life, depending on where it is. I usually spend the rest of the time basically hanging out. Had I actually gone to Chicago, I probably would have spent at least half a day doing tourist stuff, for example.

The other thing that has always been true about the CCCCs is even though there are probably over 1000 presentations, the theme of the conference and the chair who puts it together definitely shapes what folks end up presenting about. Sometimes that means there are fewer presentations that connect to my own interests in writing and technology– and as of late, that specifically has been about teaching online. That was the case this year. Don’t get me wrong, I think the theme(s) of identity, race, and gender invoked in the call are completely legitimate and important topics of concern, and I’m interested in them both as a scholar and just as a human being. But at the same time, that’s not the “work” I do, if that makes sense.

That said, there’s always a bit of something for everyone. Plus the one (and only, IMO) advantage of the on demand format is the materials are still accessible through the CCCCs conference portal. So while enjoying some so-so weather in a beach house, I spent some time poking around the online program.

First off, for most of the links below to work, you have to be registered for and signed into the CCCCs portal, which is here:

https://app.forj.ai/en?t=/tradeshow/index&page=lobby&id=1639160915376

If you never registered for the conference at all, you won’t be able to access the sessions, though the program of on-demand sessions is available to anyone here. As I understand it, the portal will remain open/accessible for the month of March (though I’m not positive about that). Second, the search feature for the portal is… let’s just say “limited.” There’s no connection between the portal and the conference on-demand program, so you have to look through the program and then do a separate search of the portal opened in a different browser tab. The search engine doesn’t work at all if you include any punctuation, and for the most part, it only returns results when you enter a few words and not an entire title. My experience has been it seems to work best if you enter the first three words of the session title. Again, I’m not going to complain….

So obviously, the first thing I found/went to was my own panel:

OD-301 Researching Communication in Practice

There’s not much there. One of the risks of proposing an individual paper for the CCCCs rather than as part of a panel or round table discussion is how you get grouped with other individual submissions. Sometimes, this all ends up working out really well, and sometimes, it doesn’t. This was in the category of “doesn’t.” Plus it looks to me like three of the other five people on the program for this session essentially bailed out and didn’t post anything.

Of course, my presentation materials are all available here as Google documents, slides, and a YouTube video.

To find other things I was interested in, I did a search for the key terms “distance” (as in distance education– zero results) and “online,” which had 54 results. A lot of those sessions– a surprising number to me, actually– involved online writing centers, both in terms of adapting to Covid but also in terms of shifting more work in writing centers to online spaces. Interesting, but not quite what I was looking for.

So these are the sessions I dug into a bit more and I’ll probably be going back to them in the next weeks as I keep working on my “online and the new normal” research:

OD-45 So that just happened…Where does OWI go from here?: Access, Enrollment, and Relevance

Really nice talk that sums up some of the history and talks in broad ways about some of the experiences of teaching online in Covid. Of course, I’m also always partial to presentations that agree with what I’m finding in my own research, and this talk definitely does that.

OD-211 Access and Community in Online Learning– specifically, Ashley Barry, University of New Hampshire, “Inequities in Digital Literacies and Innovations in Writing Pedagogies during COVID-19 Learning.”

Here’s a link to her video in the CCCCs site, and here’s a Google Slides link. At some point, I think I might have to send this PhD student at New Hampshire an email because it seems like Barry’s dissertation research is similar to what I am (kinda/sorta) trying to do with my own research about teaching online during Covid. She is working with a team of researchers from across the disciplines on what is likely a more robust albeit local study than mine, but again, with some similar kinds of conclusions.

OD-295 Prospects for Online Writing Instruction after the Pandemic Lockdown— specifically, Alexander Evans, Cincinnati State Technical and Community College, “Only Out of Necessity: The Future of Online Developmental First-Year Writing Courses in Post-Pandemic Society.”

Here’s a link to his video and his slides (which I think are accessible outside of the CCCCs portal). What I liked about Evans’ talk is that it comes from someone very new to teaching at the college level in general, new to community college work, and (I think) new to online teaching as well. A lot of it is about what I see as the wonkiness of the situation (not uncommon at a lot of community colleges for classes like developmental writing) where instructors more or less get handed a fully designed course and are told “teach this.” I would find that incredibly difficult, and part of Evans’ argument here is that if his institution is really going to give people access to higher education, then it needs to offer this class in an online format– and not just during the pandemic.

So that was pretty much my CCCCs experience for 2022. I’m not sure when (or if) I’ll be back.


CCCCs 2022 (part 1?)

Here is a link (bit.ly/krause4c22) to my “on demand” presentation materials for this year’s annual Conference for College Composition and Communication. It’s a “talk” called “When ‘You’ Cannot be ‘Here:’ What Shifting Teaching Online Teaches Us About Access, Diversity, Inclusion, and Opportunity.” As I wrote in the abstract/description of my session:

My presentation is about a research project I began during the 2020-21 school year titled “Online Teaching and the ‘New Normal.’” After discussing broadly some assumptions about online teaching, I discuss my survey of instructors teaching online during Covid, particularly the choice to teach synchronously versus asynchronously. I end by returning to the question of my subtitle.

I am saying this is “part 1?” because I might or might not write a recap post about the whole experience. On the one hand, I have a lot of thoughts about how this is going so far, how the online experience could have been better. On the other hand (and I’ve already learned this directly and indirectly on social media), the folks at NCTE generally seem pretty stressed out and overwhelmed and everything else, and it kind of feels like any kind of criticism, constructive or otherwise, will be taken as piling on. I don’t want to do that.

I’m also not sure there will be a part 2 because I’m not sure how much conferencing I’ll actually be able to do. When the conference went all online, my travel plans changed. Now I’m going to be on the road during most of the live or previously recorded sessions, so most of my engagement will have to be in the on demand space. Though hopefully, there will be some recordings of events available for a while, things like Anita Hill’s keynote speech.

The thing I’ll mention for now is my reasons for sharing my materials in the online/on demand format outside the walled garden of the conference website itself. I found out that I was assigned to present in the “on demand” format of the conference– if I do write a part 2 to this post, I’ll come back to that decision process then. In any event, the instructions the CCCCs provided asked presenters to upload materials– PDFs, PPT slides, videos, etc.– to the server space for the conference. I emailed “ccccevents” and asked if that was a requirement. This was their response:

We do suggest that you load materials directly into the platform through the Speaker Ready Room for content security purposes (once anyone has the link outside of the platform, they could share it with anyone). However, if you really don’t want to do that, you could upload a PDF or a PPT slide that directs attendees to the link with your materials.

The “Speaker Ready Room” is just what they call the portal page for uploading stuff. The phrase I puzzled over was “content security purposes” and trying to prevent the possibility that anyone anywhere could share a link to my presentation materials. Maybe I’m missing something, but isn’t that kind of the point of scholarship? That we present materials (presentations, articles, keynote speeches, whatever) in the hopes that those ideas and thoughts and arguments are made available to (potential) readers who are anyone and anywhere?

I’ve been posting web-based versions of conference talks for a long time now– sometimes as blog posts, as videos, as Google Slides with notes, etc. I do it mainly because it’s easy for me to do, I believe in as much open access to scholarship as possible, and I’m trying to give some kind of life to this work beyond the 15 minutes of me talking to (typically) less than a dozen people. I wouldn’t say any of my self-published conference materials have made much difference in the scholarly trajectory of the field, but I can tell from some of the tracking stats that these web-based versions of talks get many times more “hits” than the size of the audience at the conference itself. Of course, that does not really mean that the 60 or 100 or so people who clicked on a link to a slide deck are nearly as engaged an audience as the 10 people (plus other presenters) who were actually sitting in the room when I read my script, followed by a discussion. But it’s better than not making it available at all.

Anyway, we’ll see how this turns out.

“Synch Video is Bad,” perhaps a new research project?

As Facebook has been reminding me far too often lately, things were quite different last year. Last fall, Annette and I both had “faculty research fellowships,” which meant that neither of us was teaching because we were working on research projects. (It also meant we did A LOT of travel, but that’s a different post). I was working on a project that was officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” a project I always referred to as the “Classroom Tech Bans are Bullshit” project.

It was going along well, albeit slowly. I gave a conference presentation about it all at the Great Lakes Writing and Rhetoric Conference in September, and by early October, I was circulating a snowball sampling survey to students and instructors (via mailing lists, social media, etc.) about their attitudes toward laptops and devices in classes. I blogged about it some in December, and while I wasn’t making as much progress as quickly as I would have preferred, I was getting together a presentation for the CCCCs and getting ready to ramp up the next steps of this: sorting through the results of the survey and contacting individuals for follow-up case study interviews.

Then Covid.

Then the mad dash to shove students and faculty into the emergency lifeboats of makeshift online classes, kicking students out of the dorms with little notice, and a long and troubling summer of trying to plan ahead for the fall without knowing exactly what universities were going to do about where/in what mode/how to hold classes. Millions of people got sick, hundreds of thousands died, the world economy descended into chaos. And Black Lives Matter protests, Trump descending further into madness, forest fires, etc., etc.

It all makes the debate about laptops and cell phones in classes seem kind of quaint and old-fashioned and irrelevant, doesn’t it? So now I’m mulling over starting a different but related project about faculty (and perhaps students’) attitudes about online courses– specifically about synchronous video-conference online classes (mostly Zoom or Google Meet).

Just to back up a step: after teaching online since about 2005, after doing a lot of research on best practices for online teaching, after doing a lot of writing and research about MOOCs, I’ve learned at least two things about teaching online:

  • Asynchronous instruction works better than synchronous instruction because of the affordances (and limitations) of the medium.
  • Video– particularly videos of professors just lecturing into a webcam while students (supposedly) sit and pay attention– is not very effective.

Now, conventional wisdom often turns out to be wrong, and I’ll get to that. Nonetheless, for folks who have been teaching online for a while, I don’t think either of these statements is remotely controversial or in dispute.

And yet, judging from what I see on social media, a lot of my colleagues who are teaching online this fall for the first time are completely ignoring these best practices: they’re teaching synchronous classes during the originally scheduled time of the course and they are relying heavily on Zoom. In many cases (again, based on what I’ve seen on the internets), instructors have no choice: that is, the institution is requiring that what were originally scheduled f2f classes be taught with synch video regardless of what the instructor wants to do, what the class is, and whether it makes any sense. But a lot of instructors are doing this to themselves (which, in a lot of ways, is even worse). In my department at EMU, all but a few classes are online this fall, and as far as I can tell, many (most?) of my colleagues have decided on their own to teach their classes with Zoom and synchronously.

It doesn’t make sense to me at all. It feels like a lot of people are trying to reinvent the wheel, which in some ways is not that surprising because that’s exactly what happened with MOOCs. When the big for-profit MOOC companies like Coursera and Udacity and EdX and many others got started, they didn’t reach out to universities that were already experienced with online teaching. Instead, they turned to themselves and their peer institutions– Stanford, Harvard, UC-Berkeley, Michigan, Duke, Georgia Tech, and lots of other high profile flagships. In those early TED talks (like this one from Daphne Koller and this one from Peter Norvig), it really really seems like these people sincerely believed that they were the first ones to ever actually think about teaching online, that they had stumbled across an undiscovered country. But I digress.

I think requiring students to meet online but synchronously for a class via Zoom is simply forcing a square peg into a round hole. Imagine the logical opposite situation: say I was scheduled to teach an asynchronous online class that was suddenly changed into a traditional f2f class, something that meets Tuesdays and Thursdays from 10 am to 11:45 am. Instead of changing my approach to this now different mode/medium, I decided I was going to teach the class as an asynch online class anyway. I’d require everyone to physically show up to the class on Tuesdays and Thursdays at 10 am (I have no choice about that), but instead of taking advantage of the mode of teaching f2f, I did everything all asynch and online. There’d be no conversation or acknowledgement that we were sitting in the same room. Students would only be allowed to interact with each other in the class LMS. No one would be allowed to actually talk to each other, though texting would be okay. Students would sit there for 75 minutes, silently doing their work but never allowed to speak with each other, and as the instructor, I would sit in the front of the room and do the same. We’d repeat this at every meeting the entire semester.

A ridiculous hypothetical, right? Well, because I’m pretty used to teaching online, that’s what an all-Zoom class looks like to me.

The other problem I have with Zoom is its part in policing and surveilling both students and teachers. Inside Higher Ed and the Chronicle of Higher Education both published inadvertently hilarious op-eds written to an audience of faculty about how they should maintain their own appearances and their “Zoom backgrounds” to project professionalism and respect. And consider this post on Twitter:


I can’t verify the accuracy of these rules, but it certainly sounds like they could be real. When online teaching came up in the first department meeting of the year (held on Zoom, of course), the main concern voiced by my colleagues who had never taught online before was dealing with students who misbehave in these online forums. I’ve seen similar kinds of discussions about how to surveil students from other folks on social media. And what could possibly motivate a teacher’s need to have bodily control over what their students do in their own homes, to the point of requiring them to wear fucking shoes?

This kind of “soft surveillance” is bad enough, but as I understand it, one of Zoom’s features it sells to institutions is robust data on what users do with it: who is logged in, when, for how long, etc. I need to do a little more research on this, but as I was discussing on Facebook with my friend Bill Hart-Davidson (who is in a position to know more about this both as an administrator and someone who has done the scholarship), this is clearly data that can be used to effectively police both teachers’ and students’ behavior. The overlords might have the power to make us wear shoes at all times on Zoom after all.

On the other hand…

The conventional wisdom about teaching online asynchronously and without Zoom might be wrong, and that makes it potentially interesting to study. For example, the main reason why online classes are almost always asynchronous is the difficulty of scheduling: that flexibility is what makes it possible for many students to take classes in the first place. But if you could have a class that was mostly asynchronous but with some previously scheduled synchronous meetings as part of the mix, well, that might be a good thing. I’ve tried to teach hybrid classes in the past that approach this, though I think Zoom might make this a lot easier in all kinds of ways.

And I’m not a complete Zoom hater. I started using it (or Google Meet) last semester in my online classes for one-on-one conferences, and I think it worked well for that. I actually prefer our department meetings on Zoom because it cuts down on the number of faculty who just want to pontificate about something for no good reason (and I should note I am very very much one of those faculty members, at least once in a while). I’ve read faculty justifying their use of Zoom based on what they think students want, and maybe that turns out to be true too.

So, what I’m imagining here is another snowball sample survey of faculty (maybe students as well) about their use of Zoom. I’d probably continue to focus on small writing classes because it’s my field and also because what teaching means differs across disciplines. As was the case with the “laptop bans are bullshit” project, I think I’d want to continue to focus on attitudes about online teaching generally and Zoom in particular, mainly because I don’t have the resources or skills as a researcher to do something like an experimental design that compares the effectiveness of a Zoom lecture versus a f2f one versus an asynchronous discussion on a topic– though as I type that, I think that could be a pretty interesting experiment. Assuming I could get folks to respond, I’d also want to use the survey to recruit participants for one-on-one interviews, which I think would be more revealing and relevant data, at least for the basic questions I have now:

  • Why did you decide to use a lot of Zoom and do things synchronously?
  • What would you do differently next time?

What do you think, is this an idea worth pursuing?

What We Learned in the “MOOC Moment” Matters Right Now

I tried to share a link to this post, which is on a web site I set up for my book More Than a Moment, but for some reason, Facebook is blocking that– though not this site. Odd. So to get this out there, I’m posting it here as well. –Steve

I received an email from Utah State University Press the other day inviting me to record a brief video to introduce More Than a Moment to the kinds of colleagues who would have otherwise seen the book on display in the press’ booth at the now cancelled CCCCs in Milwaukee. USUP is going to be hosting a “virtual booth” on their web site in an effort to get the word out about books they’ve published recently, including my own.

So that is where this is coming from. Along with recording a bit of video, I decided I’d also write about how I think what I wrote about MOOCs matters right now, when higher education is now suddenly shifting everything online.

I don’t want to oversell this here. MOOCs weren’t a result of an unprecedented global crisis, and MOOCs are not the same thing as online teaching. Plus what faculty are being asked to do right now is more akin to getting into a lifeboat than it is to actual online teaching, a point I write about in some detail here.

That said, I do think there are some lessons learned from the “MOOC Moment” that are applicable to this moment.

A Bit of Brainstorming About Holding The CCCCs (and other academic conferences) F2F and Online

I’m not that worried about catching Covid-19 and dying from it (though I don’t know, maybe I should be), but I can understand why people are concerned both for themselves and for others, and I can understand why there have been travel restrictions and school closures and all the rest. So while it’s probably too late to contain coronavirus and perhaps we’ve all already been exposed to it anyway, I do get why events are getting cancelled and why potentially sick people are self-quarantining and the like.

Which brings me to this year’s annual Conference on College Composition and Communication, scheduled to take place March 25-28: perfect timing for Covid-19 to have everything cancelled and all of us home and alone and constantly washing our hands, and not conferencing in Milwaukee. Well, potentially; and if the conference goes on as planned, I’m still planning to go. But that’s all still a big “if.”

Now, one of the things that’s come up a lot on Facebook and Twitter and the like is the idea of “just move it online.” I’ve been saying a version of that myself, though long before coronavirus. I know firsthand that “just move it online” is not something that just happens magically, quickly, easily, and for free. But I also have some ideas on how this might work, and because it came up on Facebook (Julie Lindquist, who is chair of the conference this year, asked me to share my thoughts) and because I’m procrastinating from grading, I thought I’d write about that.

The TL;DR version: the conference should have a web site and allow online participants to share links to their online presentations on that web site.

A few disclaimers. First, I don’t have much of a dog in this fight because while I’ve been going to the CCCCs off and on my entire career, it’s just not that important an event for me anymore. Second, I have systematically avoided getting involved in some kind of CCCC or NCTE service and I’m not planning on starting now. Maybe that is a mistake on my part, but it is what it is. And third, I’m not talking about doing away with the face to face conference. I think that’d be a bad idea. Rather, I’m just talking about giving people the chance to participate while not actually being there physically, and I’m talking about a way of preserving and sharing presentations beyond the moment of reading a paper and pointing at a slide show in a nearly empty room at a conference hotel.

Fourth– and this is an important one– the CCCCs can’t “just move it online” in less than three weeks. It is simply not enough time. Yeah, it sucks and it sucks a lot, and maybe participants could try to use Google Hangouts on their own (see below), but I think it’s too late for the CCCCs organizers to systematically create an official online presentation mode. What I’m talking about here are ideas to think about for next year and beyond because there are lots of reasons to make academic conferences more accessible beyond a pandemic.

With that, some brainstorming/ideas:

Still more on the “Classroom Tech Bans are Bullshit (or not)” project, in which I go down the tangent of note-taking

I spent most of my Thanksgiving break  back in Iowa, and along the way, I chatted with my side of the family about my faculty research fellowship project, “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” aka “Classroom Tech Bans are Bullshit.” It’s always interesting talking to my non-academic family-types about academic things like this.

“So, you’re on sabbatical right now?” Not exactly. I’m not teaching so I can spend more time on research, but I’m still expected to go to meetings and things like that. Though honestly, I’ve skipped some of that stuff too, and it’s generally okay.

“Is there some kind of expectation for what you are supposed to be researching? What happens if you don’t do it?” Well, it is a competitive process for getting these fellowships in the first place, and there’s of course an expectation that I’ll do what I proposed. And I have done that, more or less, and I will have to write a report about that soon. But the implications (consequences?) of not doing all of what I originally proposed are vague at best.

“So, you’re not really working right now?” No no no, that’s not true. I’m working quite a bit, actually. But I’m doing this work because I want to, though I’m doing this work mostly at home and often in pajamas and I have an extremely flexible schedule right now (which is why we’re going to Morocco in a few days, but that’s another story for later), so I can understand why you might ask that.

“Being a professor is kind of a weird job, isn’t it?” Yes, yes it is.

Anyway, since I last blogged about this project back in September, I’ve been a bit distracted by department politics (don’t ask) and by prepping for teaching in the Winter term, which for me involves some new twists on old courses and also a completely new prep. But the research continues.

Back in October, I put together and conducted a survey for students and faculty about their attitudes/beliefs on the use of laptops and cell phones in classes. Taking the advice I often give my grad students in situations like this, I did not reinvent the wheel and instead based this survey on similar work by Elena Neiterman and Christine Zaza who are both at the University of Waterloo in Ontario and who both (I think) work in that school’s Public Health program. They published two articles right up my alley for this project: “A Mixed Blessing? Students’ and Instructors’ Perspectives about Off-Task Technology Use in the Academic Classroom” and “Does Size Matter? Instructors’ and Students’ Perceptions of Students’ Use of Technology in the Classroom.” I emailed to ask if they would be willing to share their survey questions and they generously agreed, so thanks again!

I’ll be sorting through and presenting about the results of this at the CCCCs this year and hopefully in an article (or articles) eventually. But basically, I asked for participants on social media, the WPA-L mailing list (had to briefly rejoin that!), and at EMU. I ended up with 168 respondents, 57% students and 43% instructors, most of whom aren’t at EMU. The results are in the ballpark of/consistent with Neiterman and Zaza (based just on percentages– I have no idea if there’s a way to legitimately claim any kind of statistically significant comparison), though I think it’s fair to say both students and instructors in my survey are more tolerant and even embracing of laptops and cellphones in the classroom. I think that’s both because these are all smaller classes (Neiterman and Zaza found that size does indeed matter and devices are more accepted in smaller classes) and also because they’re writing classes. Besides the fact that writing classes tend to be activity-heavy and lecture-light (and laptops and cell phones are important tools for writing), I think our field is a lot more accepting of these technologies and frankly a lot more progressive in its pedagogy: not “sage on the stage” but “guide on the side,” the student-centered classroom, that sort of thing. I was also able to recruit a lot of potential interviewee subjects from this survey, though I think I’m going to hold off on putting together that part of the project until the new year.
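As an aside on that “statistically significant comparison” question: one standard way to compare a percentage from one survey against the same percentage from another is a two-proportion z-test. Here’s a minimal sketch in Python– the counts below are purely hypothetical (a real comparison would need the actual item-level counts from both surveys):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions.

    x1, x2 -- number of "yes" responses in each sample
    n1, n2 -- sample sizes
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical example: 60 of 96 students in one survey vs. 130 of 260
# students in another agreeing that laptops are acceptable in class.
z, p = two_proportion_ztest(60, 96, 130, 260)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The usual caveats about snowball samples apply, of course: the test assumes something like random sampling, so at best this would put an eyeball comparison on slightly firmer footing rather than settle it.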

And I’ve been thinking again about note-taking, though not so much as it relates to technology. As I’ve mentioned here before, there are two basic reasons in the scholarship for banning or limiting the use of devices– particularly laptops– in college classrooms, particularly lecture halls. One reason is about the problems of distraction and multitasking, and I do think there is some legitimacy to that. The other reason (as discussed in the widely cited Mueller and Oppenheimer study) is that it’s better to take notes in longhand than on a laptop. I think that’s complete bullshit, so I kind of set that aside.

But now I’m starting to rethink/reconsider the significance of note-taking again because of the presidential impeachment hearings. Those hearings featured a series of poised, intelligent, and dedicated diplomats and career federal professionals explaining how Trump essentially tried to blackmail the Ukrainian government into investigating Biden. One of the key things that made these people so credible was their continued reference to the detailed notes they took when they witnessed this impeachable behavior. In contrast, EU ambassador Gordon “The Problem” Sondland seemed oddly proud that he’s never been a note-taker. As a result, a lot of Sondland’s testimony included him saying stuff like “I don’t remember the details because I don’t take notes, but if it was in that person’s notes, I have no reason to doubt it.” I thought this detail (and other things about his testimony) made Sondland look simultaneously like an extremely credible witness to events and also like a complete boob.

Anyway, this made me wonder: what exactly is the definition of “good note-taking?” How do we know someone takes good (or bad) notes, and what’s the protocol for teaching/training people to take good notes?

The taking notes by hand versus on a laptop claim is shaky and (IMO) quite effectively refuted by the Kayla Morehead, John Dunlosky, and Katherine A. Rawson study, “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014).” But while that study does poke at the concept of note-taking a bit (for example, they have one group of participants not take notes at all and just closely pay attention to the TED talk lecture), everything else I’ve read seems to just take note-taking as a given. There’s broad consensus in the psych/education scholarship that taking notes is an effective way to recall information later on, be it for a test or testimony before Congress, and there also seems to be consensus that trying to write everything down is a bad note-taking strategy. But I have yet to read anything about a method or criteria for evaluating the quality of notes, nor have I read anything about a pedagogy or a protocol for teaching people how to take good notes.

I find that odd. I mean, if the basic claim that Mueller and Oppenheimer (and similar studies) are trying to make is that students take “better notes” by hand than by laptop, and if the basic claim that Morehead, Dunlosky, and Rawson (and similar studies) are trying to make is that students don’t take “better notes” by hand than by laptop, shouldn’t there be at least some minimal definition of “better notes?” Without that definition, can we really say that study participants who scored higher on the test measuring success did so because they took “better notes” rather than because of some other factor (e.g., they were smarter, they paid better attention, they had more knowledge about the subject of the lecture before the test, etc.)?

I posted about this on Facebook and tagged a few friends I have who work for the federal government, asking if there was any particular official protocol or procedure for taking notes; the answers I got back were kind of vague. On the way back home at one point, Annette and I got to talking about how we were taught to take notes. I don’t remember any sort of instruction in school, though Annette said she remembered a teacher who actually collected and I guess graded student notes. There are of course some resources out there– here’s what looks like a helpful collection of links and ideas from the blog Cult of Pedagogy— but most of these strategies seem more geared for a tutoring or learning center setting. Plus a pedagogy for teaching note taking strategies is not the same thing as research, and it certainly is not the same thing as a method for measuring the effectiveness of notes.

But clearly, I digress.

So my plan for what’s next is to do even more reading (I’m working my way back through the works cited of a couple of the key articles I’ve been working with so far), some sifting through/writing about the results, and eventually some interviews, probably via email. And maybe I’ll take up as a related project more on this question of note-taking methods. But first, there’s Morocco and next semester.

It’s been an interesting research fellowship semester for me. I’ve been quite fortunate in that in the last five years I’ve had two research fellowships and a one semester sabbatical. Those previous releases from teaching involved the specific project of my book about MOOCs, More Than A Moment (on sale now!), and thus had very specific goals/outcomes. My sabbatical was mostly about conducting interviews and securing a book contract; my last FRF was all about finishing the book.

In contrast, this project/semester was much less guided, a lot more “wondering” (I think blog posts like this one demonstrate that). It’s been a surprisingly useful time for me as a scholar, especially at a point in my career, following the intensity of getting the MOOC book done, when I was feeling pretty “done” with scholarship. I’ve got to give a lot of credit to EMU for the opportunity, and I hope they keep funding these fellowships, too.


More on the “Classroom Tech Bans Are Bullshit (or not)” Project Before Corridors

This post is both notes on my research so far (for myself and anyone else who cares), and also a “teaser” for Corridors: the 2019 Great Lakes Writing and Rhetoric Conference.  I’m looking forward to this year’s event for a couple of different reasons, including the fact that I’ve never been on campus at Oakland University.

Here’s a link to my slides— nothing fancy.

Anyway: as I wrote about back in June, I am on leave right now to get started on a brand-new research project officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” but which is more informally known as the “Classroom Tech Bans Are Bullshit” project. I give a little more detail in that June post, but basically, I have been reading a variety of studies about the impact of devices– mostly laptops, but also cellphones– in classrooms (mostly lecture halls) and how they negatively impact students (mostly on tests). I’ve always thought these studies seemed kind of bullshitty, but I don’t know of a lot of research in composition and rhetoric that refutes these arguments. So I wanted to read that scholarship and then try to apply and replicate it in writing classrooms.

So far, I’ve mostly just been reading academic articles in psychology and education journals. It’s always challenging to step just a little outside my comfort zone and do some reading in a field that is not my own. If nothing else, it reminds me why it’s important to be empathetic with undergraduates who complain about reading academic articles: it’s hard to try to figure out what’s going on in that Burkean parlor when pretty much all you can do is look through the window instead of being in the room. For me, that’s most evident in the descriptions of the statistics. I look at the explanations and squiggly lines of various formulas and just mutter “I’m gonna have to trust you on that.” And as a slight but important tangent: one of the reasons why we don’t do this kind of research in writing studies is that most people in the field feel the same about math and stats.

The other thing that has been quite striking for me is the assumptions in these articles on how the whole enterprise of higher education works. Almost all of these studies take it as a completely unproblematic given that education means a lecture hall with a professor delivering knowledge to students who are expected to (and who know how to) pay attention and who also are expected to (and who know how to) take notes on the content delivered by the lecturer. Success is measured by an end of the course (or end of the experiment) test. That’s that. In other words, most of this research assumes an approach to education that is more or less the opposite of what we assume in writing studies.

I have also figured out there are some important and subtle differences in the arguments about why laptops and cell phones ought to be banned (or at least limited) in classrooms. As I wrote back in June, the thing that perhaps motivated me the most to do this research is the argument that laptops ought to be banned from lecture halls because handwritten notes are “better.” This is the argument in the frequently cited Pam Mueller and Daniel Oppenheimer “The Pen is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking.” I think this is complete bullshit. This is a version of the question that used to circulate in the computers and writing world, whether it was “better” for students to write by hand or to type, a question that’s been dismissed as irrelevant for a long time. But as someone who is so bad at writing things by hand, I personally resent the implication that people who have good handwriting are somehow “better.” Fortunately, I think Kayla Morehead, John Dunlosky, and Katherine A. Rawson’s replication of that study, “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014),” does an excellent job refuting this “handwriting is better” bullshit.

Then there’s the issue of “distraction” that results when students trying to do things right are disturbed/put off by other students fiddling around with their laptops or cellphones. This is the argument in Faria Sana, Tina Weston, and Nicholas J. Cepeda’s “Laptop multitasking hinders classroom learning for both users and nearby peers.” They outline a clever and complicated methodology that involved arranging students so a laptop was (or wasn’t) in their line of sight and also having some of those students act as “confederates” in the study by purposefully doing stuff that is distracting. One issue I have with this research is that it is a little dated, having been published in 2013. Maybe it’s just me, but I think laptops in classes were a little more novel (and thus distracting) a few years ago than they are now. Regardless, though, one of the concluding points these folks make is that laptops shouldn’t be banned because the benefits outweigh the problems.

There are a lot of studies focusing on the multitasking and divided attention issues: that is, devices and the things students look at on those devices distract them from the class, which again typically means paying attention to the lecture. I find the subtly different degrees of multitasking kind of interesting, and there is a long history in psychology of research about attention, distraction, and multitasking. For example, Arnold L. Glass and Mengxue Kang in “Dividing attention in the classroom reduces exam performance” argue (among other things) that there’s a kind of delayed effect with students multitasking/dividing attention in a lecture hall setting. Students seem to be able to comprehend a lecture or whatever in the midst of their multitasking, but they don’t perform as well on tests at the end of the semester. 

Interestingly– and I have a feeling this is more because of what I haven’t read/studied yet– most of these studies I’ve seen on the multitasking/dividing attention angle don’t separate tasks like email or texting from social media apps. That’s something I want to read about/study more because it seems to me that there is a qualitative difference in how applications like Facebook and Twitter distract since these platforms are specifically designed to grab attention from other tasks.

And then there’s the category of research I wasn’t even aware was happening, and I guess I’d describe that as the different perceptions/attitudes about classroom technology. This is mostly based on surveys and interviews, and (maybe not surprisingly) students tend to believe the use of devices is no big deal and/or “a matter of personal autonomy,” while instructors have a more complex view. Interestingly, the recommendation a lot of these studies make is that students and teachers ought to talk about this as a way of addressing the problem.

So, that’s what I “know” so far. Where I’m going next, I think:

  • I think the first tangible (not just reading) research part of this project is going to be to design a survey of both students and instructors– probably just for first year writing, but maybe beyond that– about their attitudes on using these devices. If I dig a bit, I might be able to use some of the same questions that come up in the research I’ve read.
  • We’ll see what kind of feedback/participation I get from those surveys, but my hope is also to use a survey as a way of recruiting some instructors to participate in something a little more case study/observational in the winter term, maybe even trying to replicate some of the “experimental” research on note taking in a small class setting. That would happen in Winter 2020.
  • I need to keep reading, especially about the ways in which social media specifically functions here. It’s one thing for a student (or really anyone) to be bored in a badly run lecture hall and thus allowing themselves to drift into checking their messages, email, working on homework for other classes, checking sports, etc. I think it’s a different thing for a student/any user to feel the need to check Facebook or Twitter or Instagram or whatever.
  • I can see a need to dive more deeply into thinking/writing about the ways in which this research circulates in the mainstream media (MSM) and then back into the classroom. As I wrote in my proposal and back in June, I think there are a lot of studies– done with lecture hall students in very specific experimental settings– that get badly translated into MSM articles about why people should put their laptops and cell phones away in classrooms or meetings. Those MSM articles get read by well-meaning faculty who then apply the MSM’s misunderstanding of the original study as a justification for banning devices even though the original research doesn’t support that. Oh, and perhaps not surprisingly, the vast majority of the MSM pieces I’ve seen on tech bans basically reinforce the very worn theme of “the problem with the kids today.”
  • I also wonder about this attitude difference, and maybe students have a point: maybe these technologies are a matter of personal autonomy and personal choice. This was an idea put into my head while chatting about all this with Derek Mueller over not very good Chinese food this summer, and I still haven’t thought it through yet, but if students have a right to their own language use in writing classrooms, do they also have a right to their own technology use? When and when not?
  • And even though this is kind of where I began this project (so I guess I’m once again showing my bias here), a lot of the solution to the problems that motivate faculty to ban laptops and devices from their classrooms in the first place really comes back to better pedagogy. Teaching students how to take notes with a laptop immediately comes to mind. I’m also reading (slowly but surely) James M. Lang’s Small Teaching: Everyday Lessons From the Science of Teaching right now, and there’s a clear connection between his advice and this project too. So many of the complaints about students being distracted by their devices really come back to bad teaching.

Classroom Tech Bans Are Bullshit (or are they?): My next/current project

I was away from work stuff this past May– too busy with Will’s graduation from U of M followed quickly by China, plus I’m not teaching or involved in any quasi-administrative work this summer. As I have written about before,  I am no longer apologetic for taking the summer off, so mostly that’s what I’ve been doing. But now I need to get back to “the work–” at least a leisurely summer schedule of “the work.”

Along with waiting for the next step in the MOOC book (proofreading and indexing, for example), I’m also getting started on a new project. The proposal I submitted for funding (I have a “faculty research fellowship” for the fall term, which means I’m not teaching though I’m still supposed to do service and go to meetings and such) is officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies.” Unofficially, it’s called “Classroom Tech Bans are Bullshit.” 

To paraphrase: there have been a lot of studies (mostly in Education and/or Psychology) on the student use of mobile devices in learning settings (mostly lecture halls– more on that in a moment). Broadly speaking, most of these studies have concluded these technologies are bad because students take worse notes than they would with just paper and pen, and these tools make it difficult for students to pay attention.  Many of these studies have been picked up in mainstream media articles, and the conclusions of these studies are inevitably simplified with headlines like “Students are Better Off Without a Laptop In the Classroom.”

I think there are a couple of different problems with this– beyond the fact that MSM misinterprets academic studies all the time. First, these simplifications trickle back into academia when those faculty who do not want these devices in their classrooms use these articles to support laptop/mobile device bans. Second, the methodologies and assumptions behind these studies are very different from the methodologies and assumptions in writing studies. We tend to study writing– particularly pedagogy– with observational, non-experimental, and mixed-method research designs, things like case studies, ethnographies, interviews, observations, etc., and also with text-based work that actually looks at what a writer did.

Now, I think it’s fair to say that those of us in Composition and Rhetoric generally and in the “subfield/specialization” of Computers and Writing (or Digital Humanities, or whatever we’re calling this nowadays) think tech bans are bad pedagogy. At the same time, I’m not aware of any scholarship that directly challenges the premise of the Education/Psychology scholarship calling for bans or restrictions on laptops and mobile devices in classrooms. There is scholarship that’s more descriptive about how students use technologies in their writing process, though not necessarily in classrooms– I’m thinking of the essay by Jessie Moore and a ton of other people called “Revisualizing Composition” and the chapter by Brian McNely and Christa Teston, “Tactical and Strategic: Qualitative approaches to the digital humanities” (in Bill Hart-Davidson and Jim Ridolfo’s collection Rhetoric and the Digital Humanities). But I’m not aware of any study that researches why it is better (or worse) for students to use things like laptops and cell phones while actually in the midst of a writing class.

So, my proposal is to spend this fall (or so) developing a study that would attempt to do this– not exactly a replication of one or more of the experimentally-driven studies done about devices and their impact on note taking, retention, and distraction, but a study that is designed to examine similar questions in writing courses using methodologies more appropriate for studying writing. For this summer and fall, my plan is to read up on the studies that have been done so far (particularly in Education and Psych), use those to design a study that’s more qualitative and observational, and recruit subjects and deal with the IRB paperwork. I’ll begin some version of a study in earnest beginning in the winter term, January 2020.

I have no idea how this is going to work out.

For one thing, I feel like I have a lot of reading to do. I think I’m right about the lack of good scholarship within the computers and writing world about this, but maybe not. In fact, as I typed that sentence, I recalled a distant memory of a book Mike Palmquist, Kate Kiefer, Jake Hartvigsen, and Barbara Godlew wrote called Transitions: Teaching Writing in Computer-Supported and Traditional Classrooms. It’s been a long time since I read that (it was written in 1998), but I recall it as being a comparison between writing classes taught in a computer lab and not. Beyond reading in my own field of course, I am slowly making my way through these studies in Education and Psych, which present their own kinds of problems. For example, my math ignorance means I have to slip into “I’m just going to have to trust you on that one” mode in the discussions about statistical significance.
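For what it’s worth, the “statistical significance” claims in these note-taking studies mostly boil down to a comparison like the following minimal sketch– an independent-samples (Welch’s) t-test on two groups’ quiz scores. The numbers here are entirely made up for illustration; they don’t come from any of the studies mentioned above.

```python
# A rough sketch of what the note-taking studies' significance tests do:
# compare mean quiz scores of two (hypothetical, made-up) groups and ask
# whether the difference is large relative to the noise within each group.
from statistics import mean, stdev
from math import sqrt

laptop = [72, 68, 75, 70, 66, 74, 69, 71]      # hypothetical scores
longhand = [78, 74, 80, 76, 73, 79, 75, 77]    # hypothetical scores

# Welch's t statistic: difference in means divided by the standard error
n1, n2 = len(laptop), len(longhand)
se = sqrt(stdev(laptop) ** 2 / n1 + stdev(longhand) ** 2 / n2)
t = (mean(longhand) - mean(laptop)) / se

# A large |t| (roughly, bigger than about 2 for samples like these) is what
# gets reported as a "statistically significant" difference between groups.
print(round(t, 2))
```

The point of the sketch is just that “significant” means the gap between group averages is big compared to the spread within groups– it says nothing by itself about whether the lecture-hall setting or the measure (a post-test) tells us anything about writing classrooms.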

One article I came across and read (thanks to this post from the Tattooed Prof, Kevin Gannon) was “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014).” As the title suggests, this study by Kayla Morehead, John Dunlosky, and Katherine A. Rawson replicates a 2014 study by Pam Mueller and Daniel Oppenheimer, “The Pen is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking” (which is kind of the “gold standard” in the ban-laptops genre). The gist of these two articles is all in the titles: Mueller and Oppenheimer’s conclusions were that it was much better to take notes by hand, while Morehead, Dunlosky, and Rawson’s conclusions were not so much. Interestingly enough, the more recent study also questioned the premise of the value of note taking generally, since one of their control groups didn’t take notes and did about as well on the post-test of the study.

Reading these two studies has been a quite useful way for me to start this work. Maybe I should have already known this, but there are actually two fundamentally different issues at stake with these classroom tech bans (setting aside assumptions about the lecture hall format and the value of taking notes as a way of learning). Mueller and Oppenheimer claimed with their study that handwriting was simply “better.” That’s a claim that I have always thought was complete and utter bullshit, and it’s one that I think was debunked a long time ago. Way back in the 1990s when I first got into this work, there were serious people in English and in writing studies pondering what was “better,” a writing class equipped with computers or not, students writing by hand or on computers. We don’t ask that question anymore because it doesn’t really matter which is “better;” writers use computers to write and that’s that. Happily, I think Morehead, Dunlosky, and Rawson counter Mueller and Oppenheimer’s study rather persuasively. It’s worth noting that so far, MSM hasn’t quite gotten the word out on this.

But the other major argument for classroom tech bans– which neither of these studies addresses– is about distraction, and that’s where the “or are they?” part of my post title comes from. I still have a lot more reading to do on this (see above!), but it’s clear to me that the distraction issue deserves more attention since social media applications are specifically designed to distract and demand attention from their users. They’re like slot machines, and it’s clear that “the kids today” are not the only ones easily taken in. When I sit in the back of the room during a faculty meeting and I glance at the screens of my colleagues’ laptops in front of me, it’s pretty typical to see Facebook or Twitter or Instagram open, along with a window for checking email, grading papers– or, on rare occasion, taking notes.

Anyway, it’s a start. And if you’ve read this far and you’ve got any ideas on more research/reading or how to design a study into this, feel free to comment or email or what-have-you.