Computers and Writing 2023: Some Miscellaneous Thoughts

Last week, I attended and presented at the 2023 Computers and Writing Conference at the University of California-Davis. Here’s a link to my talk, “What Does ‘Teaching Online’ Even Mean Anymore?” Some thoughts as they occur to me/as I look at my notes:

  • The first academic conference I ever attended and presented at was Computers and Writing almost 30 years ago, in 1994. Old-timers may recall that this was the 10th C&W conference, it was held at the University of Missouri, and it was hosted by Eric Crump. I just did a search and came across this article/review written by the late Michael “Mick” Doherty about the event. All of which is to say I am old.
  • This was the first academic conference I attended in person since Covid; I think that was the case for a lot of attendees.
  • Also worth noting right off the top here: I have had a bad attitude about academic conferences for about 10 years now, and my attitude has only gotten worse. And look, I know, it’s not you, it’s me. My problem with these things is they are getting more and more expensive, most of the people I used to hang out with at conferences have stopped going themselves for whatever reason, and for me, the overall “return on investment” now is pretty low. I mean, when I was a grad student and then a just-starting-out assistant professor, conferences were extremely important to me. They furthered my education in both subtle and obvious ways, they connected me to lots of other people in the field, and they gave me the chance to do scholarship that I could also list on my CV. I used to get a lot out of these events. Now? Well, after (almost) 30 years, things start to sound a little repetitive and the value of yet another conference presentation on my CV is almost zero, especially since I am at a point where I can envision retirement (albeit 10-15 years from now). Like I said, it’s not you, it’s me, but I also know there are plenty of people in my cohort who recognize and perhaps even share a similarly bad attitude.
  • So, why did I go? Well, a big part of it was because I hadn’t been to any conference in about four years– easily the longest stretch of not going in almost 30 years. Also, I had assumed I would be talking in more detail about the interviews I conducted about faculty teaching experiences during Covid, and also about the next phases of research I would be working on during a research release or a sabbatical in 2024. Well, that didn’t work out, as I wrote about here, which inevitably changed my talk into a “big picture” summary of my findings and an explanation of why I was done.
  • This conference has never been that big, and this year, it was a more “intimate” affair. If a more normal or “robustly” attended C&W gets about 400-500 people (and I honestly don’t know what the average attendance has been at this thing), then I’d guess there were about 200-250 folks there. I saw a lot of the “usual suspects” of course, and also met some new people too.
  • The organizers– Carl Whithaus, Kory Lawson Ching, and some other great people at UC-Davis– put a big emphasis on trying to make the hybrid delivery of panels work. So there were completely on-site panels, completely online (but on the schedule) panels held over Zoom, and hybrid panels with a mix of participants on-site and online. There was also a small group of completely asynchronous panels. Now, this arrangement wasn’t perfect, both because of the inevitable technical glitches and because there’s no getting around the fact that Zoom interactions are simply not equal to robust face to face interactions, especially for an event like a conference. This was a topic of discussion in the opening town hall meeting, actually.
  • That said, I think it all worked reasonably well. I went to two panels where one presenter participated via Zoom (John Gallagher in both presentations, actually) and that went off without (much of a) hitch, and I also attended at least part of a session where all the presenters were on Zoom– and a lot of the audience was on-site.
  • Oh, and speaking of the technology: they used a content management system specifically designed for conferences called Whova that worked pretty well. It’s really for business/professional kinds of conferences, so there were some slight disconnects, and I was told by one of the organizers that they found out (after they had committed to using it!) that unlimited storage capacity would have been much more expensive. So they did what C&W folks do well: they improvised, and set up Google Drive folders for every session.
  • My presentation matched up well with those of my co-presenters, Rich Rice and Jenny Sheppard, in that we were all talking about different aspects of online teaching during Covid– and with no planning on our parts at all! Actually, all the presentations I saw– and I went to more than usual: both keynotes, one and a half town halls, and four and a half panels– were really quite good.
  • Needless to say, there was a lot of AI and ChatGPT discussion at this thing, even though the overall theme was hybrid practices. That’s okay– I am pretty sure that AI is just going to become a bigger issue in the larger field and academia as a whole in the next couple of years, and it might stay that way for the rest of my career. Most of what people talked about was essentially more detailed versions of stuff I already (sort of) knew about, and that was reassuring to me. There were a lot of folks who seemed mighty worried about AI, both in the sense of students using it to cheat and also the larger implications of it for society as a whole. Some of the big picture/ethical concerns may have been amplified here because there were a lot of relatively local participants, of course, and Silicon Valley and the Bay Area are more or less at “ground zero” for all things AI. I don’t disagree that AI has larger social and ethical implications, but these are also things that seem completely out of all of our control in so many different ways.
  • For example, in the second town hall about AI (I arrived late to that one, unfortunately), someone in the audience had one of those impassioned “speech/questions” about how “we” needed to come up with a statement on the problems/dangers/ethical issues about AI. Well, I don’t think there’s a lot of consensus in the field about what we should do about AI at this point. But more importantly and as Wendi Sierra pointed out (she was on the panel, and she is also going to be hosting C&W at Texas Christian University in 2024), there is no “we” here. Computers and Writing is not an organization at all and our abilities to persuade are probably limited to our own institutions. Of course, I have always thought that this was one of the main problems with the Computers and Writing Conference and Community: there is no there there.
  • But hey, let me be clear– I thought this conference was great, one of the best versions of C&W I’ve been to, no question about it. It’s a great campus with some interesting quirks, and everything seemed to go off right on schedule and without any glitches at all.
  • Of course, the conference itself was the main reason I went– but it wasn’t the only reason.  I mean, if this had been in, say, Little Rock or Baton Rouge or some other place I would prefer not to visit again or ever, I probably would have sat this out. But I went to C&W when it was at UC-Davis back in 2009 and I had a great time, so going back there seemed like it’d be fun. And it was– though it was a different kind of fun, I suppose. I enjoyed catching up with a lot of folks I’ve known for years at this thing and I also enjoyed meeting some new people too, but it also got to be a little too, um, “much.” I felt a little like an overstimulated toddler after a while. A lot of it is Covid of course, but a lot of it is also what has made me sour on conferences: I don’t have as many good friends at these things anymore– that is, the kind of people I want to hang around with a lot– and I’m also just older. So I embraced opting out of the social events, skipping the banquet or any kind of meet-up with a group at a bar or bowling or whatever, and I played it as a solo vacation. That meant walking around Davis (a lively college town with a lot of similarities to Ann Arbor), eating at the bar at a couple of nice restaurants, and going back to my lovely hotel room and watching things that I know Annette had no interest in watching with me (she did the same back home and at the conference she went to the week before mine). On Sunday, I spent the day as a tourist: I drove through Napa, over to Sonoma Coast Park, and then back down through San Francisco to the airport. It’s not something I would have done on my own without the conference, but like I said, I wouldn’t have gone to the conference if I couldn’t have done something like this on my own for a day.

My Talk About AI at Hope College (or why I still post things on a blog)

I gave a talk at Hope College last week about AI. Here’s a link to my slides, which also has all my notes and links. Right after I got invited to do this in January, I made it clear that I am far from an expert with AI. I’m just someone who had an AI writing assignment last fall (which was mostly based on previous teaching experiments by others), who has done a lot of reading and talking about it on Facebook/Twitter, and who blogged about it in December. So as I promised then, my angle was to stay in my lane and focus on how AI might impact the teaching of writing.

I think the talk went reasonably well. Over the last few months, I’ve watched parts of a couple of different ChatGPT/AI presentations, either via Zoom or as recordings, and my own takeaway from them all has been a mix of “yep, I know that and I agree with you” and “oh, I didn’t know that, that’s cool.” That’s what this felt like to me: I talked about a lot of things that most of the folks attending knew about and agreed with, along with a few things that were new to them. And vice versa: I learned a lot too. It probably would have been a little more contentious had this taken place back when the freakout over ChatGPT was in full force. Maybe there still are some folks there who are freaked out by AI and cheating and who didn’t show up. Instead, most of the people there had played around with the software and realized that it’s not quite the “cheating machine” being overhyped in the media. So it was a good conversation.

But that’s not really what I wanted to write about right now. Rather, I just wanted to point out that this is why I continue to post here, on a blog/this site, which I have maintained now for almost 20 years. Every once in a while, something I post “lands,” so to speak.

So for example: I posted about teaching a writing assignment involving AI at about the same time the mainstream media was freaking out about ChatGPT. Some folks at Hope read that post (which has now been viewed over 3000 times), and they invited me to give this talk. Back in fall 2020, I blogged about how weird I thought it was that all of these people were going to teach online synchronously over Zoom. Someone involved with the Media & Learning Association, which is a European/Belgian organization, read it, invited me to write a short article based on that post, and also invited me to be on a Zoom panel that was part of a conference they were holding. And of course all of this was the beginning of the research and writing I’ve been doing about teaching online during Covid.

Back in April 2020, I wrote a post “No One Should Fail a Class Because of a Fucking Pandemic;” so far, it’s gotten over 10,000 views, it’s been quoted in a variety of places, and it was why I was interviewed by someone at CHE in the fall. (BTW, I think I’m going to write an update to that post, which will be about why it’s time to return to some pre-Covid requirements). I started blogging about MOOCs in 2012, which led to a short article in College Composition and Communication and numerous other articles and presentations, a few invited speaking gigs (including TWO conferences sponsored by the University of Naples on the Isle of Capri), an edited collection, and a book.

Now, most of the people I know in the field who once blogged have stopped (or mostly stopped) for one reason or another. I certainly do not post here nearly as often as I did before the arrival of Facebook and Twitter, and it makes sense for people to move on to other things. I’ve thought about giving it up, and there have been times when I didn’t post anything for months. Even the extremely prolific and smart local blogger Mark Maynard gave it all up, I suspect because of a combination of burn-out, Trump being voted out, and the additional work/responsibility of the excellent restaurant he co-owns/operates, Bellflower.

Plus if you do a search for “academic blogging is bad,” you’ll find all sorts of warnings about the dangers of it– all back in the day, of course. Deborah Brandt seemed to think it was mostly a bad idea (2014); The Guardian suggested it was too risky (2013), especially for grad students posting work in progress. There were lots of warnings like this back then. None of them ever made any sense to me, though I didn’t start blogging until after I was on the tenure-track here. And no one at EMU has ever said anything negative to me about doing this, and that includes administrators even back in the old days of EMUTalk.

Anyway, I guess I’m just reflecting/musing now about why this very old-timey practice from the olde days of the Intertubes still matters, at least to me. About 95% of the posts I’ve written are barely read or noticed at all, and that’s fine. But every once in a while, I’ll post something, promote it a bit on social media, and it catches on. And then sometimes, a post becomes something else– an invited talk, a conference presentation, an article. So yeah, it’s still worth it.

The Problem is Not the AI

The other day, I heard the opening of this episode of the NPR call-in show 1A, “Know It All: ChatGPT In the Classroom.” It opened with this recorded comment from a listener named Kate:

“I teach freshman English at a local university, and three of my students turned in chatbot papers written this past week. I spent my entire weekend trying to confirm they were chatbot written, then trying to figure out how to confront them, to turn them in as plagiarist, because that is what they are, and how I’m going penalize their grade. This is not pleasant, and this is not a good temptation. These young men’s academic careers now hang in the balance because now they’ve been caught cheating.”

Now, I didn’t listen to the show for long beyond this opener (I was driving around running errands), and based on what’s available on the website, the discussion also included information about incorporating ChatGPT into teaching. Also, I don’t want to be too hard on poor Kate; she’s obviously really flustered, and I am guessing there were a lot of teachers listening to Kate’s story who could very personally relate.

But look, the problem is not the AI.

Perhaps Kate was teaching a literature class and not a composition and rhetoric class, but let’s assume whatever “freshman English” class she was teaching involved a lot of writing assignments. As I mentioned in the last post I wrote about AI and teaching with GPT-3 back in December, there is a difference between teaching writing and assigning writing. This is especially important in classes where the goal is to help students become better at the kind of writing skills they’ll need in other classes and “in life” in general.

Teaching writing means a series of assignments that build on each other, that involve brainstorming and prewriting activities, and that involve activities like peer reviews, discussions of revision, reflection from students on the process, and so forth. I require students in my first year comp/rhet classes to “show their work” through drafts, in a way that is similar to how they’d be expected to show their work in an Algebra or Calculus course. It’s not just the final answer that counts. In contrast, assigning writing is when teachers give an assignment (often a quite formulaic one, like write a 5 paragraph essay about ‘x’) with no opportunities to talk about getting started, no consideration of audience or purpose, no interaction with the other students who are trying to do the same assignment, and no opportunity to revise or reflect.

While obviously more time-consuming and labor-intensive, teaching writing has two enormous advantages over only assigning writing. First, we know it “works” in that this approach improves student writing– or at least we know it works better than only assigning writing and hoping for the best. We know this because people in my field have been studying this for decades, despite the fact that there are still a lot of people just assigning writing, like Kate. Second, teaching writing makes it extremely difficult to cheat in the way Kate’s students have cheated– or maybe cheated. When I talk to my students about cheating and plagiarism, I always ask “why do you think I don’t worry much about you doing that in this class?” Their answer typically is “because we have to turn in all this other stuff too” and “because it would be too much work,” though I also like to believe that because of the way the assignments are structured, students become interested in their own writing in a way that makes cheating seem silly.

Let me just note that what I’m describing has been the conventional wisdom among specialists in composition and rhetoric for at least the last 30 (and probably more like 50) years. None of this is even remotely controversial in the field, nor is any of this “new.”

But back to Kate: certain that these three students turned in “chatbot papers,” she spent the “entire weekend” working to prove these students committed the crime of plagiarism and that they deserve to be punished. She thinks this is a remarkably serious offense– their “academic careers now hang in the balance”– but I don’t think she’s going through all this because of some sort of abstract and academic ideal. No, this is personal. In her mind, these students did this to her and she’s going to punish them. This is beyond a sense of justice. She’s doing this to get even.

I get that feeling, that sense that her students betrayed her. But there’s no point in making teaching about “getting even” or “winning” because as the teacher, you create the game and the rules, you are the best player and the referee, and you always win. Getting even with students is like getting even with a toddler.

Anyway, let’s just assume for a moment that Kate’s suspicions are correct and these three students handed in essays created entirely by ChatGPT. First off, anyone who teaches classes like “Freshman English” should not need an entire weekend or any special software to figure out if these essays were written by an AI. Human writers– at all levels, but especially comparatively inexperienced human writers– do not compose the kind of uniform, grammatically correct, and robotically plodding prose generated by ChatGPT. Every time I see an article with a passage of text that asks “was this written by a robot or a student,” I always guess right– well, almost always I guess right.

Second, if Kate did spend her weekend trying to find “the original” source ChatGPT used to create these essays, she certainly came up empty-handed. That was the old-school way of catching plagiarism cheats: you look for the original source the student plagiarized and confront the student with it, courtroom-drama style. But ChatGPT (and other AI tools) do not “copy” from other sources; rather, the AI creates original text every time. That’s why there have been several different articles crediting an AI as a “co-author.”

Instead of wasting a weekend, what Kate should have done is call each of these students into her office or take them aside one by one in a conference and ask them about their essays. If the students cheated, they would not be able to answer basic questions about what they handed in, and 99 times out of 100, the confronted cheating student will confess.

Because here’s the thing: despite all the alarm out there that all students are cheating constantly, my experience has been that the vast majority do not cheat like this, and they don’t want to cheat like this. Oh sure, students will sometimes “cut corners” by looking over at someone else’s answers on an exam, or maybe by adding a paragraph or two from something without citing it. But in my experience, the kind of over-the-top cheating Kate is worried about is extremely rare. Most students want to do the right thing by doing the work, trying to learn something, and trying their best– plus students don’t want to get in trouble for cheating either.

Further, the kinds of students who do try to blatantly plagiarize are not “criminal masterminds.” Far from it. Rather, students blatantly plagiarize when they are failing and desperate, and they are certainly not thinking of their “academic careers.” (And as a tangent: seems to me Kate might be overestimating the importance of her “Freshman English” class a smidge).

But here’s the other issue: what if Kate actually talked to these students, and what if it turned out they either did not realize using ChatGPT was cheating, or they used ChatGPT in a way that wasn’t significantly different from getting some help from the writing center or a friend? What do you do then? Because– and again, I wrote about this in December— when I asked students to use GPT-3 (OpenAI’s software before ChatGPT) to write an essay and then reflect on that process, a lot of them described the software as a brainstorming tool, sort of like a “coach,” and not a lot different from getting help from others in peer review or from a visit to the writing center.

So like I said, I don’t want to be too hard on Kate. I know that there are a lot of teachers who are similarly freaked out about students using AI to cheat, and I’m not trying to suggest that there is nothing to worry about either. I think a lot of what is being predicted as the “next big thing” with AI is either a lot further off in the future than we might think, or it is in the same category as other famous “just around the corner” technologies like flying cars. But no question that this technology is going to continue to improve, and there’s also no question that it’s not going away. So for the Kates out there: instead of spending your weekend on the impossible task of proving that those students cheated, why not spend a little of that time playing around with ChatGPT and seeing what you find out?

AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is that my students and I have spent the last month or so noodling around with the software and reading about both the potentials and dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and I thought it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And my students and I still had to edit the different pieces together, write transitions between AI-generated chunks of text, and so forth.
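Just to make that workflow concrete, here is a minimal sketch (in Python) of the kind of thing we were doing by hand in the web playground: several small prompts, each producing a chunk of text, with the human writer still responsible for editing the chunks together and writing the transitions. The model name, prompts, and helper function here are my own illustrative assumptions, written against the older (pre-v1) openai Python package; this is not a record of our actual prompts or setup.

```python
# A minimal sketch (not our actual assignment): generating an essay in chunks
# with several prompts and then stitching the pieces together by hand.
# Assumes the pre-v1 "openai" Python package and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_gpt3(prompt: str) -> str:
    """Send one carefully worded prompt and return a chunk of generated text."""
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=prompt,
        max_tokens=300,            # roughly a paragraph or two at a time
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# Several different prompts, each aimed at one piece of the essay;
# in practice, each of these took multiple tries to get right.
prompts = [
    "Write an introductory paragraph about how AI text generators might change college writing.",
    "Write two paragraphs describing the limitations of AI text generators for research writing.",
    "Write a concluding paragraph reflecting on what AI text generators mean for students.",
]

chunks = [ask_gpt3(p) for p in prompts]

# The human writer still has to edit the chunks and write the transitions.
draft = "\n\n[TRANSITION NEEDED HERE]\n\n".join(chunks)
print(draft)
```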

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool. These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true of Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”

I think this limitation (along with the limitation that GPT-3 and ChatGPT are not capable of searching the internet) makes using GPT-3 as a plagiarism tool in any kind of research writing class kind of a deal-breaker. It certainly would not get students far in most sections of freshman comp where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts by making writing assignments that are specific and that require an investment in things like prewriting and a series of assignments and activities that are “scaffolding” for a larger writing assignment. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Most poorly designed assigned-writing tasks make good examples of prompts to enter into GPT-3. The results are definitely impressive, but I don’t think they are quite good enough to produce work a would-be cheater could pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000 word requirement, they both kind of read like… well, like a robot wrote them. I think that most instructors who received this essay from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well-researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and I then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing, and only a few words were changed. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.
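For what it’s worth, here is a similarly hedged sketch of that revision experiment, again written against the older (pre-v1) openai Python package rather than the ChatGPT web interface I actually used; the sample paragraph is invented for illustration, not the actual student writing I tested with.

```python
# A minimal sketch of the revision experiment described above; the sample
# paragraph below is invented, and the model choice is an assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Invented sample of "basic" writing, standing in for the student paragraph.
student_paragraph = (
    "me and my group was working on the project but it dont have no sources yet "
    "because we was still deciding what the topic are."
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model choice
    prompt="Revise the following paragraph so it is grammatically correct:\n\n"
           + student_paragraph,
    max_tokens=200,
    temperature=0.2,  # low temperature: we want correction, not invention
)

# Expect grammatically correct (if not terribly interesting) sentences;
# the ideas themselves are left mostly untouched.
print(response["choices"][0]["text"].strip())
```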

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much, much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have only spent so much time experimenting with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” payment plan, I might have learned more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

Online Teaching and ‘The New Normal’: A Survey of Faculty in the Midst of an Unprecedented ‘Natural Experiment’ (or, my presentation for CWCON2022)

This blog entry/page is my online/on demand presentation for the 2022 Computers and Writing Conference at East Carolina University.

I’m disappointed that I’m not at this year’s Computers and Writing Conference in person. I haven’t been to C&W since 2018 and of course there was no conference in 2020 or 2021. So after the CCCCs prematurely pulled the plug on the face to face conference a few months ago, I was looking forward to the road trip to Greenville. Alas, my own schedule conflicts and life mean that I’ll have to participate in the online/on-demand format this time around. I don’t know if that means anyone (other than me) will actually read this, so as much as anything else, this presentation/blog post– which is too long, full of not completely substantiated/documented claims, speculative, fuzzy, and so forth– is a bit of note taking and freewriting meant mostly for myself as I think about how to present this research in future articles, maybe even a book. If a few conference goers and my own blog readers find this interesting, all the better.

Because of the nature of these on-demand/online presentations generally and also because of the perhaps too long/freewriting feel of what I’m getting at here, let me start with a few “too long, didn’t read” bullet points. I’m not even going to write anything else here to explain this, but it might help you decide if it’s worth continuing to read. (Hopefully it is…)

The research I’m continuing is a project I have been calling “Online Teaching and ‘The New Normal,’” which I started in early fall 2020. Back then, I wrote a brief article and was an invited speaker at an online conference held by a group in Belgium– this after someone there saw a post I had written about Zoom on my blog, which is one of the reasons why I keep blogging after all these years. I gave a presentation (that got shuffled away into the “on demand” format) at the most recent CCCCs where I introduced some of my broad assumptions about teaching online, especially about the affordances of asynchronous versus synchronous formats, and where I offered a few highlights of the survey results. I am also writing an article-slash-website for Computers and Composition Online which goes into much more detail about the results of the survey. That piece is in progress, though it will be available soon. If you have the time and/or interest, I’d encourage you to check out the links to those pieces as well.

I started this project in early fall 2020 for two reasons. First, there was the “natural experiment” created by Covid. Numerous studies have claimed online courses can be just as effective as face to face courses, but one of the main criticisms of these studies is the problem of self-selection: that is, because students and teachers engage in the format voluntarily, it’s not possible to have subjects randomly assigned to either a face to face course or an online course, and that kind of randomized study is the gold standard in the social sciences. The natural experiment of Covid enabled a version of that study because millions of college students and instructors had no choice but to take and teach their classes online.

Second, I was surprised by the large number of my colleagues around the country who said on social media and other platforms that they were going to teach their online classes synchronously via a platform like Zoom rather than asynchronously. I thought this choice– made by at least 60% of college faculty across the board during the 2020-21 school year– was weird. 

Based both on my own experiences teaching some of my classes online since 2005 and the modest amount of research comparing synchronous and asynchronous modes for online courses, I think that asynchronous online courses are probably more effective than synchronous online courses. But that’s kind of beside the point, actually. The main reason why at least 90% of online courses prior to Covid were taught asynchronously is scheduling and the imperative of providing access. Prior to Covid, the primary audience for online courses and programs was non-traditional students. Ever since the days of correspondence courses, the goal of distance ed has been to help “distanced” students– that is, people who live far away from the brick and mortar campus– but also people who are past the traditional undergraduate age, who have “adult” obligations like mortgages and dependents and careers, and people who are returning to college either to finish the degree they started some years before, or to retool and retrain after having finished a degree earlier. Asynchronous online courses are much easier to fit into busy and changing life-slash-work schedules than synchronous courses– either online ones or f2f. Sure, traditional and on-campus students often take asynchronous courses for similar scheduling reasons, but again and prior to Covid, non-traditional students were the primary audience for online courses. In fact, most institutions that primarily serve traditional students– that is, 18-22 year olds right out of high school who live on or near campus and who attend college full-time (and perhaps work part-time to pay some of the bills)– did not offer many online courses, nor was there much of a demand for online courses from students at these institutions. I’ll come back to this point later.

I conducted my IRB approved survey from December 2020 to June 2021. The survey was limited to college level instructors in the U.S. who taught at least one class completely online (that is, not in some hybrid format that included f2f instruction) during the 2020-21 school year. Using a very crude snowball sampling method, I distributed the survey via social media and urged participants to share the survey with others. I had 104 participants complete this survey, and while I was hoping to recruit participants from a wide variety of disciplines, most were from a discipline related to English studies. This survey was also my tool for recruiting interview subjects: the last question of the survey asked if participants would be interested in a follow-up interview, and 75 indicated that they would be.

One of the findings from the survey that I discussed in my CCCCs talk was that those survey participants who had no previous experience teaching online were over three times more likely to have elected to teach their online classes synchronously during Covid than those who had previous online teaching experience. As this pie chart shows, almost two-thirds of faculty with no prior experience teaching online elected to teach synchronously, and only about 12% elected to teach asynchronously.

In contrast, about a third of faculty who had previous online experience elected to teach online asynchronously and fewer than 18% decided to teach online synchronously. Interestingly, the amount of previous experience with teaching online didn’t seem to make much difference– that is, those who said that prior to Covid they had taught over 20 sections online were about as likely to have taught asynchronously or to use both synchronous and asynchronous approaches as those who had only taught 1 to 5 sections online prior to the 2020-21 school year.

For the forthcoming Computers and Composition Online article, I go into more detail about the results of the survey along with incorporating some of the initial impressions and feedback I’ve received from the surveys to date. 

But for the rest of this presentation, I’ll focus on the interviews I have been conducting. I started interviewing participants in January 2022, and these interviews are still in progress. Since this is the kind of conference where people do often care about the technical details: I’m using Zoom to record the interviews and then software called Otter.ai to create a transcription. Otter.ai isn’t free– in fact, at $13 a month for the month-to-month, unlimited-usage plan, it isn’t especially cheap– and there are of course other options for doing this. But this is the best and easiest approach I’ve found so far. Most of the interviews I’ve conducted so far run between 45 and 90 minutes, and what’s amazing is that Otter.ai can change the Zoom audio file into a transcript that’s about 85% correct in less than 15 minutes. Again, nerdy and technical details, but for changing audio recordings into mostly correct transcripts, I cannot say enough good things about it.

To date, I’ve conducted 24 interviews, and I am guessing that I will be able to conduct between 15 and 30 more, depending on how many of the folks who originally volunteered to be interviewed are still willing.

This means I already have about 240,000 words of transcripts, and I have to say I am at something of a loss as to what to “do” with all of this text in terms of coding, analysis, and the like. The sorts of advice and processes offered by books like Geisler’s and Swarts’ Coding Streams of Language and Saldaña’s The Coding Manual for Qualitative Researchers seem more fitting for analyzing sets of texts in different genres– say an archive for an organization that consists of a mix of memos, emails, newsletters, academic essays, reports, etc.– or a collection of ethnographic observations. So for me, this doesn’t feel so much like collecting a lot of qualitative data meant to be coded and analyzed based on particular word choices or sentence structures or what-have-you; it feels more like good old-fashioned journalism. If I had been at this conference in person or if there were a more interactive component to this online presentation, this is something I would have wanted to talk more about with the kind of scholars and colleagues involved with computers and writing, because I can certainly use some thoughts on how to handle my growing collection of interviews. In any event, my current focus– probably through the end of this summer– is to keep collecting the interviews from willing participants and to figure out what to do with all of this transcript data later. Perhaps that’s what I can talk about at the computers and writing conference at UC Davis next year.

But just to give a glimpse of what I’ve found so far, I thought I’d focus on answers to two of the dozen or so questions I have been sure to ask each interviewee:

  • Why did you decide to teach synchronously (or asynchronously)?
  • Knowing what you know now and after your experience teaching online during the 2020-21 school year, would you teach online again– voluntarily– and would you prefer to do it synchronously or asynchronously?

In my survey, participants had to answer a closed-ended question to indicate if they were teaching online synchronously, asynchronously, or some classes synchronously and some asynchronously. There was no “other” option for supplying a different answer. This essentially divided survey participants into two groups because I counted those who were teaching in both formats as synchronous for the other questions on the survey. Also, I excluded from the survey faculty who were teaching with a mix of online and face to face modes because I wanted to keep this as simple as possible. But early on, the interviews made it clear that the mix of modes at most universities was far more complex. One interviewee said that prior to Covid, the choices faculty had for teaching (and the choices students saw in the catalog) were simply online or on campus. Beginning in Fall 2020 though, faculty could choose “fully online asynchronous, fully online synchronous, high flex synchronous, so (the instructor) is stand in the classroom and everyone else is in WebEx of Teams, and fully in the classroom and no option… you need to be in the classroom.”

I was also surprised at the extent to which most of my interviewees reported that their institution provided faculty a great deal of autonomy in selecting the teaching mode that worked best for their circumstances. So far, I have only interviewed two or three people who said they had no choice but to teach in the mode assigned by the institution. A number of folks said that their institution strongly encouraged faculty to teach synchronously to replicate the f2f experience, but even under those circumstances, it seems most faculty had a fair amount of flexibility to teach in a mode that best fit into the rest of their lives. As one person, a non-tenure-track but full-time instructor, said, “basically, the university said ‘we don’t care that much, especially if you’re… a parent and your kids aren’t going to school and you have to physically be home.’” This person’s impression was that while most of their colleagues were teaching synchronous courses with Zoom, there were “a lot of individual class sessions that were moved asynchronous, and maybe even a few classes that essentially went asynchronous.”

A number of interviewees mentioned that this level of flexibility offered to faculty by their institutions was unusual; one interviewee described the flexibility offered to faculty about their preferred teaching mode as a “rare win” against the administration. After all, during the summer of 2020, when a lot of the plans for going forward with the next school year were up in the air, there were a lot of rumors at my institution (and, judging from Facebook, other institutions as well) that individual faculty who wanted to continue to teach online in Fall of 2020 because of the risks of Covid were going to have to go through a process involving the Americans with Disabilities Act. So the fact that just about everyone I talked to was allowed to teach online and in the mode that they preferred was both surprising and refreshing.

As to why faculty elected to teach in one mode or the other: I think there were basically three reasons. First, as the quote above suggests, many faculty said concerns about how Covid was impacting their own home lives shaped the decision either for teaching synchronously or asynchronously. Though again, most of my survey and interview subjects who hadn’t taught online before taught synchronously, and, not surprisingly, some of those interviewees told stories about how their pets, children, and other family members would become regular visitors in the class Zoom sessions. In any event, the risks and dangers of Covid– especially in Fall 2020 and early in 2021, when the data on the risks of transmission in f2f classrooms was unclear and before there was a vaccine– were of course the reason why so many of us were forced into online classes during the pandemic. And while it did indeed create a natural experiment for testing the effectiveness of online courses, I wonder if Covid ended up being such an enormous factor in all of our lives that it essentially skewed or trumped the experiences of teaching online. After all, it is kind of hard for teachers and students alike to reflect too carefully on the advantages and disadvantages of online learning when the issue dominating their lives was a virus that was sickening and killing millions of people and disrupting pretty much every aspect of modern society as we know it.

Second– and perhaps this is just obvious– people did what they already knew how to do, or they did what they thought would be the path of least resistance. Most faculty who decided to teach asynchronously had previous experience teaching asynchronously– or they were already teaching online asynchronously. As one interviewee put it, “Spring 2020, I taught all three classes online. And then COVID showed up, and I was already set up for that because, I was like ‘okay, I’m already teaching online,’ and I’m already teaching asynchronously, so…” That was the situation I was in when we first went into Covid lockdown in March 2020– though in my experience, that didn’t mean that Covid was a non-factor in those already-online classes.

Most faculty who decided to teach synchronously– particularly those who had not taught online before– thought teaching synchronously via Zoom would require the least amount of work to adjust from the face to face format, though few interviewees said anything quite so direct. I spoke with one Communications professor who, just prior to Covid, was part of the launch of an online graduate program at her institution, so she had already spent some time thinking about and developing online courses. She also had online teaching experience from a previous position, but at her current institution, she said “I saw a lot of senior faculty”– and she was careful in explaining she meant faculty at her new institution who weren’t necessarily a lot older but who had not taught online previously– “try to take the classroom and put it online, and that doesn’t work. Because online is a different medium and it requires different teaching practices and approaches.” She went on to explain that her current institution sees itself as a “residential university” and the online graduate courses were “targeted towards veterans, working adults, that kind of thing.”

I think what this interviewee was implying is that it did not occur to her colleagues who decided to teach synchronously to do it any other way. As a different interviewee put it, inserting a lot of pauses along the way during our discussion, “I opted for the synchronous, just because… I thought it would be more… I don’t know, better suited to my own comfort levels, I suppose.” Though to be fair, this interviewee had previously taught online asynchronously (albeit some time ago), and he said “what I anticipated– wrongly I’ll add– that what doing it synchronously would allow me to do is set boundaries on it.” This is certainly a problem, since teaching asynchronously can easily expand such that it feels like you’re teaching 24/7. There are ways to address those problems, but that’s a different presentation.

Now, a lot of my interviewees altered their teaching modes as the online experience went on. Many– I might even go so far as saying the majority– of those who started out teaching 100% synchronously with Zoom and holding these classes for the same amount of time as they would a f2f version of the same class did make adjustments. A lot of my interviewees, particularly those who teach things like first year writing, shifted activities like peer review into asynchronous modes; others described the adjustments they made to being like a “flipped classroom” where the synchronous time was devoted to specific student questions and problems with the assigned work and the other materials (videos of lectures, readings, and so forth) were all shifted to asynchronous delivery. And for at least one interviewee, the experience of teaching synchronously drove her to all asynch:

“So, my first go around with online teaching was really what we call here remote teaching. It’s what everybody was kind of forced into, and I chose to do synchronous, I guess, because I didn’t, I hadn’t really thought about the differences. I did that for one quarter. And I realized, this is terrible. I, I don’t like this, but I can see the potential for a really awesome online course, so now I only teach asynchronous and I love it.”

The third reason for the choice between synchronous and asynchronous is what I’d describe as “for the students,” though what that meant depended entirely on the type of students the interviewee was already working with. For example, here’s a quote from a faculty member who taught a large lecture class in communications at a regional university that puts a high priority on the residential experience for undergraduates:

“A lot of our students were asking for the synchronous class. I mean… when I look back at my student feedback, people that I literally wouldn’t know if they walked in the room because all I had (from them) was a black (Zoom) screen with their name on it, (these students said) ‘really enjoyed your enthusiasm, it made it easy to get out of bed every morning,’ you know, those kind of things. So I think they were wanting punctuation to just not an endless sea of due dates, but an actual event to go to.”

Of course, the faculty who had already been teaching online and were teaching asynchronously said something similar: that is, they explained that one of the reasons why they kept teaching asynchronously was because they had students all over the world and it was not possible to find a time when everyone could meet synchronously, that the students were working adults who needed the flexibility of the asynchronous format, and so forth. I did have an interviewee– one who was experienced at teaching online asynchronously– comment on the challenge students had in adjusting to what was for them a new format:

“What I found the following semester (that is, fall 2020 and after the emergency remote teaching of spring 2020) was I was getting a lot of students in my class who probably wouldn’t have picked online, or chosen it as a way of learning. This has continued. I’ve found that the students I’m getting now are not as comfortable online as the students I was getting before Covid…. It’s not that they’re not comfortable with technology…. But they’re not comfortable interacting in an online way, maybe especially in an asynchronous way… so I had some struggles with that last year that were really weird. I had the best class I’ve had online, probably ever. And the other (section) was absolutely the worst, but I run them with the same assignments and stuff.”

Let me turn to the second question I wanted to discuss here: “Knowing what you know now and after your experience teaching online during the 2020-21 school year, would you teach online again– voluntarily– and would you prefer to do it synchronously or asynchronously?” It’s an interesting example of how the raw survey results become more nuanced as a result of both parsing those results a bit and conducting these interviews. Taken as a whole, about 58% of all respondents agreed or strongly agreed with the statement “In the future and after the pandemic, I am interested in continuing to teach at least some of my courses online.” My sense– and it is mostly just a sense– is that prior to Covid, a much smaller percentage of faculty would have had any interest in online teaching. But clearly, Covid has changed some minds. As one interviewee said about talking to faculty new to online teaching at her institution:

“A lot of them said, ‘you know,  this isn’t as onerous as I thought, this isn’t as challenging as I thought.’ There is one faculty member who started teaching college in 1975, so she’s been around for a while. And she picked it up and she’s like ‘You know, it took a little time to get used to everything, but I like it. I can do the same things, I can reach students and feel comfortable.’ And in some ways, that’s good because it will prolong some people’s careers. And in some ways, it’s not good because it will prolong some people’s careers. It’s a double-edged sword, right?”

My interviewee who I quoted earlier about making the switch from synchronous to asynchronous was certainly sold. She said that she was nearing a point in her career “where I thought I’m just gonna quit teaching and find another job, I don’t know, go back to trade school and become a plumber.” Now, she is an enthusiastic advocate of online courses at her institution, describing herself as a “convert.”

“I use that word intentionally. I gave a presentation to some graduate students in our teacher training class, I was invited as a guest speaker, and I had big emoji that said ‘hallelujah’ and there were doves, and I’m like this is literally me with online teaching. The scales have fallen from my eyes, I am reborn. I mean, I was raised Catholic so I’m probably relying too much on these religious metaphors, but that’s how it feels. It really feels like a rebirth as an instructor.”

Needless to say, not everyone is quite that enthusiastic about the prospect of teaching online again. This chart, which is part of the article I’m writing for Computers and Composition Online, breaks down the answers to this question based on previous experience. While almost 70% of faculty who had online teaching experience prior to Covid strongly agreed or agreed about teaching online again after the pandemic, only about 40% of faculty with no prior online teaching experience felt the same way. If anything, I think this chart mostly indicates ambivalent feelings about teaching online again among folks who were new to online teaching during Covid: while more positive than negative, it’s worth noting that most faculty who had no prior online teaching experience neither agreed nor disagreed about wanting to teach online in the future.

For example, here are a couple of responses that I think suggest that ambivalence: 

“Um, I would do it again… even though I would imagine a lot of students would say they didn’t have very positive experiences for all different kinds of reasons over the last two years, but now that we have integrated this kind of experience into our lives in a way that, you know, will evolve, but I don’t think it will go away…. I’d have to be motivated (to teach online again), you know, more than just do it for the heck of it. Like if I could just as well teach the class on campus, I still feel like face to face in person conversation is a better modality. I mean, maybe it will evolve and we’ll learn how to do this better.”

And this response:

“The synchronous teaching online is far more exhausting than in person synchronous teaching, and… I don’t think we cover as much material. So my tendency is to say for my current classes, I would be hesitant to teach them online at my institution, because of a whole bunch of different factors. So I would tend to be in the probably not category, if the pandemic was gone. If the pandemic is ongoing, then no, please let’s stay online.”

And finally this passage, which is also closer to where I personally am with a lot of this:

“If people know what they’re getting into, and their expectations are met, then asynchronous or synchronous online instruction, whether delivery or dialogic, it can work, so long as there is a set of shared expectations. And I think that was the hardest thing about the transition: people who did not want to do distance education on both sides, students and instructors.”

That issue of expectations is critical, and I don’t think it’s something many people thought much about during the shift online. Again, this research is ongoing, and I feel like I am still in the note-taking/evidence-gathering phase, but the more I look, the more I think expectations are at the heart of it.

Ten or so years ago, when I would have discussions with colleagues skeptical about teaching online, the main barrier or learning curve for most seemed to be the technology itself. Nowadays, that no longer seems to be the problem. At my institution (and I think this is common at other colleges as well), almost all instructors now use the Learning Management System (for us, that’s Canvas) to manage the bureaucracy of assignments, grades, tests, and collecting and distributing student work. We all use the institution’s websites to turn in grades and to check on employment things like benefits and paychecks. And of course we also all use the library’s websites and databases, not to mention Google. I wouldn’t want to suggest there is no technological learning curve at all, but that doesn’t seem to me to be the main problem faculty had with teaching online during the 2020-21 school year.

Rather, I think the challenges have been more conceptual than that. I think a lot of faculty have a difficult time understanding how it is possible to teach a class where students aren’t all paying attention to them, the teacher, at the same time, but are instead participating in the class at different times and in different places, and not really paying attention to the teacher much at all. I think a lot of faculty– especially those new to online teaching– define the act of teaching as standing in front of a classroom and leading students through some activity or lecturing to them, and of course, this is not how asynchronous courses work at all. So I would agree that the expectations of both students and teachers need to better align with the mode of delivery for online courses to work, particularly asynchronous ones.

The other issue, though, is the assumption about the kind of students we have at different institutions. When I first started this project, the idea of teaching an online class synchronously seemed crazy to me– and I still think asynchronous delivery does a better job of taking advantage of the affordances of the mode– but that was also because of the students I have been working with. Faculty who were used to working almost exclusively with traditional college students tended to put a high emphasis on replicating as best as possible the f2f college experience of classes scheduled and held at a specific time (and many did this at the urging of their institutions and of their students). Faculty like me, who had been teaching online classes designed for nontraditional students for several years before Covid, were actively trying to avoid replicating the f2f experience of synchronous classes. Those rigidly scheduled synchronous courses are one of the barriers most of the students in my online courses are trying to circumvent so they can finish their college degrees while also working to support themselves and often a family. In effect, I think Covid revealed more of a difference between the needs and practices of these different types of students and how we as faculty try to reach them. Synchronous courses delivered via Zoom to traditional students were simply not the same kind of online course experience as the asynchronous courses delivered to nontraditional students.

Well, this has gone on for long enough, and if you actually got to this last slide after reading through all that came before, I thank you. Just to sum up and repeat my “too long, didn’t read” slide: 

I think the claims I make here about why faculty decided to teach synchronously or asynchronously during Covid are going to turn out to be consistent with some of the larger surveys and studies about the remarkable (in both terrible and good ways) 2020-21 school year now appearing in the scholarship. I think the experience most faculty had teaching online convinced many (but not all) of the skeptics that an online course can work as well as a f2f course– but only if the course is designed for the format and only if students and faculty understand the expectations and how the mode of an online class is different from a f2f class. In a sense, I think the “natural experiment” of online teaching during Covid suggests that there is some undeniable self-selection bias in terms of measuring the effectiveness of online delivery compared to f2f delivery. What remains to be seen is how significant that self-selection bias is. Is the bias so significant that online courses for those who do not prefer the mode are demonstrably worse than a similar f2f course experience? Or is the bias more along the lines of my own “bias” against taking or teaching a class at 8 am, or a class that meets on a Friday? I don’t know, but I suspect there will be more research emerging out of this time that attempts to dig into that.

Finally, I think the previous point of resistance to teaching online– the complexities of the technology– has largely disappeared as the tools have become easier to use and as faculty and students have become more familiar with those tools in their traditional face to face courses. As a result, I suspect that we will continue to see more of a melding of synchronous and asynchronous tools in all courses, be they traditional on-campus courses or non-traditional distance education courses.

Classroom Tech Bans Are Bullshit (or are they?): My next/current project

I was away from work stuff this past May– too busy with Will’s graduation from U of M followed quickly by China, plus I’m not teaching or involved in any quasi-administrative work this summer. As I have written about before,  I am no longer apologetic for taking the summer off, so mostly that’s what I’ve been doing. But now I need to get back to “the work–” at least a leisurely summer schedule of “the work.”

Along with waiting for the next step in the MOOC book (proofreading and indexing, for example), I’m also getting started on a new project. The proposal I submitted for funding (I have a “faculty research fellowship” for the fall term, which means I’m not teaching though I’m still supposed to do service and go to meetings and such) is officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies.” Unofficially, it’s called “Classroom Tech Bans are Bullshit.” 

To paraphrase: there have been a lot of studies (mostly in Education and/or Psychology) on the student use of mobile devices in learning settings (mostly lecture halls– more on that in a moment). Broadly speaking, most of these studies have concluded these technologies are bad because students take worse notes than they would with just paper and pen, and these tools make it difficult for students to pay attention.  Many of these studies have been picked up in mainstream media articles, and the conclusions of these studies are inevitably simplified with headlines like “Students are Better Off Without a Laptop In the Classroom.”

I think there are a couple of different problems with this– beyond the fact that MSM misinterprets academic studies all the time. First, these simplifications trickle back into academia when faculty who do not want these devices in their classrooms use these articles to support laptop/mobile device bans. Second, the methodologies and assumptions behind these studies are very different from the methodologies and assumptions in writing studies. We tend to study writing– particularly pedagogy– with observational, non-experimental, and mixed-method research designs, things like case studies, ethnographies, interviews, observations, etc., and also with text-based work that actually looks at what a writer did.

Now, I think it’s fair to say that those of us in Composition and Rhetoric generally and in the “subfield/specialization” of Computers and Writing (or Digital Humanities, or whatever we’re calling this nowadays) think tech bans are bad pedagogy. At the same time, I’m not aware of any scholarship that directly challenges the premise of the Education/Psychology scholarship calling for bans or restrictions on laptops and mobile devices in classrooms. There is scholarship that’s more descriptive about how students use technologies in their writing process, though not necessarily in classrooms– I’m thinking of the essay by Jessie Moore and a ton of other people called “Revisualizing Composition” and the chapter by Brian McNely and Christa Teston “Tactical and Strategic: Qualitative approaches to the digital humanities” (in Bill Hart-Davidson and Jim Ridolfo’s collection Rhetoric and the Digital Humanities.) But I’m not aware of any study that researches why it is better (or worse) for students to use things like laptops and cell phones while actually in the midst of a writing class.

So, my proposal is to spend this fall (or so) developing a study that would attempt to do this– not exactly a replication of one or more of the experimentally-driven studies done about devices and their impact on note taking, retention, and distraction, but a study that is designed to examine similar questions in writing courses using methodologies more appropriate for studying writing. For this summer and fall, my plan is to read up on the studies that have been done so far (particularly in Education and Psych), use those to design a study that’s more qualitative and observational, and recruit subjects and deal with the IRB paperwork. I’ll begin some version of the study in earnest in the winter term, January 2020.

I have no idea how this is going to work out.

For one thing, I feel like I have a lot of reading to do. I think I’m right about the lack of good scholarship within the computers and writing world about this, but maybe not. As I typed that sentence in fact, I recalled a distant memory of a book Mike Palmquist, Kate Kiefer, Jake Hartvigsen, and Barbara Godlew wrote called Transitions: Teaching Writing in Computer-Supported and Traditional Classrooms. It’s been a long time since I read that (it was written in 1998), but I recall it as being a comparison between writing classes taught in a computer lab and not. Beyond reading in my own field of course, I am slowly making my way through these studies in Education and Psych, which present their own kinds of problems. For example, my math ignorance means I have to slip into  “I’m just going to have to trust you on that one” mode in the discussions about statistical significance.

One article I came across and read (thanks to this post from the Tattooed Prof, Kevin Gannon) was “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014).” As the title suggests, this study by Kayla Morehead, John Dunlosky, and Katherine A. Rawson replicates a 2014 study (which is kind of the “gold standard” in the ban-laptops genre) by Pam Mueller and Daniel Oppenheimer, “The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking.” The gist of these two articles is all in the titles: Mueller and Oppenheimer’s conclusion was that it was much better to take notes by hand, while Morehead, Dunlosky, and Rawson’s conclusion was not so much. Interestingly enough, the more recent study also questioned the premise of the value of note taking generally, since one of their control groups didn’t take notes and did about as well on the post-test of the study.

Reading these two studies has been a quite useful way for me to start this work. Maybe I should have already known this, but there are actually two fundamentally different issues at stake with these classroom tech bans (setting aside assumptions about the lecture hall format and the value of taking notes as a way of learning). Mueller and Oppenheimer claimed with their study that handwriting was simply “better.” That’s a claim that I have always thought was complete and utter bullshit, and it’s one that I think was debunked a long time ago. Way back in the 1990s when I first got into this work, there were serious people in English and in writing studies pondering what was “better,” a writing class equipped with computers or not, students writing by hand or on computers. We don’t ask that question anymore because it doesn’t really matter which is “better;” writers use computers to write and that’s that. Happily, I think Morehead, Dunlosky, and Rawson counter Mueller and Oppenheimer’s study rather persuasively. It’s worth noting that so far, MSM hasn’t quite gotten the word out on this.

But the other major argument for classroom tech bans– which neither of these studies addresses– is about distraction, and that’s where the “or are they?” part of my post title comes from. I still have a lot more reading to do on this (see above!), but it’s clear to me that the distraction issue deserves more attention since social media applications are specifically designed to distract and demand attention from their users. They’re like slot machines, and it’s clear that “the kids today” are not the only ones easily taken in. When I sit in the back of the room during a faculty meeting and I glance at the screens of my colleagues’ laptops in front of me, it’s pretty typical to see Facebook or Twitter or Instagram open, along with a window for checking email, grading papers– or, on rare occasion, taking notes.

Anyway, it’s a start. And if you’ve read this far and you’ve got any ideas on more research/reading or how to design a study into this, feel free to comment or email or what-have-you.

Pre-CCCCs 2017

I’m heading to Portland, Oregon next week for the annual Conference on College Composition and Communication. My involvement this year is kind of in the “alternative” category of things. On Wednesday, I’ll be participating in the Research Network Forum for the first time. On Thursday, I’ll be participating in the Digital Praxis Poster sessions, and I just finished creating the stuff I’ll have for my bit, “The Semester of Social Media Project.”

It’s a pretty straight-forward “show and tell” about an assignment I give in Writing for the World Wide Web where I ask students to “inhabit” some different social media platforms and to write about it. It’s not the fanciest of slideshows– maybe it’s even a little too simple to share in something called a “Digital Poster Session”– but my hope is that someone finds it kind of interesting and useful.

A less than complete recap/blog post about #cwcon 2016

I was in Rochester, New York last weekend for the annual Computers and Writing Conference at St. John Fisher College. This was not my first rodeo: I think I have given about 20 different presentations at probably a dozen different C&W meetings, maybe more. I have a love/hate relationship with the conference. C&W will always have a place in my heart because it was the first conference where I ever presented– back in 1994– and maybe because of that (and also because I’ve always thought of it as the conference that most closely aligns with my research and teaching interests), I have found the whole thing kind of frustrating in recent years.

A better and more complete (albeit more chaotic) way to get a sense of what happened this year is to go to Twitter and search for #cwcon. I tried to make a Storify of all the tweets, but the limit is 1000, so I was only able to get tweets from Sunday and some of Saturday. If I get around to it, maybe I’ll make another Storify or two– unless there’s a better/easier way to capture all those tweets.

Anyway, a recap from my POV:

  • Bill Hart-Davidson and I drove there together. Bill and I have known each other since 1993 (he was on that panel with me at C&W in Columbia, Missouri back in 1994, and he claimed on this trip that Cindy Selfe either chaired our session or was in the audience, I can’t remember which) and we both like to talk a lot, so there and back was pretty much seven straight hours of the Bill and Steve talk show. It’s a good thing no one else was with us.
  • We got there Wednesday late in the afternoon and played a quick nine holes with Nick Carbone before meeting up with a bunch of Ride2CW and conference goers at a lovely place called Tap & Mallet.
  • Bill and I stayed in the dorms at St. John Fisher. Dorms are a staple of #cwcon, but this is the first time I’ve actually stayed in them, mainly because ew, dorms. But given that the hotels for the conference were almost 10 miles away and St. John Fisher is a small private college, we both opted for the calculated risk that these dorms would be okay. And that risk paid off, too. The only things missing from the room were a television and, oddly, a garbage can.
  • Bill ran a workshop Thursday, so I ended up hanging out with Doug Walls (soon to be faculty at NC State, congrats to him) for a lot of the day and then working on school stuff back in the garbage can-less dorm.
  • Had kind of a weird Friday morning because I woke up for no reason at 5 am or so, went back to sleep thinking that I’d wake up at 8-ish and I ended up sleeping until 10 am. So, with the morning of the first day of presentations thoroughly trashed, I went to the George Eastman Museum instead. Pretty cool, actually.
  • Friday afternoon, I saw some presentations– a good one from Alex Reid, and an interesting/odd session from some folks at East Carolina called “Object-Oriented Research Methods and Methodologies for Open, Participatory Learning” which was not at all what I was expecting. It ended up mostly being about using fortune tellers/cootie catchers as a sort of heuristic for writing research. Showed up a little late for a panel where Bradley Dilger and crew were talking about the Corpus & Repository of Writing project. Interestingly, there were a number of talks/presentations/workshops on methods for capturing and/or mining a lot of “big data” in writing– well, big for our field at least. What I didn’t see much of was what all this mining and corpus-building gets us. Maybe the results will come eventually.
  • Went to the banquet/awards/Grabill keynote. More on the awards thing in a moment, but to kick off the after-eating festivities, there was a tribute video to Cindy and Dickie Selfe who are retiring this year. The set-up for the banquet made watching the video pretty impossible, but it’s on YouTube and it’s definitely worth a watch. Both of them have been such giants in the field, and it really is a lovely send-off/tribute.
  • Jeff Grabill gave a good talk– it’s right here, actually. I think he thought that he was being more confrontational than he actually was, but that’s another story. Alex Reid has a good blog post about this and one of the other key things going on this year at the conference, which has to do with what I would describe as a sort of question of naming and identity.
  • Speaking of which: my session was on Saturday morning. My presentation was about correspondence schools and how they foreshadow and/or set the groundwork for MOOCs. It was okay, I guess. It was a sort of mash-up version of a part of the first chapter/section of this book I’ve been working on for far too long (which is also one of the reasons why I’m not going to post it here for now), and I think it’s good stuff, but it wasn’t really that dynamic of a presentation. I ended up being paired up with Will Hochman, and his approach was much more of an interactive brainstorming session on trying to come up with a new name for the conference. I don’t know if we “solved” the problem or not, but it was a fun discussion. Lauren Rae Hall created a cool little conference name generator based on stuff we talked about.
  • Walls made me skip the lunch keynote to get pizza (twisted my arm, I tell you!) and then I went to the town hall session where Bill was on the panel. Alex Reid blogged a bit about this (and other things) in this post; while I suppose it was interesting, it was another example of a session that is advertised/intended as one where there is going to be a lot of audience discussion and where, after the many people on the panel all said their bits, we were pretty much out of time. And then we drove home.

So it was all good. Well done, St. John Fisher people! Though I can’t end this without beating the drum on three recurring themes for me, the dislike/grumpy side of my relationship with #cwcon:

First, I think the work at reconsidering the name of the conference is perhaps symptomatic of the state of affairs with the general theme of the conference. “Computers and Writing” is a bit anachronistic since the definitions of both “computers” and “writing” have been evolving, but it wouldn’t be the first name of an organization that seems out of date with what it is– the “Big Ten” with its 14 teams immediately comes to mind. So maybe the identity issues about the name of the conference have more to do with the fact that the subject of the conference is no longer a comparatively marginalized sub-discipline within composition and rhetoric.

Take that with a significant grain of salt. I was on a roundtable about the “end” of computers and writing in 2001 and we’re still chugging along. But that video honoring Cindy and Dickie Selfe featured some other senior members in the C&W community remembering the “old days” of the 1980s and even early 90s (really not that long ago, relatively speaking) when anyone in an English department working with a computer was considered a “freak.” Scholarship and teaching about technology and materiality (not to mention “multimodality” which often enough implies computers) might not be at the center of the field, but it’s not on the “lunatic fringes” of it anymore either. That’s good– it means folks like the Selfes were “right”– but it also makes #cwcon a little less “special.” I can go to the CCCCs and see lots of the same kinds of presentations I saw this last weekend, not to mention HASTAC.

Second, I really wish there were a way to hold this conference more regularly someplace easy to get to. Rochester wasn’t bad (though it’s still a regional airport), but in recent years, #cwcon has been in Menomonie, Wisconsin; Pullman, Washington; and Frostburg, Maryland. And next year, it’s going to be Findlay, Ohio, which is good for me because that’s only 90 miles away, but not exactly easy for anyone planning on not driving there.

And third, there’s still the lack of basic infrastructure. As Bill and I discussed in our 14+ hours of car time, HASTAC specifically and Digital Humanities generally have their own organizational problems, but at least there are web sites and organizations out there. We’re a committee buried in a large subset (CCCCs) of an even larger organization (NCTE), and as far as a web site goes, um, no, not so much. During the many awards, I tweeted that it sure would be nice if there was a page of winners of various things posted somewhere. Someone who will remain nameless said it was all they could do to not tweet back something snarky about “where.”

If I get the time or energy to track that info down, I’ll post it here or somewhere else….

What’s the difference between HASTAC and CWCON? Organization and a web site

I went to the HASTAC conference this week/weekend instead of the Computers and Writing conference (also this week/weekend), mostly because of geography. HASTAC was at Michigan State, which is about an hour’s drive from my house. Computers and Writing (let’s call it CWCON for the rest of this post) was at the University of Wisconsin-Stout, which is in the middle of freakin’ nowhere: Menomonie, Wisconsin, a small town a little more than an hour’s drive from Minneapolis. I also have some bad memories from the job market about UW-Stout, but hey, those are my own problems, and I’m pretty sure that all of the folks associated with those problems are long gone.

Anyway, I’ve been to CWCON about every other year or so (give or take) since 1994, so my guiding question for much of this conference was how would I compare HASTAC to CWCON? The short answer is they are very similar: that is, there was little going on at HASTAC that would have been out of place at CWCON, and vice versa. Both are about the intersections of the digital (e.g., “computer stuff,” technology, emerging media, etc.) and the humanities, though “humanities” probably includes more disciplines at HASTAC, whereas at CWCON, most participants identify in some fashion with composition and rhetoric.

Granted, my HASTAC experience was skewed because I attended panels that were writing studies-oriented (more on that after the jump), but I didn’t see much of anything on the program that would have been completely out of place at CWCON.  HASTAC had about as much about pedagogy on the program as I’ve seen before at CWCON. Both of the keynotes I saw were ones that would be welcome at CWCON, particularly the second one by rootoftwo (I missed the third, unfortunately). Both conferences were about the same size, mid-300s or so. Both are organizations that have been promoted and propelled by prominent women scholars in the field– Cindy Selfe and Gail Hawisher for CWCON, and Cathy Davidson for HASTAC.

So, what was different? There were more grad students and younger folks at HASTAC, but (I was told) that is mostly because the conference and its origins are more grad student-focused. CWCON is arguably a little more geeky and “fun,” with things like bowling night and karaoke and the like, though maybe there was some of that stuff at HASTAC and I just didn’t know about it. I think there is housing in the dorms at HASTAC, though I stayed at the very affordable and convenient Kellogg Center. And of course I know more people who go to CWCON.

But at the end of the day, I think the most significant difference between these two groups boils down to organization and a web site.

Computers and Writing, as I have complained about before, has neither. It is a loosely formed neo-socialist anarchist collective committee organized under the umbrella of the CCCCs (which itself is technically a group organized under the umbrella of NCTE) that meets at the CCCCs mainly to figure out where the next conference is going to be– and often enough, deciding on where the next conference is going to be is tricky. The web site, computersandwriting.org, is mostly non-functional.

The Humanities, Arts, Science, and Technology Alliance and Collaboratory (aka HASTAC) is an organized community that has an executive board, a steering committee, council of advisors, a staff (at least of sorts), lots of related groups, affiliated organizations, and (of course) a web site. According to the web site, HASTAC is an “alliance of nearly 13,000,” though I don’t quite know what that means. Before she introduced the first keynote of the conference on Thursday, Cathy Davidson took a moment to talk about the upcoming revisions to the HASTAC web site, which she claimed was the oldest (and I think most active?) “social media” web site for academics. I might be getting some of that wording wrong, but it was something along those lines.

Does any of this matter? Maybe not. I mean, “bigger” is not automatically “better.” So what if HASTAC has 13,000 in their “alliance,” if “Digital Humanities” is the term of art (in the sense that the National Endowment for the Humanities has an Office of Digital Humanities and not an Office of Computers and Writing), if CWCON remains the small conference of a sub-specialization within composition and rhetoric, a discipline that many also view (and the MLA wishes this were the case) as a sub-field of “English?” What do we care? In thinking about this post, I revisited some of the discussion on tech-rhet last year about the decay of the computersandwriting.org web site. Back then, I stirred the pot/rattled the cage a bit by suggesting that a) maybe we need an actual organization, and b) maybe we need a robust web site. Both of those ideas were more or less poo-poo-ed, in part because I think a lot of people like the way things are. CWCON has always been a “non-organization” organization that has had a groovy and rebellious feel to it, and I mean all that as a positive. And given that the conference has now been put on 31 times (I think?), it’s hard to dispute the success of this approach.

On the other hand, if folks associated with CWCON want to be taken seriously by academics outside of that community, I think it matters a great deal.

A big theme amongst the CWCON crowd in recent years (and I include myself in this) has been feeling miffed/angered/hurt/etc. about how scholars in the “Digital Humanities” have ignored the decades of work we’ve done in comp/rhet generally, particularly folks who identify with CWCON. Cheryl Ball wrote a pointed editorial in Kairos about this (though she was taking on the PMLA more specifically), and I believe in her keynote at this year’s CWCON (I wasn’t there, just judging from Twitter), she again expressed frustration about how comp/rhet scholars doing DH work (CWCON, Kairos, etc.) are ignored, how “we” have been doing this work for a lot longer and better, and so forth.

I share that frustration, believe me. But at the end of the day, the CWCON community can’t have it both ways. It can’t be both a free-wheeling, non-organized “happening” of a group and be miffed/angered/hurt/etc. when the rest of academia interested in DH either doesn’t know we exist or ignores us because we’re not organized and visible to anyone outside of the group.

All of which is to say I have three general take-aways from HASTAC:

  • HASTAC was good, I would go again, and I am generally interested in seeking out/attending other DH conferences with the confidence that yes indeed, the kinds of things I might propose for CWCON would probably be welcome in the realm of DH. The one caveat to that is my general resistance to academic conferences of all sorts, but that’s another issue.
  • HASTAC could learn a lot from CWCON, sure, but CWCON could learn a lot from HASTAC too. I don’t know how much of this was the MSU location and how much of it was HASTAC generally, but I liked the presentation formats and I also thought they had some creative ways for getting people to know each other, like “sign-ups” for particular restaurants to go to as a group.
  • I’m not interested in starting an organization (that takes way too much work and isn’t something I can do alone), but I’m thinking very seriously about creating a web site that could be what I’d like to see computersandwriting.org be: a repository for comp/rhet things relevant to DH things, and vice-versa. I found out that computersandwriting.net is actually available, but that would be a little too snarky, and besides, I think the move should be to make connections with the DH community. So I thought maybe writinganddh.org or writing-dh.org, or maybe something like ws-dh.org (where “ws” means “writing studies”). If you have any ideas and/or thoughts on pitching in (I mean to write– I’ll fund it out of my own pocket, at least for a year), let me know.

More specifically about what I did at HASTAC after the jump.


“Rhetoric and the Digital Humanities,” Edited by Jim Ridolfo and Bill Hart-Davidson

I’ve blogged about “the Digital Humanities” several times before. Back in 2012, I took some offense at the MLA’s “discovery” of “digital scholarship” because they essentially ignored the work of anyone other than literature scholars– in other words, comp/rhet folks who do things with technology need not apply. Cheryl Ball had an editorial comment in Kairos back then that I thought was pretty accurate– though it’s also worth noting that in the very same issue of Kairos, Ball also praised the MLA conference for its many “digital humanities” presentations.

Almost exactly a year ago, I had a post here called “If you can’t beat ’em and/or embracing my DH overlords and colleagues,” in which I was responding to a critique by Adam Kirsch that Marc Bousquet had written about. Here’s a long quote from myself that I think is all the more relevant now:

I’ve had my issues with the DH movement in the past, especially as it’s been discussed by folks in the MLA– see here and especially here.  I have often thought that a lot of the scholars in digital humanities are really literary period folks trying to make themselves somehow “marketable,” and I’ve seen a lot of DH projects that don’t seem to be a whole lot more complicated than putting stuff up on the web. And I guess I resent and/or am annoyed with the rise of digital humanities in the same way I have to assume the folks who first thought up MOOCs (I’m thinking of the Stephen Downes and George Siemens of the world) way before Coursera and Udacity and EdX came along are annoyed with the rise of MOOCs now. All the stuff that DH-ers talk about as new has been going on in the “computers and writing”/”computers and composition” world for decades and for these folks to come along now and to coin these new terms for old practices– well, it feels like a whole bunch of work of others has been ignored and/or ripped off in this move.

But like I said, if you can’t beat ’em, join ’em. The “computers and writing” world– especially vis a vis its conference and lack of any sort of unifying “organization”– seems to me to be fragmenting and/or drifting into nothingness at the same time that DH is strengthening to the point of eliciting backlash pieces in a middle-brow publication like the New Republic. Plenty of comp/rhet folk have already made the transition, at least in part. Cheryl Ball has been doing DH stuff at MLA lately and had an NEH startup grant on multimedia publication editing; Alex Reid has had a foot in this for a few years now; Collin Brooke taught what was probably a fantastic course this past spring/winter, “Rhetoric, Composition, and Digital Humanities;” and Bill Hart-Davidson and Jim Ridolfo are editing a book of essays that will come out in the fall (I think) called Rhetoric and the Digital Humanities. There’s an obvious trend here.

And this year, I’m going to HASTAC instead of the C&W conference (though this mostly has to do with the geographic reality that HASTAC is being hosted just up the road from me at Michigan State University) and I’ll be serving as the moderator/host of a roundtable session about what the computers and writing crowd can contribute to the DH movement.

In other words, I went into reading Jim and Bill’s edited collection Rhetoric and the Digital Humanities with a realization/understanding that “Digital Humanities” has more or less become the accepted term of art for everyone outside of computers and writing, and if the C&W crowd wants to have any interdisciplinary connection/relevance to the rest of academia, then we’re going to have to make connections with these DH people. In a nutshell, that’s what I think Jim and Bill’s book is about. (BTW and “full disclosure,” as they say: Jim and Bill are both friends of mine, particularly Bill, whom I’ve known from courses taken together, conferences, project collaborations, dinners, golf outings, etc., etc., etc. for about 23 or so years).
