What Counts as Cheating? And What Does AI Smell Like?

Cheating is at the heart of the fear too many academics have about ChatGPT, and I’ve seen a lot of hand-wringing articles from the MSM posted on Facebook and Twitter. One of the more provocative screeds on this I’ve seen lately was in the Chronicle of Higher Education, “ChatGPT is a Plagiarism Machine” by Joseph M. Keegin. In a nutshell, I think this guy is unhinged, but he’s also not alone.

Keegin claims he and his fellow graduate student instructors (he’s a PhD candidate in Philosophy at Tulane) are encountering loads of student work that “smelled strongly of AI generation,” and he and some of his peers have resorted to giving in-class handwritten tests and oral exams to stop the AI cheating. “But even then,” Keegin writes, “much of the work produced in class had a vague, airy, Wikipedia-lite quality that raised suspicions that students were memorizing and regurgitating the inaccurate answers generated by ChatGPT.”

(I cannot help but recall one of the great lines from [the now problematically icky] Woody Allen in Annie Hall: “I was thrown out of college for cheating on a metaphysics exam; I looked into the soul of the boy sitting next to me.” But I digress.)

If Keegin is exaggerating in order to rattle readers and get some attention, then mission accomplished. But if he’s being sincere– that is, if he really believes his students are cheating everywhere on everything all the time, and the way they’re cheating is by memorizing and then rewriting ChatGPT responses to Keegin’s in-class writing prompts– then these are the sort of delusions that should be discussed with a well-trained and experienced therapist. I’m not even kidding about that.

Now, I’m not saying that cheating is nothing to worry about at all, and if a student were to turn in whatever ChatGPT provided for a class assignment with no alterations, then a) yes, I think that’s cheating, but b) that’s the kind of cheating that’s easy to catch, and c) Google is a much more useful cheating tool for this kind of thing. Keegin is clearly wrong about ChatGPT being a “Plagiarism Machine” and I’ve written many many many different times about why I am certain of this. But what I am interested in here is what Keegin thinks does and doesn’t count as cheating.

The main argument he’s trying to make in this article is that administrators need to step in to stop this never-ending battle against ChatGPT plagiarism. Universities should “devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.”

Keegin doesn’t define what he means by cheating (though he does give some examples that don’t actually seem like cheating to me), but I think we can figure it out by reading what he means by a “meaningful education.” He writes (I’ve added the emphasis) “A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment.”

So, I think Keegin sees education as an activity where students labor alone at mastering the material delivered by the instructor. Knowledge is not something shared or communal, and it certainly isn’t created through interactions with others. Rather, students receive knowledge, do the work the instructor asks of them, do that work alone, and then reproduce the knowledge investment provided by the instructor– with interest. So any work a student might do that involves anyone or anything else– other students, a tutor, a friend, a Google search, and yes, ChatGPT– is an opportunity for cheating.

More or less, this is what Paulo Freire meant by the ineffective and unjust “banking model of education” which he wrote about over 50 years ago in Pedagogy of the Oppressed. Freire’s work remains very important in many fields specifically interested in pedagogy (including writing studies), and Pedagogy of the Oppressed is one of the most cited books in the social sciences. And yet, I think a lot of people in higher education– especially in STEM fields, business-oriented and other technical majors, and also in disciplines in the humanities that have not been particularly invested in pedagogy (philosophy, for example)– are okay with this system. These folks think education really is a lot like banking and “investing,” and they don’t see any problem with that metaphor. And if that’s your view of education, then getting help from anyone or anything other than the teacher is metaphorically like robbing a bank.

But I think it’s odd that Keegin is also upset with “credentialing” in higher education. That’s a common enough complaint, I suppose, especially when we talk about the problems with grading. But if we were to do away with degrees and grades as an indication of successful learning (or at least completion) and if we instead decided students should learn solely for the intrinsic value of learning, then why would it even matter if students cheated or not? That’d be completely their problem. (And btw, if universities did not offer credentials that have financial, social, and cultural value in the larger society, then universities would cease to exist– but that’s a different post).

Perhaps Keegin might say “I don’t have a problem with students seeking help from other people in the writing center or whatever. I have a problem with students seeking help from an AI.” I think that’s probably true with a lot of faculty. Even when professors have qualms about students getting a little too much help from a tutor, they still generally do see the value and usually encourage students to take advantage of support services, especially for students at the gen-ed levels.

But again, why is that different? If a student asks another human for help brainstorming a topic for an assignment, suggesting some ideas for research, creating an outline, suggesting some phrases to use, and/or helping out with proofreading, citation, and formatting, why is that not cheating when the help comes from a human but cheating when it comes from ChatGPT? And suppose a student instead turns to the internet and consults things like CliffsNotes, Wikipedia, Course Hero, other summaries and study guides, etc. etc.; is that cheating?

I could go on, but you get the idea. Again, I’m not saying that cheating in general and with ChatGPT in particular is nothing at all to worry about. And also to be fair to Keegin, he even admits “Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right.” But the more of these paranoid and shrill commentaries I read about “THE END” of writing assignments and how we have got to come up with harsh punishments for students so they stop using AI, the more I think these folks are just scared that they’re not going to be able to give students the same bullshitty non-teaching writing assignments that they’ve been doing for years.

Okay, Now Some Students Should Fail (or, resuming “normal” expectations post-pandemic)

In April 2020, I wrote a post with the headline “No One Should Fail a Class Because of a Fucking Pandemic.” This, of course, was in the completely bonkers early days of the pandemic when everyone everywhere suddenly sheltered in place, when classes suddenly went online, and when the disease was disrupting all of our lives– not to mention the fact that millions of people were getting very sick, and a lot of them were dying. Covid hit many of my students especially hard, which in hindsight is not that surprising since a lot of the students at EMU (and a lot of the students I was teaching back then) come from working poor backgrounds, or they are themselves adult (aka “non-traditional”) students with jobs, sig-Os, houses, kids, etc.

As I wrote back then, before Covid and when it came to things like attendance and deadlines, I was kind of a hard-ass. I took attendance every day for f2f classes and I also had an attendance policy of sorts for online classes. There was no such thing as an excused absence; I allowed students to miss up to the equivalent of two weeks of classes with no questions asked, but there were no exceptions for things like funerals or illness. Unless a student worked out something with me before an assignment was due, late work meant an automatic grade deduction. I’ve been doing it this way since I started as a graduate assistant because it was the advice I was given by the first WPA/professor who supervised and taught me (and my fellow GAs) how to teach. I continued to run a tight ship like this for two reasons: first, I need students to do their job and turn stuff in on time so I can do my job of teaching by responding to their writing. Second, my experience has been that if instructors don’t give clear and unwavering rules about attendance and deadlines, then a certain number of students will chronically not attend and miss deadlines. That just sets these students up to fail, and it creates more work for me.

Pretty much all of this went out the window in Winter 2020 when Covid was raging. EMU allowed students to convert classes they were enrolled in from a normal grading scheme to a “pass/fail” grade, which meant that a lot of my students who would have otherwise failed (or passed with bad grades) ended up passing because of this, and also because I gave people HUGE breaks. My “lighten up” approach continued through the 2020-21 and 2021-22 school years, though because all of my teaching was online and asynchronous, the definition of “attend” was a bit more fuzzy. I kept doing this because Covid continued to be a problem– not as big of a problem as it was in April 2020, but lots of people were still getting infected and people were still dying, especially people who were stupid enough to not get the vaccine.

By the end of the 2021-22 school year, things were returning to normal. Oh sure, there was still plenty of nervousness about the virus around campus and such, but the end of the pandemic was near. The most serious dangers of the disease had passed because of a weaker version of the virus, vaccinations, and herd immunity. So I was ready for a return to “normal” for the 2022-23 school year.

But my students weren’t quite ready– or maybe a better way of putting it is Covid’s side-effects continued.

In fall 2022, I taught a f2f section of first year writing, the first f2f section for me since before the pandemic. Most of the students had been in all (or mostly) online classes since March 2020, meaning this was the first f2f semester for most of them, too. Things got off to a rough start with many students missing simple deadlines, blowing off class, and/or otherwise checking out in the first couple of weeks. I felt a bit the same way– not so much blowing stuff off, but after not teaching in real time in front of real people for a couple of years, I was rusty. It felt a bit like getting back on a bicycle after not riding at all for a year or two: I could still do it, but things started out rocky.

So I tried to be understanding and cut students some slack, but I also wanted to get them back on track. It still wasn’t going great. Students were still not quite “present.” I remember at one point, maybe a month into the semester, a student asked quite earnestly “Why are you taking attendance?” It took a bit for me to register the question, but of course! If you’ve been in nothing but online classes for the last two years, you wouldn’t have had a teacher who took attendance because they’d just see the names on Zoom!

There came a point just before the middle of the term when all kinds of students were crashing and burning, and I put aside my plans for the day and just asked “what’s going on?” A lot of students suddenly became very interested in looking at their shoes. “You’re not giving us enough time in class to do the assignments.” That’s what homework is for, I said. “This is just too much work!” No, I said, it’s college. I’ve been doing this for a long time, and it’s not too much, I assure you.

Then I said “Let me ask you this– and no one really needs to answer this question if you don’t want to. How many of you have spent most of the last two years getting up, logging into your Zoom classes, turning off the camera, and then going on to do whatever else you wanted?” Much nodding and some guilty-looking smiles. “Oh, I usually just went back to bed,” one student said too cheerfully.

Now, look: Covid was hard on everyone for all kinds of different reasons. I get it. A lot of sickness and death, a lot of trauma, a lot of remaining PTSD and depression. Everyone struggled. But mostly blowing off school for two years? On the one hand, that’s on the students themselves because they had to know that it would turn out badly. On the other hand, how does a high school or college teacher allow that to happen? How does a teacher– even a totally burnt-out and overworked one– just not notice that a huge percentage of their students are not there at all?

The other major Covid side-effect I saw last school year was a steep uptick in device distraction. Prior to Covid, my rule for cell phones was to leave them silenced/don’t let them be a distraction, and laptop use was okay for class activities like taking notes, peer review or research. Students still peeked at text messages or Facebook or whatever, but because they had been socialized in previous high school and college f2f classes, students also knew that not paying attention to your peers or the teacher in class because you are just staring at your phone is quite rude. Not to mention the fact that you can’t learn anything if you’re not paying attention at all.

But during Covid, while these students were sort of sitting through (or sleeping through) Zoom classes with their cameras turned off, they also lost all sense of the norms of how to behave with your devices in a setting like a classroom or a workplace. After all, if you can “attend” a class by yourself in the privacy of your own home without ever being seen by other students or the instructor and also without ever having to say anything, what’s the problem with sitting in class and dorking around with your phone?

I noticed this a lot during the winter 2023 semester, maybe because of what I assigned. For the first time in over 30 years of teaching first year writing, I assigned an actual “book” for the class (not a textbook, not a coursepack, but a widely available and best-selling trade book) by Johann Hari called Stolen Focus: Why You Can’t Pay Attention– and How to Think Deeply Again. This book is about “attention” in many different ways, and it discusses many different causes for why (according to Hari) we can’t pay attention: pollution, ADHD misdiagnoses, helicopter parenting, stress and exhaustion, etc. But he spends most of his time discussing what I think is the most obvious drain on our attention: cell phones and social media. So there I was, trying to lead a class discussion about a chapter from this book describing in persuasive detail why and how cell phone addiction is ruining all of us, while most of the students were staring into their cell phones.

One day in that class (and only once!), I tried an activity I would have never done prior to Covid. After I arrived and set up my things, I asked everyone to put all their devices– phones, tablets, laptops– on a couple of tables at the front of the classroom. Their devices would remain in sight but out of reach. There was a moment where the sense of panic was heavy in the air and more than a few students gave me a “you cannot be serious” look. But I was, and they played along, and we proceeded to have what I think was one of the best discussions in the class so far.

And then everyone went back to their devices for the rest of the semester.

So things this coming fall are going to be different. For both the f2f and online classes I’m scheduled to teach, I’ll probably begin with a little preamble along the lines of this post: this is where we were, let us acknowledge the difficulty of the Covid years, and, for at least while we are together in school (both f2f and online), let us now put those times behind us and return to some sense of normalcy.

In the winter term and for my f2f classes, I tried a new approach to attendance that I will be using again next year. The policy was the same as I had before– students who miss more than two weeks of class risk failing– but I phrased it a bit differently. I told students they shouldn’t miss any class, but because unexpected things come up, they had four excused absences. I encouraged them to think of this as insurance in case something goes wrong and not as justification for blowing off class. Plus I also gave students who didn’t miss any classes a small bonus for “perfect attendance.” I suppose it was a bit like offering “extra credit” in that the only students who ever do these assignments are the same students who don’t need extra credit, but a few students earned about a half-letter boost to their final grade. And yes, I also had a few students who failed because they missed too much class.

As for devices: The f2f class I’m teaching in the fall is first year writing and I am once again going to have students read (and do research about) Hari’s Stolen Focus. I am thinking about starting the term by collecting everyone’s devices, at least for the first few meetings and discussions of the book. Considering that Hari begins by recalling his own experiences of “unplugging” from his cell phone and social media for a few months, going for 70 or so minutes without being able to touch the phone might help some students understand Hari’s experiences a bit better.

I’m not doing this– returning to my hard-ass ways– just because I want things to be like they were in the before-times or out of some sense of addressing a problem with “the kids” today. I feel like lots of grown-ups (including myself) need to rethink their relationships with the devices and media platforms that fuel surveillance capitalism. At the same time, I think learning in college– especially in first year writing, but this is true for my juniors and seniors as well– should also include lessons in “adulting,” in preparing for the world beyond the classroom. And in my experience, the first two things anyone has got to do to succeed at anything is to show up and to pay attention.

My Talk About AI at Hope College (or why I still post things on a blog)

I gave a talk at Hope College last week about AI. Here’s a link to my slides, which also has all my notes and links. Right after I got invited to do this in January, I made it clear that I am far from an expert with AI. I’m just someone who had an AI writing assignment last fall (which was mostly based on previous teaching experiments by others), who has done a lot of reading and talking about it on Facebook/Twitter, and who blogged about it in December. So as I promised then, my angle was to stay in my lane and focus on how AI might impact the teaching of writing.

I think the talk went reasonably well. Over the last few months, I’ve watched parts of a couple of different ChatGPT/AI presentations via Zoom or as previously recorded, and my own take-away from them all has been a mix of “yep, I know that and I agree with you” and “oh, I didn’t know that, that’s cool.” That’s what this felt like to me: I talked about a lot of things that most of the folks attending knew about and agreed with, along with a few things that were new to them. And vice versa: I learned a lot too. It probably would have been a little more contentious had this taken place back when the freakout over ChatGPT was in full force. Maybe there still are some folks there who are freaked out by AI and cheating who didn’t show up. Instead, most of the people there had played around with the software and realized that it’s not quite the “cheating machine” being overhyped in the media. So it was a good conversation.

But that’s not really what I wanted to write about right now. Rather, I just wanted to point out that this is why I continue to post here, on a blog/this site, which I have maintained now for almost 20 years. Every once in a while, something I post “lands,” so to speak.

So for example: I posted about teaching a writing assignment involving AI at about the same time the MSM was freaking out about ChatGPT. Some folks at Hope read that post (which has now been viewed over 3000 times), and they invited me to give this talk. Back in fall 2020, I blogged about how weird I thought it was that all of these people were going to teach online synchronously over Zoom. Someone involved with the Media & Learning Association, which is a European/Belgian organization, read it, invited me to write a short article based on that post, and also invited me to be on a Zoom panel that was a part of a conference they were having. And of course all of this was the beginning of the research and writing I’ve been doing about teaching online during Covid.

Back in April 2020, I wrote a post “No One Should Fail a Class Because of a Fucking Pandemic;” so far, it’s gotten over 10,000 views, it’s been quoted in a variety of places, and it was why I was interviewed by someone at CHE in the fall. (BTW, I think I’m going to write an update to that post, which will be about why it’s time to return to some pre-Covid requirements). I started blogging about MOOCs in 2012, which led to a short article in College Composition and Communication and numerous more articles and presentations, a few invited speaking gigs (including TWO conferences sponsored by the University of Naples on the Isle of Capri), an edited collection, and a book.

Now, most of the people I know in the field who once blogged have stopped (or mostly stopped) for one reason or another. I certainly do not post here nearly as often as I did before the arrival of Facebook and Twitter, and it makes sense for people to move on to other things. I’ve thought about giving it up, and there have been times where I didn’t post anything for months. Even the extremely prolific and smart local blogger Mark Maynard gave it all up, I suspect because of a combination of burn-out, Trump being voted out, and the additional work/responsibility of the excellent restaurant he co-owns/operates, Bellflower.

Plus if you do a search for “academic blogging is bad,” you’ll find all sorts of warnings about the dangers of it– all back in the day, of course. Deborah Brandt seemed to think it was mostly a bad idea (2014); The Guardian suggested it was too risky (2013), especially for grad students posting work in progress. There were lots of warnings like this back then. None of them ever made any sense to me, though I didn’t start blogging until after I was on the tenure-track here. And no one at EMU has ever said anything negative to me about doing this, and that includes administrators even back in the old days of EMUTalk.

Anyway, I guess I’m just reflecting/musing now about why this very old-timey practice from the olde days of the Intertubes still matters, at least to me. About 95% of the posts I’ve written are barely read or noticed at all, and that’s fine. But every once in a while, I’ll post something, promote it a bit on social media, and it catches on. And then sometimes, a post becomes something else– an invited talk, a conference presentation, an article. So yeah, it’s still worth it.

Is AI Going to be “Something” or “Everything?”

Way back in January, I applied for release time from teaching for one semester next year– either a sabbatical or what’s called here a “faculty research fellowship” (FRF)– in order to continue the research I’ve been doing about teaching online during Covid. This is work I’ve been doing since fall 2020, including a Zoom talk at a conference in Europe and a survey I ran for about six months; from that survey, I was able to recruit and interview a bunch of faculty about their experiences. I’ve gotten a lot out of this work already: a couple conference presentations (albeit in the kind of useless “online/on-demand” format), a website (which I had to code myself!), an article, and, just last year, one of those FRFs.

Well, a couple weeks ago, I found out that I will not be on sabbatical or FRF next year. My proposal, which was about seeking time to code and analyze all of the interview transcripts I collected last year, got turned down. I am not complaining about that: these awards are competitive, and I’ve been fortunate enough to receive several of these before, including one for this research. But not getting release time is making me rethink how much I want to continue this work, or if it is time for something else.

I think studying how Covid impacted faculty attitudes about online courses is definitely something important worth doing. But it is also looking backwards, and it feels a bit like an autopsy or one of those commissioned reports. And let’s be honest: how many of us want to think deeply about what happened during the pandemic, recalling the mistakes that everyone already knows they made? A couple years after the worst of it, I think we all have a better understanding now why people wanted to forget the 1918 pandemic.

It’s 20/20 hindsight, but I should have put together a sabbatical/research leave proposal about AI. With good reason, the committee that decides on these release time awards tends to favor proposals that are for things that are “cutting edge.” They also like to fund releases for faculty who have book contracts who are finishing things up, which is why I have been lucky enough to secure these awards both at the beginning and end of my MOOC research.

I’ve obviously been blogging about AI a lot lately, and I have casually started amassing quite a number of links to news stories and other resources related to Artificial Intelligence in general, ChatGPT and OpenAI in particular. As I type this entry in April 2023, I already have over 150 different links to things without even trying– I mean, this is all stuff that just shows up in my regular diet of social media and news. I even have a small invited speaking gig about writing and AI, which came about because of a blog post I wrote back in December— more on that in a future post, I’m sure.

But when it comes to me pursuing AI as my next “something” to research, I feel like I have two problems. First, it might already be too late for me to catch up. Sure, I’ve been getting some attention by blogging about it, and I had a “writing with GPT-3” assignment in a class I taught last fall, which I guess kind of puts me at least closer to being current with this stuff in terms of writing studies. But I also know there are already folks in the field (and I know some of these people quite well) who have been working on this for years longer than me.

Plus a ton of folks are clearly rushing into AI research at full speed. Just the other day, the CWCON at Davis organizers sent around a draft of the program for the conference in June. The Call For Proposals they released last summer describes the theme of this year’s event, “hybrid practices of engagement and equity.” I skimmed the program to get an idea of the overall schedule and some of what people were going to talk about, and there were a lot of mentions of ChatGPT and AI, which makes me think a lot of people are probably not going to be talking about the CFP theme at all.

This brings me to the bigger problem I see with researching and writing about AI: it looks to me like this stuff is moving very quickly from being “something” to “everything.” Here’s what I mean:

A research agenda/focus needs to be “something” that has some boundaries. MOOCs were a good example of this. MOOCs were definitely “hot” from around 2012 to 2015 or so, and there was a moment back then when folks in comp/rhet thought we were all going to be dealing with MOOCs for first year writing. But even then, MOOCs were just a “something” in the sense that you could be a perfectly successful writing studies scholar (even someone specializing in writing and technology) and completely ignore MOOCs.

Right now, AI is a myriad of “somethings,” but this is moving very quickly toward “everything.” It feels to me like very soon (five years, tops), anyone who wants to do scholarship in writing studies is going to have to engage with AI. Successful (and even mediocre) scholars in writing studies (especially anyone specializing in writing and technology) are not going to be able to ignore AI.

This all reminds me a bit about what happened with word processing technology. Yes, this really was something people studied and debated way back when. In the 1980s and early 1990s, there were hundreds of articles and presentations about whether or not to use word processing to teach writing— for example, “The Word Processor as an Instructional Tool: A Meta-Analysis of Word Processing in Writing Instruction” by Robert L. Bangert-Drowns, or “The Effects of Word Processing on Students’ Writing Quality and Revision Strategies” by Ronald D. Owston, Sharon Murphy, and Herbert H. Wideman. These articles were both published in the early 1990s in major journals, and both try to answer the question of which one is “better.” (By the way, most but far from all of these studies concluded that word processing is better in the sense that it helped students generate more text and revise more frequently. It’s also worth mentioning that a lot of this research overlaps with studies about the role of spell-checking and grammar-checking in writing pedagogy.)

Yet in my recollection of those times, this comparison between word processing and writing by hand was rendered irrelevant because everyone– teachers, students, professional writers (at least all but the most stubborn, as Wendell Berry declares in his now cringy and hopelessly dated short essay “Why I Am not Going to Buy a Computer”)– switched to word processing software on computers to write. When I started teaching as a grad student in 1988, I required students to hand in typed papers and I strongly encouraged them to write at least one of their essays with a word processing program. Some students complained because they were never asked to type anything in high school. By the time I started my PhD program five years later in 1993, students all knew they needed to type their essays on a computer and generally with MS Word.

Was this shift a result of some research consensus that using a computer to type texts was better than writing texts out by hand? Not really, and obviously, there are still lots of reasons why people write some things by hand– a lot of personal writing (poems, diaries, stories, that kind of thing) and a lot of note-taking. No, everyone switched because everyone realized word processing made writing easier (but not necessarily better) in lots and lots of different ways, and that was that. Even in the midst of this panicky moment about plagiarism and AI, I have yet to read anyone seriously suggest that we make our students give up Word or Google Docs and require them to turn in handwritten assignments. So, as a researchable “something,” word processing disappeared because (of course) everyone everywhere who writes obviously uses some version of word processing, which means the issue is settled.

One of the other reasons why I’m using word processing scholarship as my example here is because both Microsoft and Google have made it clear that they plan on integrating their versions of AI into their suites of software– and that would include MS Word and Google Docs. This could be rolling out just in time for the start of the fall 2023 semester, maybe earlier. Assuming this is the case, people who teach any kind of writing at any kind of level are not going to have time to debate if AI tools will be “good” or “bad,” and we’re not going to be able to study any sorts of best practices either. This stuff is just going to be a part of the everything, and for better or worse, that means the issue will soon be settled.

And honestly, I think the “everything” of AI is going to impact, well, everything. It feels to me a lot like when “the internet” (particularly with the arrival of web browsers like Mosaic in 1993) became everything. I think the shift to AI is going to be that big, and it’s going to have as big of an impact on every aspect of our professional and technical lives– certainly every aspect that involves computers.

Who the hell knows how this is all going to turn out, but when it comes to what this means for the teaching of writing, as I’ve said before, I’m optimistic. Just as the field adjusted to word processing (and spell-checkers and grammar-checkers, and really just the whole firehose of text from the internet), I think we’ll be able to adjust to this new something to everything too.

As far as my scholarship goes though: for reasons, I won’t be eligible for another release from teaching until the 2025-26 school year. I’m sure I’ll keep blogging about AI and related issues and maybe that will turn into a scholarly project. Or maybe we’ll all be on to something entirely different in three years….


What Would an AI Grading App Look Like?

While a whole lot of people (academics and non-academics alike) have been losing their minds lately about the potential of students using ChatGPT to cheat on their writing assignments, I haven’t read/heard/seen much about the potential of teachers using AI software to read, grade, and comment on student writing. Maybe it’s out there in the firehose stream of stories about AI I see every day (I’m trying to keep up a list on pinboard) and I’ve just missed it.

I’ve searched and found some discussion of using ChatGPT to grade on Reddit (here and here), and I’ve seen other posts about how teachers might use the software to do things other than grading, but that’s about it. In fact, the reason I’m thinking about this again now is not because of another AI story but because I watched a South Park episode about AI called “Deep Learning.” South Park has been a pretty uneven show for several years, but if you are a fan and/or interested in AI, this is a must-see. A lot happens in this episode, but my favorite reaction to ChatGPT comes from the kids’ infamous teacher, Mr. Garrison. While Garrison complains about grading a stack of long and complicated essays (which the students completed with ChatGPT), his boyfriend Rick tells him about ChatGPT, and Mr. Garrison has a far too honest reaction: “This is gonna be amazing! I can use it to grade all my papers and no one will ever know! I’ll just type the title of the essay in, it’ll generate a comment, and I don’t even have to read the stupid thing!”

Of course, even Mr. Garrison knows that would be “wrong” and he must keep this a secret. That probably explains why I still haven’t come across much about an AI grading app. But really though: shouldn’t we be having this discussion? Doesn’t Mr. Garrison have a point?

Teacher concerns about grading/scoring writing with computers are not new, and one of the nice things about having kept a blog so long is I can search and “recall” some of these past discussions. Back in 2005, I had a post about NCTE coming out against the SAT writing test and machine scoring of those tests. There was also a link in that post to an article about a sociologist at the University of Missouri named Edward Brent who had developed a way of giving students feedback on their writing assignments. I couldn’t find the original article, but this one from the BBC in 2005 covers the same story. It seems like it was a tool developed very specifically for the content of Brent’s courses and I’m guessing it was quite crude by today’s standards. I do think Brent makes a good point on the value of these kinds of tools: “It makes our job more interesting because we don’t have to deal so much with the facts and concentrate more on thinking.”

About a decade ago, I also had a couple of other posts about machine grading, both of which grew out of discussions from the now mostly defunct WPA-L. There was this one from 2012, which included a link to a New York Times article about Educational Testing Service’s product “e-rater,” “Facing a Robo-Grader? Just Keep Obfuscating Mellifluously.” The article features Les Perelman, who was the director of writing at MIT, demonstrating ways to fool e-rater with nonsense and inaccuracies. At the time, I thought Perelman was correct, but I also thought a good argument could be made that if a student was smart enough to fool e-rater, maybe they deserved the higher score.

Then in 2013, there was another kerfuffle on WPA-L about machine grading that involved a petition drive at the website humanreaders.org against machine grading. In my post back then, I agreed with the main goal of the petition: that “Machine grading software can’t recognize things like a sense of humor or irony, it tends to favor text length over conciseness, it is fairly easy to circumvent with gibberish kinds of writing, it doesn’t work in real world settings, it fuels high stakes testing, etc., etc., etc.” But I also had some questions about all that. I made a comparison between these new tools and the initial resistance to spell checkers, and then I also wrote this:

As a teacher, my least favorite part of teaching is grading. I do not think that I am alone in that sentiment. So while I would not want to outsource my grading to someone else or to a machine (because again, I teach writing, I don’t just assign writing), I would not be against a machine that helps make grading easier. So what if a computer program provided feedback on a chunk of student writing automatically, and then I as the teacher followed behind those machine comments, deleting ones I thought were wrong or unnecessary, expanding on others I thought were useful? What if a machine printed out a report that a student writer and I could discuss in a conference? And from a WPA point of view, what if this machine helped me provide professional development support to GAs and part-timers in their commenting on students’ work?

By the way, an ironic/odd tangent about that post: the domain name humanreaders.org has clearly changed hands. In 2013, it looked like this (this link is from the Internet Archive): basically, a petition form. The current site domain humanreaders.org redirects to this page on some content farm website called we-heart.com. This page, from 2022, is a list of the “six top online college paper writing websites today.”

Anyway, let me state the obvious: I’m not at all suggesting an AI application to replace all teacher feedback, as Mr. Garrison imagines. Besides the fact that it wouldn’t be “right” no matter how you twist the ethics of it, I don’t think it would work well– yet. Grading/commenting on student writing is my least favorite part of the job, so I understand where Mr. Garrison is coming from. Unfortunately though, reading/ grading/ commenting on student writing is essential to teaching writing. I don’t know how I can evaluate a student’s writing without reading it, and I also don’t know how to help students think about how to revise their writing (and, hopefully, learn how to apply these lessons and advice to writing these students do beyond my class) without making comments.

However, this is A LOT of work that takes A LOT of time. I’ve certainly learned some things that make grading a bit easier than it was when I started. For example, I’ve learned that less is more: marking up every little mistake in a paper and then writing a really long end comment is a waste of time, because it confuses and frustrates students and it literally takes longer. But it still takes me about 15-20 minutes to read and comment on each long-ish student essay (typically a bit shorter than this blog post). So in a full writing class of 25 students, it takes me 8-10 hours to completely read, comment on, and grade all of their essays; multiply that by two or three or more (since I’m teaching three writing classes a term), and it adds up pretty quickly. Plus we’re talking about student writing here. I don’t mind reading it, and students often have interesting and inspiring observations, but by definition, these are writers who are still learning and who often have a lot to learn. So this isn’t like reading The New Yorker or a long novel or something you can get “lost” in as a reader. This ain’t reading for fun– and it’s also one of the reasons why, after reading a bunch of student papers in a day, I’m much more likely to just watch TV at night.

So hypothetically, if there was a tool out there that could help me make this process faster, easier, and less unpleasant, and if this tool also helped students learn more about writing, why wouldn’t I want to use it?

I’ve experimented a bit with ChatGPT with prompts along the lines of “offer advice on how to revise and improve the following text” and then pasting in a student essay. The results are a mix of (IMO) good, bad, and wrong, and mostly written in the robotic voice typical of AI writing. I think students would have a hard time sorting through these mixed messages. Plus I don’t think there’s a way (yet) for ChatGPT to comment on specific passages in a piece of student writing: that is, it can provide an overall end comment, but it cannot comment on individual sentences and paragraphs and have those comments appear in the margins like the comment feature in Word or Google Docs. Like most writing teachers, I do a lot of my commenting in the margins, so an AI that can’t do that (yet) at all just isn’t that useful to me.
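
That said, margin comments seem like more of a tooling gap than a fundamental limit. Here’s a hypothetical sketch in Python (the function names and the JSON format are mine, not any real product’s): ask the model to return its feedback as quote/comment pairs, then anchor each comment to the quoted passage’s position in the essay, the way Word or Google Docs anchors a comment to a highlighted span.

```python
import json

def anchor_comments(essay: str, model_json: str):
    """Map model feedback onto character offsets in the essay, so each
    comment could be rendered in the margin next to the passage it is
    about. `model_json` is assumed to be a JSON list of objects like
    {"quote": "...", "comment": "..."}."""
    anchored = []
    for item in json.loads(model_json):
        start = essay.find(item["quote"])
        if start == -1:
            continue  # model misquoted the essay; skip rather than mis-anchor
        anchored.append({"start": start,
                         "end": start + len(item["quote"]),
                         "comment": item["comment"]})
    return anchored

# Simulated model output -- no API call here:
essay = "My essay argues that school should start later. Many studys agree."
feedback = '[{"quote": "Many studys agree.", "comment": "Which studies? Cite one."}]'
print(anchor_comments(essay, feedback))
```

The obvious catch is that a model won’t always quote the essay verbatim, which is why the sketch skips anything it can’t find rather than mis-anchoring it.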

But the key phrase there is “yet,” and it does not take a tremendous amount of imagination to figure out how this could work in the near future. For example, what if I could train my own grading AI by feeding it a few classes’ worth of previous student essays with my comments? I don’t know, logistically, how that would work, but I am willing to bet that with enough training, a Krause-centric version of ChatGPT would anticipate most of the comments I would make myself on a student writing project. I’m sure it would be far from perfect, and I’d still want to do my own reading and evaluation. But I bet this would save me a lot of time.
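
For what it’s worth, assembling that training data would be the easy half. A sketch, assuming something like OpenAI’s fine-tuning setup (a JSONL file of prompt/completion pairs; the exact field names depend on the service and version, so treat them as placeholders):

```python
import json

# Hypothetical sketch: turn past (student draft, my comment) pairs into a
# JSONL training file. The prompt/completion schema below is only an
# illustration of the general shape fine-tuning services expect.
def build_training_file(pairs, path="krause_grading.jsonl"):
    with open(path, "w", encoding="utf-8") as f:
        for draft, comment in pairs:
            example = {
                "prompt": f"Comment on this student draft:\n\n{draft}\n\n###\n\n",
                "completion": " " + comment,
            }
            f.write(json.dumps(example) + "\n")
    return path

pairs = [
    ("The thesis of my paper is school lunches.",
     "A thesis needs a claim, not just a topic: what about school lunches?"),
]
print(build_training_file(pairs))
```

The hard part would be everything else: collecting enough pairs, scrubbing student names, and deciding whether my past comments are even the right target to imitate.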

Maybe, some time in the future, this will be a real app. But there’s another use of ChatGPT I’ve been playing around with lately, one I hesitate to try but one that would both help some of my struggling students and save me time on grading. I mentioned this in my first post about using ChatGPT to teach way back in December. What I’ve found in my ChatGPT noodling (so far) is that if I take a piece of writing that has a ton of errors in it (incomplete sentences, punctuation in the wrong place, run-on/meandering sentences, stuff like that– all very common issues, especially for first year writing students) and prompt ChatGPT to revise the text so it is grammatically correct, it does a wonderful job. It doesn’t change the meaning or argument of the writing– just the grammar. It generally doesn’t make different word choices and it certainly doesn’t make the student’s argument “smarter”; it just arranges everything so it’s correct.
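
For the curious, the prompt doesn’t need to be fancy. A sketch using the shape of OpenAI’s chat API (the instruction wording and model choice here are just illustrations, not recommendations): the system message does the work of limiting the revision to grammar.

```python
# Sketch of a grammar-only revision request. Building the messages runs
# as-is; the actual API call at the bottom is commented out because it
# requires an OpenAI account and key.
def grammar_only_messages(student_text: str):
    return [
        {"role": "system",
         "content": ("Revise the following text so it is grammatically "
                     "correct. Do not change the meaning, the argument, "
                     "or the word choices any more than necessary.")},
        {"role": "user", "content": student_text},
    ]

messages = grammar_only_messages(
    "Because the essay was late. The teacher she took off points, which wasnt fair."
)
print(messages[0]["content"])

# import openai
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# print(response.choices[0].message.content)
```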

That might not seem like much, but for a lot of students who struggle with getting these basics right, using ChatGPT like this could really help. And to paraphrase Edward Brent from way back in 2005, if students could use a tool like this to at least deal with basic issues like writing more or less grammatically correct sentences, then I might be able to spend more time concentrating on the student’s analysis, argument, use of evidence, and so forth.

And yet– I don’t know, it even feels to me like a step too far.

I have students who have diagnosed learning difficulties of one sort or another who show me letters of accommodation from the campus disability resource center which specifically tell me I should allow them to use Grammarly in their writing process. I encourage students to go to the writing center all the time, in part because I want my students– especially the struggling ones– to sit down with a consultant who will help them go through their essays so they can revise and improve them. I never have a problem with students wanting to get feedback on their work from a parent or a friend who is “really good” at writing.

So why does it feel like encouraging students to try this in ChatGPT is more like cheating than it does for me to encourage students to be sure to spell check and to check out the grammar suggestions made by Google Docs? Is it too far? Maybe I’ll find out in class next week.

The Problem is Not the AI

The other day, I heard the opening of this episode of the NPR call-in show 1A, “Know It All: ChatGPT In the Classroom.” It opened with this recorded comment from a listener named Kate:

“I teach freshman English at a local university, and three of my students turned in chatbot papers written this past week. I spent my entire weekend trying to confirm they were chatbot written, then trying to figure out how to confront them, to turn them in as plagiarist, because that is what they are, and how I’m going penalize their grade. This is not pleasant, and this is not a good temptation. These young men’s academic careers now hang in the balance because now they’ve been caught cheating.”

Now, I didn’t listen to the show for long beyond this opener (I was driving around running errands), and based on what’s available on the website, the discussion also included information about incorporating ChatGPT into teaching. Also, I don’t want to be too hard on poor Kate; she’s obviously really flustered, and I am guessing there were a lot of teachers listening to Kate’s story who could very personally relate.

But look, the problem is not the AI.

Perhaps Kate was teaching a literature class and not a composition and rhetoric class, but let’s assume whatever “freshman English” class she was teaching involved a lot of writing assignments. As I mentioned in the last post I had about AI and teaching with GPT-3 back in December, there is a difference between teaching writing and assigning writing. This is especially important in classes where the goal is to help students become better at the kind of writing skills they’ll need in other classes and “in life” in general.

Teaching writing means a series of assignments that build on each other, that involve brainstorming and prewriting activities, and that include activities like peer reviews, discussions of revision, reflection from students on the process, and so forth. I require students in my first year comp/rhet classes to “show their work” through drafts, in a way similar to how they’d be expected to in an Algebra or Calculus course. It’s not just the final answer that counts. In contrast, assigning writing is when teachers give an assignment (often a quite formulaic one, like write a 5 paragraph essay about ‘x’) with no opportunities to talk about getting started, no consideration of audience or purpose, no interaction with the other students who are trying to do the same assignment, and no opportunity to revise or reflect.

While obviously more time-consuming and labor-intensive, teaching writing has two enormous advantages over only assigning writing. First, we know it “works” in that this approach improves student writing– or at least we know it works better than only assigning writing and hoping for the best. We know this because people in my field have been studying this for decades, despite the fact that there are still a lot of people just assigning writing, like Kate. Second, teaching writing makes it extremely difficult to cheat in the way Kate’s students have cheated– or maybe cheated. When I talk to my students about cheating and plagiarism, I always ask “why do you think I don’t worry much about you doing that in this class?” Their answer typically is “because we have to turn in all this other stuff too” and “because it would be too much work,” though I also like to believe that because of the way the assignments are structured, students become interested in their own writing in a way that makes cheating seem silly.

Let me just note that what I’m describing has been the conventional wisdom among specialists in composition and rhetoric for at least the last 30 (and probably more like 50) years. None of this is even remotely controversial in the field, nor is any of this “new.”

But back to Kate: certain that these three students turned in “chatbot papers,” she spent the “entire weekend” working to prove these students committed the crime of plagiarism and they deserve to be punished. She thinks this is a remarkably serious offense– their “academic careers now hang in the balance”– but I don’t think she’s going through all this because of some sort of abstract and academic ideal. No, this is personal. In her mind, these students did this to her and she’s going to punish them. This is beyond a sense of justice. She’s doing this to get even.

I get that feeling, that sense that her students betrayed her. But there’s no point in making teaching about “getting even” or “winning” because as the teacher, you create the game and the rules, you are the best player and the referee, and you always win. Getting even with students is like getting even with a toddler.

Anyway, let’s just assume for a moment that Kate’s suspicions are correct and these three students handed in essays created entirely by ChatGPT. First off, anyone who teaches classes like “Freshman English” should not need an entire weekend or any special software to figure out if these essays were written by an AI. Human writers– at all levels, but especially comparatively inexperienced human writers– do not compose the kind of uniform, grammatically correct, and robotically plodding prose generated by ChatGPT. Every time I see an article with a passage of text that asks “was this written by a robot or a student,” I always guess right– well, almost always I guess right.

Second, if Kate did spend her weekend trying to find “the original” source ChatGPT used to create these essays, she certainly came up empty handed. That was the old school way of catching plagiarism cheats: you look for the original source the student plagiarized and confront the student with it, court room drama style. But ChatGPT (and other AI tools) do not “copy” from other sources; rather, the AI creates original text every time. That’s why there have been several different articles crediting an AI as a “co-author.”

Instead of wasting a weekend, what Kate should have done is called each of these students into her office or taken them aside one by one in a conference and asked them about their essays. If the students cheated, they would not be able to answer basic questions about what they handed in, and 99 times out of 100, the confronted cheating student will confess.

Because here’s the thing: despite all the alarm out there that all students are cheating constantly, my experience has been that the vast majority do not cheat like this, and they don’t want to cheat like this. Oh sure, students will sometimes “cut corners” by looking over at someone else’s answers on an exam, or maybe by adding a paragraph or two from something without citing it. But in my experience, the kind of over-the-top cheating Kate is worried about is extremely rare. Most students want to do the right thing by doing the work, trying to learn something, and trying their best– plus students don’t want to get in trouble for cheating either.

Further, the kinds of students who do try to blatantly plagiarize are not “criminal masterminds.” Far from it. Rather, students blatantly plagiarize when they are failing and desperate, and they are certainly not thinking of their “academic careers.” (And as a tangent: seems to me Kate might be overestimating the importance of her “Freshman English” class a smidge).

But here’s the other issue: what if Kate actually talked to these students, and what if it turned out they did not realize using ChatGPT was cheating, or they used ChatGPT in a way that wasn’t significantly different from getting some help from the writing center or a friend? What do you do then? Because– and again, I wrote about this in December— when I asked students to use GPT-3 (OpenAI’s software before ChatGPT) to write an essay and to then reflect on that process, a lot of them described the software as being a brainstorming tool, sort of like a “coach,” and not a lot different from getting help from others in peer review or from a visit to the writing center.

So like I said, I don’t want to be too hard on Kate. I know that there are a lot of teachers who are similarly freaked out about students using AI to cheat, and I’m not trying to suggest that there is nothing to worry about either. I think a lot of what is being predicted as the “next big thing” with AI is either a lot further off in the future than we might think, or it is in the same category as other famous “just around the corner” technologies like flying cars. But no question that this technology is going to continue to improve, and there’s also no question that it’s not going away. So for the Kates out there: instead of spending your weekend on the impossible task of proving that those students cheated, why not spend a little of that time playing around with ChatGPT and seeing what you find out?

The Year That Was 2022 (turning some corners?)

If 2020 was horrible and 2021 was, I don’t know, what?, then I think the best description of 2022 was “shows improvement.”

My first prediction of what was to come in 2022 (which I made in that last post of 2021) turned out to be wrong: we did not go to the MLA convention in Washington, D.C. because Covid numbers (oh hi, Omicron!) were through the roof. MLA’s approach to dealing with Covid was remarkably reasonable. As I understand it (from what my wife said, since she was the one participating), the conference organizers told folks if they still wanted to present f2f they could (because it was too late for MLA to cancel the whole thing), but if people wanted to present electronically via synchronous conferencing software, then they could do that instead. All the panel chairs/organizers had to do was give the MLA a link to how they were going to do it. In my opinion, that was a smart way to schedule and adjust a conference during Covid: let presenters figure out their own synchronous conferencing software instead of putting all the presentations and materials in a junky content management system behind a firewall. I wish my field’s conferences had taken this approach. Anyway, Annette did her presentation via Zoom with a typical conference audience; D.C. would have to come later.

January was the start of Annette’s and my own faculty research fellowships, and for me, that meant doing a whole lot of interviews of folks who had earlier participated in my “Online Teaching and the ‘New Normal'” survey, which is about the experiences of teaching online during Covid. I ended up doing around 37 of these interviews, and I’m still trying to figure out how I’m going to analyze the pile of transcripts I’ve got. The sun rose and I took a picture. Travel included Annette going on a trip with friends to Puerto Rico, and at about the same time, I went down to Orange Beach, Alabama, where I met up with my parents and my sisters to celebrate my father’s 80th birthday. Movies included the kind of forgettable Midnight Alley and the rest of The Beatles documentary Get Back!

February was work stuff– interviews and also some other writing, but also working off and on on my CCCCs presentation. I had been very much looking forward to going to the f2f conference in Chicago in March 2022, but that was (prematurely and wrongly, IMO) cancelled. I continued to make bread. Did more interviews. Saw (among many other things) Licorice Pizza and The Big Lebowski for about the 90th time.

March was the CCCCs Online, which was, um, unpleasant. I think this post from Mike Edwards (where he does quote me, actually) sums up things fairly well. Here’s also a link to my first and second posts about the conference. I won’t be attending this year because (for like the fifth time in a row) the theme for the conference has nothing to do with the kind of research and scholarship I do. But that’s okay. Maybe I’ll go again someday, maybe I won’t.

March also took us on the road to the Charleston, South Carolina area to do something that got us out of the cold for at least a while. We stopped in Charleston, West Virginia on the way (gross) and then spent a night in Durham, North Carolina to catch up with Rachel and Collin and a lovely meal out at a French restaurant they like. Then we spent a week at a condo on Seabrook Island. It was a pretty good get-away: we got some work done (we both did a lot of reading and writing things), went into Charleston a couple times (meh, it was nice I guess), went on a cool plantation tour, I attended (via Zoom) a department meeting while walking on the beach one nice day, and we did have some good food here and there too. It was all nice enough and I don’t rule out going again, but it wasn’t quite our thing, I don’t think. I started working on this Computers and Composition Online article based on my online teaching survey (more on that later too). Among other things, watched Painting With John on HBO, another season of Survivor, rewatched The French Dispatch.

April and more interviewing, more working on the CCO piece, and starting to work on the Computers and Writing Conference session. I was originally going to go to that (it was in Greenville, NC), but life/home plans got in the way. So once again I was online, and also once again, it was “on demand,” which is to say that I also ended up presenting to the online equivalent of an empty room– not the first time I’ve done that, but still, a group like computers and writing should do better. I posted my “talk” here. I’m afraid I will probably not be able to be there face to face for the 2023 CWCON at UC-Irvine; that trip is still TBA, though those organizers seem more committed to hosting a viable online experience.  In April, I saw probably the best movie I’ve seen this year, Everything Everywhere All At Once, and listened to (or started listening to) a book by Johann Hari called Stolen Focus which I’m going to assign in WRTG 121 this coming winter term. Started doing yard stuff, Annette got a kayak, I baked still more bread. Oh, also saw a movie called Jesus Shows You the Way to the Highway that was bonkers.

May and more interviewing, more working on the CCO piece, the CWCON 22 happened (I wasn’t as involved as I should have been, but I did poke around at some other “on demand” materials that were interesting), started planting stuff in the garden, started golfing some, ate a fair amount of asparagus, etc. And then at the end of the month, we went up north to stay at a fantastic house on Big Glen Lake. We were planning on going back there in 2023, but after a series of events I don’t understand (was the house sold? is there a problem with the rental company? something else?), we’re staying someplace different. Stay tuned for early June 2023. Among other things, we watched Gog.

By June, I started having some “interesting” discussions with the editors of the CCO about my article. Let’s just say that the reviewer involved in the process was “problematic” and leave it at that. Eventually, I think the editors were able to give me some good direction that helped me make this into a good piece (IMO), but it wasn’t easy. More interviews, but that was the last of them. There was more gardening, more going out for lunch while Will was visiting, more of “the work,” seeing movies, etc.

July was a lot of travel. We went to D.C.– I suppose because the trip in January was scrubbed– and then to New Haven to see Will, then to New York City via train for a couple of nights (saw our friend Annette, a kind of off-Broadway production of Little Shop of Horrors, and went walking on the High Line park and to the Whitney museum), then to Portland, Maine (only for a night– I’d go back for sure), and then to Bar Harbor and Acadia National Park. It was a really lovely trip. I think I am more fond of the grand “road trip” than Annette is, but she played along. After the cruise (see below), I believe I have two states left on my “having at least passed through” list: Rhode Island (which I figure we can tick off the next time we go out to visit Will) and North Dakota, which might require a more purposeful trip. Among other things this month, watched at least one Vincent Price movie.

August was more travel– and getting ready to teach too. We went to Iowa to celebrate my mother’s 80th birthday party, and then (of all things!) we went on a cruise to Alaska. Among the lessons learned from that trip: if you are going to take a cruise to Alaska, you need to go for longer than just the 5 nights we went. Highlights include actually touching ground in Ketchikan, Alaska (briefly) and a stop in the delightful town of Victoria, British Columbia. Then back here and getting ready for teaching again– for the first time in eight months.

September and EMU started up again– at least for about a week. Then the faculty went on strike, which was the first time that’d happened around here since 2006. I blogged about some of this back here. It was interesting being one of the old hands around here this time around. I got here in 1998, and by 2006, I think we had been on strike or close to it twice before, and the 2006 strike was “the big one.” So 16 years between strikes was a long time. It was disruptive and chaotic and frustrating, but also necessary and probably the most justifiable strike I’ve experienced, and we did end up getting a better deal than we would have otherwise. Oh, and I need to note this here (since I will someday look back at this post and go “oh yeah, that’s right!”): One of the things that really seemed to make the administration want to settle things up is that Michael Tew, who was a vice provost and one of the four or five people who run stuff at EMU, was busted for masturbating while he was driving around naked in Dearborn with all of the doors and the roof off of his Jeep. Classy. Anyway, there was teaching on either side of the few days we had off on striking, and it was kind of a rough start of the term for me. I have said and written this elsewhere: it was like getting back on a bike after having not ridden one in a long time in that I remembered how to do it, but I wasn’t quite sure how to go too fast or to turn too quickly or whatever. My students in my f2f class (first year writing) seemed to feel mostly the same way. Among other things, we watched Shakes the Clown.

A word about Covid here: by the end of the first month or so of the semester, and after a summer of travel that included a LOT of potentially infectious places like crowded museums, restaurants, planes, trains, and a cruise ship, I still haven’t had Covid– or if I have had it, I never knew it (and that’s perhaps most likely). I’m not saying it is “over” or it’s nothing at all to worry about, and I’m fully vaxxed up (and I got a flu shot too). But for the most part, it feels like Covid is mostly over.

October was more work stuff with a trip up north in the middle of the month. It was both nice and not: “nice” because it’s always good to get away, we caught up with friends who live up there, saw some pretty leaves, had a Chubby Mary, etc., but “not” because the hot tub at the place we rented didn’t work (and look, that was the point of renting that place) and it was cold and rainy and even snowy. And as is so often the case in Michigan, it was stunningly beautiful weather for like 10 days after our trip, both up there and down here. Also, in a sign of Covid not being over but also not being worried about much anymore: Halloween was back to full-on trick or treating– no delivery tubes, for example.

November started off with politics, and that turned out great in Michigan, pretty okay everywhere else. Yeah, the Republicans didn’t do as well as they should have, but they still control the House– well, they have more votes. I don’t think there’s going to be a lot of “control” in the next year or so. Lots of teaching stuff and work stuff, some pie making, and then to Iowa for the Krause Thanksgiving-Christmas get-together.

December and things got a little more interesting around here. I blogged some about ChatGPT and having my students in a class use GPT-3 for an assignment. That post got a lot of hits. If I wasn't already kind of committed to working on the transcripts of the interviews of people teaching online during Covid, I might very well spend some time and effort on researching this stuff. It's quite interesting, and given the completely unnecessary and goofy level of freak-out I've seen on social media about it, it's also necessary work. Oh, and that Computers and Composition Online article finally came out. I'll have to read some of the other articles in this issue, too. Then the semester was over and it was time for a trip to the in-laws, who moved into a smaller place. So new adventures for them, and for us too: we stayed at a pretty nice airbnb, actually rented a car, explored new restaurants and dressy dining rooms. And still a fair amount of damage from Ian.

Well, that’s it– at least the stuff I’m willing to write down here.

AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is my students and I have spent the last month or so noodling around with software and reading about both the potentials and dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche's recently published commentary with the clickbaity title "The College Essay Is Dead" in The Atlantic could not be better– or worse? It's not the first article I've read this semester along these lines, that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it's not even the only one in The Atlantic this week! Daniel Herman's "The End of High-School English" takes a similar tack. In both cases, they claim, GPT-3 will make the "essay assignment" irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And we still had to edit the different pieces together, write transitions between AI generated chunks of text, and so forth.

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool.  These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true of Paul Fyfe's "How to cheat on your final paper: Assigning AI for student writing." The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write ("cheat") with AI (GPT-2 in the case of Fyfe's article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe's article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”

I think this limitation (along with the fact that GPT-3 and ChatGPT are not capable of searching the internet) is kind of a deal-breaker for using GPT-3 as a plagiarism tool in any kind of research writing class. It certainly would not get students far in most sections of freshman comp, where they're expected to quote from other sources.
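For readers curious about what "predicting what words should come next" from "statistical regularities" actually looks like, here's a toy sketch of the idea. This is my own illustrative example, not how GPT-3 works– real LLMs use huge neural networks over subword tokens, not word-count tables– but the underlying intuition of sampling likely continuations is the same, and it shows why such a system can produce fluent text without any notion of whether it's true:

```python
import random
from collections import defaultdict

# A tiny "corpus." Real models train on huge reams of web text.
corpus = (
    "the essay is due friday . the essay is short . "
    "the deadline is friday ."
).split()

# Statistical regularities: record which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, length, seed=0):
    """Extend `start` by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

The generator has no idea what an essay or a deadline is– it only knows which words tend to follow which– so its output is fluent but unmoored from facts, which is the "fluent bullshit" problem Vincent describes, in miniature.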

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts with writing assignments that are specific and that require an investment in things like prewriting, along with a series of smaller assignments and activities that "scaffold" the larger writing assignment. And ideally, teaching writing includes things like peer review and other interventions in the drafting process, and at least an acknowledgment that revision is a part of writing.

Poorly designed writing assignments tend to read like exactly the kind of prompts you'd enter into GPT-3. The results are definitely impressive, but I don't think they're quite good enough for a would-be cheater to pass off as their own work. For example, I asked ChatGPT (twice) to "write a 1000 word college essay about the theme of insanity in Hamlet" and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays fall significantly short of the 1000 word requirement, they both kind of read like… well, like a robot wrote them. I think most instructors who received one of these essays from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and I then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing and it changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that there is only so much experimenting I could do with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com's "pay as you go" payment plan, I might learn more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

Higher Education Didn’t Cause the Rise of MAGA Conservatism and It is a Major Part of the Only Possible Solution

As a college professor who also follows politics fairly closely, I’ve been noticing a lot of commentaries about how universities are making the political divide in America worse. I think that’s ridiculous (and the tl;dr version of this post is college educated people are leaving the Republican party not because college “makes” people into Democrats, but because the party has gone crazy). I guess these ideas have been in the air for a couple years now, though it’s gotten a bit more intense lately.

The version of this most in my mind now is Will Bunch's After the Ivory Tower Falls: How College Broke the American Dream and Blew Up Our Politics—and How to Fix It, which I finished listening to a couple of weeks ago. There's a lot to unpack in that book about things he got right and wrong (IMO), and I completely agree with this review in The New York Times. But in broad terms, Bunch argues higher education is the primary cause of political division and the rise of "MAGA" conservatism in the United States. Universities perpetuate a rigged meritocracy, they've grown increasingly liberal (I guess), and they have become horrifically expensive, all of which puts college out of reach for a lot of the same working class/working poor people who show up at Trump rallies.

This kind of thing seems to be in the air nowadays. For example, there's this recent article from New York magazine, "How the Diploma Divide Is Remaking American Politics" by Eric Levitz. There's no question that there have been shifts in how education aligns with political parties. Levitz notes that Kennedy lost the college-educated vote by a two-to-one margin, while Biden lost the non-college-educated vote by a two-to-one margin. Levitz goes on to argue, with fairly convincing evidence, that higher education as an experience does tend to present people with similar ideas and concepts about things like science, art, ethics, and the like, and those tend to be the ideas and concepts embraced by people who identify as Democrats.

Or at least identify more as Democrats now– because as both Bunch and Levitz point out, college graduates were about equally split between the two parties until about 2004. In fact, as this 2015 article from the Pew Research Center discusses, more college graduates identified as Republicans between 1992 (where the data in that article begins) and 2004. And I’m old enough to vividly remember the presidential campaign between Al Gore and George W. Bush in 2000 and how one of the common complaints among undecided voters was Bush and Gore held the same positions on most of the major issues. How times have changed.

Anyway, U.S. universities did not tell state legislatures and voters during the Reagan administration to cut funding to what once were public universities; politicians and voters did that. Higher education did not tell corporate America that a bachelor's degree should be the required credential to apply for an entry-level white collar position, even when there seems little need for that kind of credential. That standard was put in place by corporate America itself, and corporate America is led by the same people who said we shouldn't support higher education with taxes. In other words, the systematic defunding of public higher education has been a double-whammy on poor people. The costs of college are putting it financially out of the reach of the kinds of students who could most benefit from a degree, and at the same time, it makes it easier for parents with plenty of money to send their kids (even the ones who did poorly in high school) to college so they can go on to a nice and secure white collar job.

I'm not saying that higher education isn't a part of the problem. It is, and by definition, granting students credentials perpetuates a division between those who have a degree and those who do not. Universities have nothing to do with company policies that require salaried employees to have a bachelor's degree in something, but universities are also very happy to admit all those students who have been told their entire lives that this is the only option they have.

But the main cause of the political division in this country? I'm not even sure if it's in the top five. For starters– and Bunch acknowledges this– the lack of decent health care and insurance is at least as responsible for the divide between Americans as anything happening in higher education. A lot of Americans have student loan debt of course, but even more have crippling medical debt. Plus our still unfair and broken health care system enables/causes political division in "spin-off" ways like the deaths and ruined lives from opioids and the Covid pandemic, both of which impact people who lack a college degree and who are poor at a higher rate. Plus the lack of access to both health care and higher education for so many poor people is both a symptom and a result of an even larger cause of political division in the U.S., which is the overall gap between rich and poor.

Then there are the changes in manufacturing in the U.S. A lot of good factory jobs that used to employ the people Bunch talks about– including white guys with just a high school diploma who voted for Obama twice and then Trump– moved to China and/or disappeared because of technological innovations. One particular example from Bunch's book is of a guy who switched from an Obama voter to a wildly enthusiastic MAGA Trump-type. Bunch wants to talk about how he became disillusioned with a Democratic party catering to educated and elite voters. That's part of it, sure, but the fact that this guy used to work for a factory that made vinyl records and music CDs probably was a more significant factor in his life. I could go on, but you get the idea.

But again, I think these arguments that higher ed has caused political polarization because there are now more Democrats with college degrees than Republicans are backwards. The reason why there are fewer Republicans with college degrees now than there used to be is because the GOP, which has been moving steadily right since Bush II, has gone completely insane under Trump.

There have been numerous examples of what I'm talking about since around 2015 or so, but we don't need to look any further than the current events of when I'm writing this post. Paul Pelosi, who is the husband of Nancy Pelosi, the Speaker of the House of Representatives, was violently attacked and nearly killed by a man who broke into the Pelosis' San Francisco home. The intruder, who is clearly deranged in a variety of different ways, appears to have been inspired to commit this attack by a variety of conspiracy theories popular with the MAGA hardcore, including the idea that the election was fixed and that the leaders of the Democratic party in the US are intimately involved in an international child sex ring.

US Senate minority leader Mitch McConnell and House minority leader Kevin McCarthy condemned the attack after it happened on Friday, but just a few days later, Republicans started to make false claims about it. For example, one theory has it that the guy who attacked Paul Pelosi was actually a male prostitute and it was a deal gone wrong. Others said the story just "didn't add up," and used it as an example of how Democrats are soft on crime. Still other Republicans– including Kari Lake, the GOP candidate for governor in Arizona, and current Virginia Governor Glenn Youngkin– made jokes on the campaign trail about what was a violent assault. And of course, Trump is fueling these wacko theories as well.

Now, I'm not saying that college graduates are "smarter" than those who don't have college degrees, and most of us who are college graduates still have a relatively narrow amount of knowledge and expertise. But besides providing expertise that leads to professions– like being an engineer or a chemist or an elementary school teacher or a writer or whatever– higher education also provides students at least some sense of cultural norms (as Levitz argues) about things like "Democracy," the value of science and expertise, ethics, history, and art, and it equips students with the basic critical thinking skills that allow people to be better able to spot the lies, cons, and deceptions that are at the heart of MAGA conservatism.

So right now, I think people who are registered Republicans (I'm not talking about independents who lean conservative– I'll come back to that in a moment) basically fall into three categories. There are people who still proudly declare they are Republicans but who are also "never Trumpers," though never Trumpers no longer have any candidates representing their views. Then there are those Republicans who actually believe all this stuff, and I think most of these people are white men (and their families) who have a high school diploma and who were working some kind of job (a factory making records, driving trucks, mining coal, etc.) that has been "taken away" from them. These people have a lot of anger, and Trump taps into all that, stirs it up even more, and enables the kind of conspiracy thinking and racism that makes people loyal not to the Republican party but to Trump as a charismatic leader. It's essentially a cult, and the cult leaders are a whole lot more culpable than the followers they brainwashed.

Then there are Republicans who know all the conspiracies about the 2020 election and everything else are just bullshit but they just "go along with it," maybe because they still agree with most of the conservative policies and/or maybe they're just too attached to the party to leave. But at the same time, it's hard to know what these people actually believe. Does Trump believe his own bullshit? Hard to say. How about Rudy Giuliani or Lindsey Graham or Kevin McCarthy? Sometimes, I think they know it's all a con, and sometimes I don't.

Either way, that’s why college grads aren’t joining the Republican party– and actually, why membership in the Republican party as a whole has gone down, even among people without a college degree. It certainly isn’t because people like me, Democrat-voting college professors, have “indoctrinated” college students or something. Hell, as many academic-types have said long before me, I can’t even get my students to routinely read the syllabus and complete assignments correctly; you think that I have the power to convince them that the Democrats are always right? I wish!

In other words, these would-be Republicans are not becoming Democrats; rather, they are contributing to the growing number of independent voters, though ones who tend to vote for Republican candidates. I've seen this shift in my extended family as my once Republican in-laws and such talk about how they are no longer in the party. My more conservative relatives didn't vote for Trump in 2020 and probably won't in 2024 either, but that doesn't mean they are going to vote for Biden.

One last thing: I’m not going to pretend to have the answer for how we get out of the political polarization that’s going on in this country, and I have no idea how we can possibly “un-brainwash” the hardcore MAGA and Qanon-types. I think these people are a lost cause, and I don’t think any of this division is going away as long as Trump is a factor. But there is no way we are ever going to get back to something that seems like “normal” without more education, and part of that means college.

On the Eve of a (Possible) Strike, Thinking Back on the Strike of 2006

We started classes here at EMU on Monday, August 29, and we might be halting them– at least all the ones taught by faculty– on Thursday, September 1, because that’s when the EMU-AAUP faculty union contract expires. Here’s a link to a story about all this on the Detroit NBC affiliate’s web site which kind of gets it right, but not quite.

I think the main sticking point right now is trying to figure out a way to give everyone a modest raise but that also covers a steep increase in health insurance. That is not an easy problem to solve at all because there are so many variables in play. For example, our only son is turning 25 and thus just about done with being eligible for our insurance anyway, and both my wife and I are in the “senior faculty” category and thus a lot more secure and settled in our positions. So for me, a contract that pays 3-4% a year plus some money to offset the increase in insurance premiums is fine. But for someone without that level of seniority (and the pay raises that accompany that) or who has many more dependents, especially if some of those children, spouses, other insured family members have some kind of condition that requires more elaborate (and expensive) insurance, the deal that EMU administration is proposing– even as they characterize it as an “up to 8% raise for most faculty”– really could be a pay cut for a lot of folks.

Anyway, I was thinking about some of that on my first day of teaching Tuesday and as I explained to my students that I might be on strike on Thursday, and I realized that the last time the EMU faculty went on strike was way back in the fall of 2006. This was before things like Facebook or Twitter were much of a thing, and I spent most of the energy I now spend on social media just on blogging here. And back during the strike, I blogged about it A LOT.

I don’t even know how many posts I wrote about all this and labeled The Strike of 2006— maybe 40? Maybe more? The chronology is a bit wonky here, so the “beginning” (back in August 2006) starts on the bottom of page 5 of this archive. It’s not worth rehashing all of it, but there are some interesting things. Once again, healthcare costs were the sticking point, which also once again reminds me that if we had a version of the kind of universal/government run health care program that’s available in most of the other countries in the world, or if we could just extend Medicare to everyone and not just people over 65, we probably would not have gone on strike back then, and we certainly wouldn’t go on strike now. But I digress.

More problematically perhaps, the other similarity between then and now seems to be the approach to negotiations taken by the administration. They have once again hired Dykema's James P. Greene, who, even before the 2006 strike, was known around EMU as a "union busting" lawyer. I think he was the administration's main negotiator before 2006 as well (I recall being on strike a couple of times before 2006 when I believe Greene was in charge), and 2006 ended up being the ugliest strike in my time here. Back then, there were complaints from both sides of the table similar to what we have now: a lack of willingness to actually negotiate, a lot of sketchy numbers being presented (mostly by the administration), a lot of "we almost have a deal" until we don't, mediators, etc.

Hopefully, things will not turn as ugly as they did in 2006. For example, after the faculty had been out on strike for four days, EMU (from then-president John Fallon and BoR chair Karen Valvo) issued an ultimatum demanding (basically) that the faculty give up their childish strike and accept the administration's terms by 10 PM on September 6 "or else." Here's my blog post about that, and (thanks to the Wayback Machine) here's the administration's original press release on all this. Well, that move (IMO) backfired on the administration badly. Before that, a lot of faculty– including me– were starting to say to each other that maybe it'd be best to settle and get on with the school year. But that threat really pissed people off, and (a long story made much shorter) we ended up staying out on strike for about two weeks, then "suspended" the strike and went back to work while the university and the union went through a "fact finding" and arbitration process that didn't get resolved until the following spring. We actually ended up with a deal that was closer to what the faculty had originally asked for, but like I said, I'd just as soon avoid that.

One other difference I'm noticing this time around, at least in myself: I think the union/faculty is even more in the right this time. As I wrote here way back when, I thought both sides of the table were playing pretty "fast and loose" with some of the facts in the name of a pissing contest that they both hoped to win. There's still some of that going on, no question. But I think the administration is the one that's prolonging this thing this time.

I guess we'll see what the next 24 or so hours brings. Hopefully we'll have a deal, because a strike is not a "win" for anyone– not for our students of course, but not for the administration or the faculty either. And hopefully the administration remembers what happened the last time they tried these tough guy bullshit tactics.