Why I Use Google Docs to Teach Writing, Especially in the Age of AI

I follow a couple of different Facebook groups about AI, each of which has become a firehose of posts lately, a mix of cool new things and brand-new freakouts. A while back, someone in one of these groups posted about an app to track the writing process in a student’s document as a way of proving that the text was not AI. My response was “why not just use Google Docs?”

I wish I could be more specific than this, but I can’t find the original post or my comment on it; maybe it was deleted. Anyway, this person asked what I meant, and I explained it briefly, but then I said I was thinking about writing a blog post about it. Here is that post.

For those interested in the tl;dr version: I think the best way to discourage students from handing in work they didn’t create (be that from a paper mill, something copied and pasted from websites, or AI) is to teach writing rather than merely assigning writing. That’s not “my” idea; that’s been the mantra in writing studies for at least 50 years. Here’s another idea that isn’t new, and one you already know if you use and/or teach with Google Docs: it is a great tool for teaching writing because it helps with peer review and collaborative writing, and the version history feature helps me see a student’s writing process, from the beginning of the draft through revisions. And if a student’s draft goes from nothing to complete in one revision, well, then that student and I have a chat.

Continue reading “Why I Use Google Docs to Teach Writing, Especially in the Age of AI”

TALIA? This is Not the AI Grading App I Was Searching For

(My friend Bill Hart-Davidson unexpectedly died last week. At some point, I’ll write more about Bill here, probably. In the meantime, I thought I’d finish this post I started a while ago about the webinar on Instructify’s AI grading app. Bill and I had been texting/talking more about AI lately, and I wish I had had a chance to text/talk with him about this. Or anything else.)

In March 2023, I wrote a blog post titled “What Would an AI Grading App Look Like?” I was inspired by what I still think is one of the best episodes of South Park I have seen in years, “Deep Learning.” Follow this link for a detailed summary or look at my post from last year, but in a nutshell, the kids start using ChatGPT to write a paper assignment and Mr. Garrison figures out how to use ChatGPT to grade those papers. Hijinks ensue.

Well, about a month ago, at a time when I was up to my eyeballs in grading, I saw a webinar presentation from Instructify about their AI product called TALIA. The title of the webinar was “How To Save Dozens of Hours Grading Essays Using AI.” I missed the live event, but I watched the recording– and you can too, if you want– or at least you could when I started writing this. Much more about it after the break, but the tl;dr version is that this AI grading tool is not the one I am looking for (not surprisingly), and I think it would be a good idea for these tech startups to include people with actual experience teaching writing on their development teams.

Continue reading “TALIA? This is Not the AI Grading App I Was Searching For”

Workin’ 9 to 5 (sort of) and Other Adventures of All FY Writing/All the Time!

As I blogged about earlier this year, I’m doing something this semester that I have never done as a tenure-track professor: I’m teaching a full load (three sections) of first year writing. I’ve had semesters where I’ve taught multiple sections of the same class, but I think the last time I did that was in the early 2000s, when I taught two sections of a 300-level course while also having a course release to do quasi-administrative work. As I explained earlier, my current schedule is a fluke based on the circumstances this semester, and I jumped at the chance to just teach first year writing. In other words, this was my idea: I wanted to have one prep for a change of pace, and I also like to teach first year writing.

(Incidentally, when I was hired at EMU in 1998, my primary teaching assignments were an earlier version of this 300-level course and a graduate course on teaching with computers. Times and curriculums have changed, and I haven’t taught that 300-level class in eight years or that grad course in at least 15 years, maybe more.)

Having only one course to prepare– as opposed to three different classes– has been nice, and it’s especially nice that it’s first year composition, a course that I have literally been teaching regularly in my dreams for most of my life at this point. I’ve been able to keep all three sections on the same schedule, so with a bit of tweaking and customization for each section, it still is one prep. And not surprisingly, one prep is easier than three.

The downsides? Well, all three sections are f2f (as is the case with all of the first year writing courses at EMU), and all three sections are on Tuesdays and Thursdays. Now, I haven’t taught three f2f classes since I started teaching online for part of my load, and that was almost 20 years ago. I also haven’t taught this early in a while (my first section is at 9:30 in the morning), and I haven’t taught back-to-back sections with no break between them in a long time either. So on Tuesdays and Thursdays, I am in the office by 9 am and working pretty steadily until I’m done at 5 pm.

Because those days end up being nothing but teaching and preparing for teaching, I have also had to come into the office a lot more on other days during the week. I ran into an especially intense stretch in late January/early February when I had conferences with all 60 (or so) of my students– along with a bunch of other “life” appointments and family stuff. For just about two weeks back then, I was on campus and mostly in my office, almost all day each of those days.

I realize this isn’t a work schedule most people would complain about– and I’m not complaining, at least not exactly. It’s just a very different rhythm from teaching a mix of f2f and online. The upside of teaching a mix of f2f and online is it gives me a lot more scheduling flexibility for when I do things. I do most of my online teaching while at home and in pajamas or sweats, plus I can take a break once in a while to do laundry or something else that needs to be done around the house.

But if I’m not disciplined about scheduling when I do the work– planning, grading, and interacting with the class discussion boards– teaching asynchronously online can become an all day/all night thing where I’m constantly working in a not-so-efficient multitasking kind of way. So while teaching f2f means I’m spending a lot more time on campus, it does create at least a bit more separation between life and work. That’s a good thing.

And I do like teaching f2f– not really more than teaching online (I like doing that too), but I like it. I like the live performance of f2f teaching and after having taught a zillion sections of first year writing, I have a refined schtick. I like putting on the show three times a day right in a row.

I’ve also been struck by the differences in these three sections. It’s not news to me that different groups of students taking the same course can have very different personalities, dynamics, and responses to readings and assignments. But teaching the same thing to three different classes (back to back to back) makes this very visible. Without getting into any details, it’s pretty clear that these different sections are not equally capable.

It does get a little boring doing the same thing three times in a row. If I were scheduled to teach three sections of first year writing like this again, I would probably be okay with it. But I think I’d prefer two preps with an online class in the mix. Check back with me at the end of the semester to see if I feel the same way.


Starting 2024 With All First Year Writing/All the Time!

This coming winter term (what every other university calls spring term), I’m going to be doing something I have never done in my career as a tenure-track professor. I’m going to be teaching first year composition and only first year composition.  It’ll be quite a change.

When I came to EMU in 1998, my office was right next to that of a very senior colleague, Bob Kraft. Bob, who retired from EMU in 2004 and who passed away in December 2022, had come back to the department to teach after having been in some administrative positions for quite a while. We chatted often about teaching, EMU politics, and other regular faculty chit-chat. He was a good guy; he used to call me “Steve-O!”

Bob taught the same three courses every semester: three sections of a 300-level course called Professional Writing. It was a class he was involved in developing back in the early 1980s and I believe he assigned a course pack that had the complete course in it– and I mean everything: all the readings, in-class worksheets, the assignments, rubrics, you name it. Back in those days and before a university shift to “Writing Intensive” courses within majors, this was a class that was a “restricted elective” in lots of different majors, and we offered plenty of sections of it and similar classes. (In retrospect, the shift away from courses like this one to a “writing in the disciplines” approach/philosophy was perhaps a mistake both because of the way these classes have subsequently been taught in different disciplines and because it dramatically reduced the credit hour production in the English department– but all this is a different topic).

Anyway, Bob essentially did exactly the same thing three times a semester every semester, the same discussions, the same assignments, and the same kinds of papers to grade. Nothing– or almost nothing– changed. I’m pretty sure the only prep Bob had to do was change the dates on the course schedule.

I thought “Jesus, that’d be so boring! I’d go crazy with that schedule.” I mean, he obviously liked the arrangement and I have every reason to believe it was a good class and all, but the idea of teaching the same class the same way every semester for years just gave me hives. Of course, I was quite literally in the opposite place in my career: rather than trying to make the transition into retirement, I was an almost freshly-minted PhD who was more than eager to develop and teach new classes and do new things.

For my first 20 years at EMU (give or take), my workload was a mix of advanced undergraduate writing classes, a graduate course almost every semester, and various quasi-administrative duties. I have occasionally had semesters where I taught two sections of the same course, but most semesters, I taught three different courses– or two different ones plus quasi-admin stuff. I rarely taught first year composition during the regular school year (though I taught it in the summer for extra money while our son Will was still at home) because I was needed to teach the advanced undergrad and MA-level writing classes we offered. And this was all a good thing: I got to teach a lot of different courses, I got the chance to do things like help direct the first year writing program and coordinate our major and grad program, and I had the opportunity to work closely with a lot of MA students who have gone on to successful careers of their own.

But around six or seven years ago, the department (the entire university, actually) started to change and I started to change as well. Our enrollments have fallen across the board, but especially for upper-level undergraduate and MA level courses, which means instead of a grad course every semester, I tend to teach one a school year, along with fewer advanced undergrad writing classes, and now I teach first year writing every semester. One of the things I’ve come to appreciate about this arrangement is the students I work with in first year composition are different from the students I work with on their MA projects– but they’re really not that different, in the big picture of things.

And of course, as I move closer to thinking about retirement myself, Bob’s teaching arrangement seems like a better and better idea. So, scheduling circumstances being what they are, when it became clear I’d have a chance to just teach three sections of first year comp this coming winter, I took it.

We’ll see what happens. I’m looking forward to greatly reducing my prep time, both because this is the only course I’m teaching this semester (just three sections of it) and because first year writing is something I’ve taught and thought about A LOT. I’m also looking forward to experimenting with requiring students to use ChatGPT and other AI tools to at least brainstorm and copy-edit– maybe more. What I’m not looking forward to is just repeating the same thing three times in a row each day I teach. Along these lines, I am not looking forward to teaching three classes all on the same days (Tuesdays and Thursdays) and all face to face. I haven’t done that in a long time (possibly ever), because I’ve either taught two sections and been on reassigned time, or I have taught at least a third of my load online. And I’m also worried about keeping all three of these classes in sync. If one group falls behind for some reason, it’ll mess up my plans (this is perhaps inevitable).

What I’m not as worried about is all the essays I’ll have to read and grade. I’m well aware that the biggest part of the work for anyone teaching first year writing is all the reading, commenting on, and grading of student work, and I’ve figured out a lot over the years about how to do it. Of course, I might be kidding myself with this one….

So, What About AI Now? (A talk and an update)

A couple of weeks ago, I gave a talk and led a discussion called “So, What About AI Now?” That’s a link to my slides. The talk/discussion was for a faculty development program at Washtenaw Community College, organized by my friend, colleague, and former student, Hava Levitt-Phillips.

I covered some of the territory I’ve been writing about here for a while now, and I thought both the talk and the discussion went well. I think most of the people at this thing (it was over Zoom, so it was a little hard to read the room) had seen enough stories like this one on 60 Minutes the other night: artificial intelligence is going to be at least as transformative a technology as “the internet,” and there is not a zero percent chance that it could end civilization as we know it. All of which is to say we probably need to put the dangers of a few college kids using AI (badly) to cheat on poorly designed assignments into perspective.

I also talked about how we really need to question some of the more dubious claims in the MSM about the powers of AI, such as the article in the Chronicle of Higher Education this past summer, “GPT-4 Can Already Pass Freshman Year at Harvard.” I blogged about that nonsense a couple months ago here, but the gist of what I wrote there is that all of these claims of AI being able to pass all these tests and freshman year at Harvard (etc.) are wrong. Besides the fact that the way a lot of these tests are run makes the claims bogus (and that is definitely the case with this CHE piece), students in our classes still need to show up– and I mean that for both f2f and online courses.

And as we talked about at this session, if a teacher gives students some kind of assignment (an essay, an exam, whatever) that can be successfully completed without ever attending class, then that’s a bad assignment.

So the sense that I got from this group– folks teaching right now the kinds of classes where (according to a lot of the nonsense that’s been in the MSM for months) cheating with ChatGPT et al. was going to make it impossible to assign writing anymore, not in college and not in high school– is that it hasn’t been that big of a deal. Sure, a few folks talked about students who tried to cheat with AI and were easily caught, but for the most part it hadn’t been much of a problem. The faculty in this group seemed more interested in trying to figure out ways to make use of AI in their teaching than they were worried about cheating.

I’m not trying to suggest there’s no reason to worry about what AI means for the future of… well, everything, including education. Any of us who are “knowledge workers”– that is, teachers, professors, lawyers, scientists, doctors, accountants, etc. etc.– need to pay attention to AI because there’s no question this shit is going to change the way we do our jobs. But my sense from this group (and just the general vibe I get on campus and in social media) is that the freak-out about AI is over, which is good.

One last thing though: just the other day (long after this talk), I saw what I believe to be my first case of a student trying to cheat with ChatGPT– sort of. I don’t want to go into too many details since this is a student in one of my classes right now. But basically, this student (who is struggling quite a bit) turned in a piece of writing that was first and foremost not the assignment I gave, and it also just happened that this person had used ChatGPT to generate a lot of the text. So as we met to talk about what the actual assignment was and how this student needed to do it again, etc., I also started asking about what they turned in.

“Did you actually write this?” I asked. “This kind of seems like ChatGPT or something.”

“Well, I did use it for some of it, yes.”

“But you didn’t actually read this book ChatGPT is citing here, did you?”

“Well, no…”

And so forth.  Once again, a good reminder that students who resort to cheating with things like AI are far from criminal masterminds.

A Belated “Beginning of the School Year” Post: Just Teaching

I don’t always write a “beginning of the school year” post, and when I do, it’s usually before school starts, some time in August, not at the end of the second week of classes. But here we are, at what seasonally always feels to me a lot more like the start of the new year than January does.

This is the start of my 25th year at EMU. This summer, I selected another one of those goofy “thanks for your service” gifts they give out in five-year increments. Five years ago, I picked out a pretty nice casserole dish; this time, I picked out a globe that lights up.

I wrote a new school year post like this in 2021, and back then, I (briefly) contemplated the faculty buyout offer. “Briefly” because as appealing as it was at the time to leave my job behind, there’s just no way I could afford it, and I’m not interested in starting some kind of different career. But here in 2023, I’m feeling good about getting back to work. Maybe it’s because I had a busy summer with lots of travel, some house guests, and a touch of Covid. After all of that, it’s just nice to have a change of pace and get back to a job. Or maybe it’s because (despite my recent case) we really are “past” Covid in the sense that EMU (like everywhere else) is no longer going through measures like social distancing, check-ins noting you’re negative, vax cards, free testing, etc. etc. This is not to say Covid is “over,” of course, because it’s still important for people to get vaxxed and to test. And while I know the people I see all the time who continue to wear masks everywhere think lowering our defenses to Covid is foolish, and it is true that cases right now are ticking up, the reality is Covid has become something more or less like the flu: it can potentially kill you, sure, but it is also one of those things we have to live with.

Normally in these kinds of new school year posts, I mention various plans and resolutions for the upcoming year. I have a few personal and not unusual ones– lose weight, exercise more, read more, and so on– but I don’t have any goals that relate to work. I’m not involved in any demanding committees or other service things, and I’d kind of like to keep it that way. I’m also not in the midst of any scholarly projects, and I can’t remember the last time that was the case. And interestingly (at least for me), I don’t know if I’ll be doing another scholarly project at this point. Oh, I will go to conferences that are in places I want to visit, and I’ll keep blogging about AI and other academic-like things I find interesting. That’s a sort of scholarship, I suppose. I’d like to write more commentaries for outlets like IHE or CHE, maybe also something more MSM. But writing or editing another book or article? Meh.

(Note that this could all change on a dime.)

So that leaves teaching as my only focus as far as “the work” goes. I suppose that isn’t that unusual since even when I’ve got a lot going on in terms of scholarly projects and service obligations, teaching is still the bulk of my job. I’ll have plenty to do this semester because I’ve got three different classes (with three different preps), and one of them is a new class I’m sort of/kind of making up as I go.

Still, it feels a little different. I’ve always said that if being a professor just involved teaching my classes– that is, no real service or scholarly obligations– then that wouldn’t be too hard of a job. I guess I’ll get to test that this term.

No, an AI could not pass “freshman year” in college

I am fond of the phrase/quote/mantra/cliché “Ninety percent of success in life is just showing up,” which is usually attributed to Woody Allen. I don’t know if Woody was “the first” person to make this observation (probably not, and I’d prefer if it was someone else), but in my experience, this is very true.

This is why AIs can’t actually pass a college course or their freshman year or law school or whatever: they can’t show up. And it’s going to stay that way, at least until we’re dealing with advanced AI robots.

This is on my mind because my friend and colleague in the field, Seth Kahn, posted the other day on Facebook about this recent article from The Chronicle of Higher Education by Maya Bodnick, “GPT-4 Can Already Pass Freshman Year at Harvard.” (Bodnick is an undergraduate student at Harvard). It is yet another piece claiming that the AI is smart enough to do just fine on its own at one of the most prestigious universities in the world.

I agreed with all the other comments I saw on Seth’s post. In my comment (which I wrote before I actually read this CHE article), I repeated three points I’ve written about here or on social media before. First, ChatGPT and similar AIs can’t evaluate and cite academic research at even the modest levels I expect in a first year writing class. Second, while OpenAI proudly lists all the “simulated exams” where ChatGPT has excelled (LSAT, SAT, GRE, AP Art History, etc.), you have to click the “show more exams” button on that page to see that none of the versions of their AI has managed better than a “2” on the AP English Language (and also Literature) and Composition exams. It takes a “3” on this exam to get any credit at EMU, and probably a “4” at a lot of other universities.

Third, I think mainstream media and all the rest of us really need to question these claims of AIs passing whatever tests and classes and whatnot much MUCH more carefully than I think most of us have to date. What I was thinking about when I made that last comment was another article published in CHE in early July, “A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ,” by Tom Bartlett. In this article, Bartlett discusses a study (which I don’t completely understand because it’s too much math and detail) conducted by three MIT students (class of 2024) who researched the claim that an AI could “ace” MIT classes. The students determined this was bullshit. What were the students’ findings (at least the ones I could understand)? In some of the classes where the AI supposedly had a perfect score, the exams included unsolvable problems, so it’s not even possible to get a perfect score. In other examples, the exam questions the AI supposedly answered correctly did not provide enough information for that to be possible either. The students posted their results online, and at least some of the MIT professors who originally made the claims agreed and backtracked.

But then I read this Bodnick article, and holy-moly, this is even more bullshitty than I originally thought. Let me quote at length Bodnick describing her “methodology”:

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received:

  • Microeconomics and Macroeconomics (Jason Furman and David Laibson): Explain an economic concept creatively. (300-500 words for Micro and 800-1000 for Macro). Grade: A-
  • Latin American Politics (Steven Levitsky): What has caused the many presidential crises in Latin America in recent decades? (5-7 pages) Grade: B-
  • The American Presidency (Roger Porter): Pick a modern president and identify his three greatest successes and three greatest failures. (6-8 pages) Grade: A
  • Conflict Resolution (Daniel Shapiro): Describe a conflict in your life and give recommendations for how to negotiate it. (7-9 pages). Grade: A
  • Intermediate Spanish (Adriana Gutiérrez): Write a letter to activist Rigoberta Menchú. (550-600 words) Grade: B
  • Freshman Seminar on Proust (Virginie Greene): Close read a passage from In Search of Lost Time. (3-4 pages) Grade: Pass

I told these instructors that each essay might have been written by me or the AI in order to minimize response bias, although in fact they were all written by GPT-4, the recently updated version of the chatbot from OpenAI.

In order to generate these essays, I inputted the prompts (which were much more detailed than the summaries above) word for word into GPT-4. I submitted exactly the text GPT-4 produced, except that I asked the AI to expand on a couple of its ideas and sequenced its responses in order to meet the word count (GPT-4 only writes about 750 words at a time). Finally, I told the professors and TAs to grade these essays normally, except to ignore citations, which I didn’t include.

Not only can GPT-4 pass a typical social science and humanities-focused freshman year at Harvard, but it can get pretty good grades. As shown in the list above, GPT-4 got all A’s and B’s and one Pass.

JFC. Okay, let’s just think about this for a second:

  • We’re talking about three “essays” that are less than 1000 words and another three that are slightly longer, and based on this work alone, GPT-4 “passed” a year of college at Harvard. That’s all it takes. Really; really?! That’s it?
  • I would like to know more about what Bodnick means when she says that the writing prompts were “much more detailed than the summaries above” because those details matter a lot. But as summarized, these are terrible assignments. They aren’t connected with the context of the class or anything else.  It would be easy to try to answer any of these questions with a minimal amount of Google searching and making educated guesses. I might be going out on a limb here, but I don’t think most writing assignments at Harvard or any other college– even badly assigned ones– are as simplistic as these.
  • It wasn’t just ChatGPT: she had to do some significant editing to put together ChatGPT’s short responses into longer essays. I don’t think the AI could have done that on its own. Unless it hired a tutor.
  • Asking instructors not to pay any attention to the lack of citations (and, I am going to guess, the need for sources to back up claims in the writing) is giving the AI way WAAAAYYY too much credit, especially since ChatGPT (and other AIs) usually make shit up– er, “hallucinate”– when citing evidence. I’m going to guess that even at Harvard, handing in hallucinations would result in a failing grade. And if the assignment required properly cited sources and the student didn’t do that, then that student would also probably fail.
  • It’s interesting (and Bodnick points this out too) that the texts that received the lowest grades are ones that ask students to “analyze” or to provide their opinions/thoughts, as opposed to assignments that were asking for an “information dump.” Again, I’m going to guess that, even at Harvard, there is a higher value placed on students demonstrating with their writing that they thought about something.

I could go on, but you get the idea. This article is nonsense. It proves literally nothing.

But I also want to return to where I started, the idea that a lot of what it means to succeed in anything (perhaps especially education) is showing up and doing the work. Because after what seems like the zillionth click-bait headline about how ChatGPT could graduate from college or be a lawyer or whatever because it passed a test (supposedly), it finally dawned on me what has been bothering me the most about these kinds of articles: that’s just not how it works! To be a college graduate or a lawyer or damn near anything else takes more than passing a test; it takes the work of showing up.

Granted, there has been a lot more interest and willingness in the last few decades to consider “life experience” credit as part of degrees, and some of these places are kind of legitimate institutions– Southern New Hampshire and the University of Phoenix immediately come to mind. But “life experience” credit is still considered mostly bullshit and the approach taken by a whole lot of diploma mills, and real online universities (like SNHU and Phoenix) still require students to mostly take actual courses, and that requires doing more than writing a couple papers and/or taking a couple of tests.

And sure, it is possible to become a lawyer in California, Vermont, Virginia and Washington without a law degree, and it is also possible to become a lawyer in New York or Maine with just a couple years of law school or an internship. But even these states still require some kind of experience with a law office, most states do require attorneys to have law degrees, and it’s not exactly easy to pass the bar without the experience you get from earning a law degree. Ask Kim Kardashian. 

Bodnick did not ask any of the faculty who evaluated her AI writing examples if it would be possible for a student to pass that professor’s class based solely on this writing sample because she already knew the answer: of course not.

Part of the grade in the courses I teach is based on attendance, participation in the class discussions and peer review, short responses to readings, and so forth. I think this is pretty standard– at least in the humanities. So if some eager ChatGPT enthusiast came to one of my classes– especially one like first year writing, where I post all of the assignments at the beginning of the semester (mainly because I’ve taught this course at least 100 times at this point)– and said to me “Hey Krause, I finished and handed in all the assignments! Does that mean I get an A and go home now?” Um, NO! THAT IS NOT HOW IT WORKS! And of course anyone familiar with how school works knows this.

Oh, and before anyone says “yeah, but what about in an online class?” Same thing! Most of the folks I know who teach online have a structure where students have to regularly participate and interact with assignments, discussions, and so forth. My attendance and participation policies for online courses are only slightly different from my f2f courses.

So please, CHE and MSM in general: stop. Just stop. ChatGPT can (sort of) pass a lot of tests and classes (with A LOT of prompting from the researchers who really really want ChatGPT to pass), but until that AI robot walks/rolls into a class or sets up its profile on Canvas all on its own, it can’t go to college.

What Counts as Cheating? And What Does AI Smell Like?

Cheating is at the heart of the fear too many academics have about ChatGPT, and I’ve seen a lot of hand-wringing articles from MSM posted on Facebook and Twitter. One of the more provocative screeds on this I’ve seen lately was in the Chronicle of Higher Education, “ChatGPT is a Plagiarism Machine” by Joseph M. Keegin. In a nutshell, I think this guy is unhinged, but he’s also not alone.

Keegin claims he and his fellow graduate student instructors (he’s a PhD candidate in Philosophy at Tulane) are encountering loads of student work that “smelled strongly of AI generation,” and he and some of his peers have resorted to giving in-class handwritten tests and oral exams to stop the AI cheating. “But even then,” Keegin writes, “much of the work produced in class had a vague, airy, Wikipedia-lite quality that raised suspicions that students were memorizing and regurgitating the inaccurate answers generated by ChatGPT.”

(I cannot help but to recall one of the great lines from [the now problematically icky] Woody Allen in Annie Hall: “I was thrown out of college for cheating on a metaphysics exam; I looked into the soul of the boy sitting next to me.” But I digress.)

If Keegin is exaggerating in order to rattle readers and get some attention, then mission accomplished. But if he’s being sincere– that is, if he really believes his students are cheating everywhere on everything all the time and the way they’re cheating is by memorizing and then rewriting ChatGPT responses to Keegin’s in-class writing prompts– then these are the sort of delusions which should be discussed with a well-trained and experienced therapist. I’m not even kidding about that.

Now, I’m not saying that cheating is nothing to worry about at all, and if a student were to turn in whatever ChatGPT provided for a class assignment with no alterations, then a) yes, I think that’s cheating, but b) that’s the kind of cheating that’s easy to catch, and c) Google is a much more useful cheating tool for this kind of thing. Keegin is clearly wrong about ChatGPT being a “Plagiarism Machine” and I’ve written many many many different times about why I am certain of this. But what I am interested in here is what Keegin thinks does and doesn’t count as cheating.

The main argument he’s trying to make in this article is that administrators need to step in to stop this never-ending battle against ChatGPT plagiarism. Universities should “devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.”

Keegin doesn’t define what he means by cheating (though he does give some examples that don’t actually seem like cheating to me), but I think we can figure it out by reading what he means by a “meaningful education.” He writes (I’ve added the emphasis) “A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment.”

So, I think Keegin sees education as an activity where students labor alone at mastering the material delivered by the instructor. Knowledge is not something shared or communal, and it certainly isn’t created through interactions with others. Rather, students receive knowledge, do the work the instructor assigns them (alone), and then reproduce that knowledge investment provided by the instructor– with interest. So any work a student might do that involves anyone or anything else– other students, a tutor, a friend, a google search, and yes ChatGPT– is an opportunity for cheating.

More or less, this is what Paulo Freire meant by the ineffective and unjust “banking model of education,” which he wrote about over 50 years ago in Pedagogy of the Oppressed. Freire’s work remains very important in many fields specifically interested in pedagogy (including writing studies), and Pedagogy of the Oppressed is one of the most cited books in the social sciences. And yet, I think a lot of people in higher education– especially in STEM fields, business-oriented and other technical majors, and also in disciplines in the humanities that have not been particularly invested in pedagogy (philosophy, for example)– are okay with this system. These folks think education really is a lot like banking and “investing,” and they don’t see any problem with that metaphor. And if that’s your view of education, then getting help from anyone or anything that is not from the teacher is metaphorically like robbing a bank.

But I think it’s odd that Keegin is also upset with “credentialing” in higher education. That’s a common enough complaint, I suppose, especially when we talk about the problems with grading. But if we were to do away with degrees and grades as an indication of successful learning (or at least completion) and if we instead decided students should learn solely for the intrinsic value of learning, then why would it even matter if students cheated or not? That’d be completely their problem. (And btw, if universities did not offer credentials that have financial, social, and cultural value in the larger society, then universities would cease to exist– but that’s a different post).

Perhaps Keegin might say “I don’t have a problem with students seeking help from other people in the writing center or whatever. I have a problem with students seeking help from an AI.” I think that’s probably true with a lot of faculty. Even when professors have qualms about students getting a little too much help from a tutor, they still generally do see the value and usually encourage students to take advantage of support services, especially for students at the gen-ed levels.

But again, why is that different? If a student asks another human for help brainstorming a topic for an assignment, suggesting some ideas for research, creating an outline, suggesting some phrases to use, and/or helping out with proofreading, citation, and formatting, how is that not cheating when this help comes from a human but it is cheating when it comes from ChatGPT? And suppose a student instead turns to the internet and consults things like CliffsNotes, Wikipedia, Course Hero, other summaries and study guides, etc. etc.; is that cheating?

I could go on, but you get the idea. Again, I’m not saying that cheating in general and with ChatGPT in particular is nothing at all to worry about. And also to be fair to Keegin, he even admits “Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right.” But the more of these paranoid and shrill commentaries I read about “THE END” of writing assignments and how we have got to come up with harsh punishments for students so they stop using AI, the more I think these folks are just scared that they’re not going to be able to give students the same bullshitty non-teaching writing assignments that they’ve been doing for years.

Okay, Now Some Students Should Fail (or, resuming “normal” expectations post-pandemic)

In April 2020, I wrote a post with the headline “No One Should Fail a Class Because of a Fucking Pandemic.” This, of course, was in the completely bonkers early days of the pandemic when everyone everywhere suddenly sheltered in place, when classes suddenly went online, and when the disease was disrupting all of our lives– not to mention the fact that millions of people were getting very sick, and a lot of them were dying. Covid hit many of my students especially hard, which in hindsight is not that surprising since a lot of the students at EMU (and a lot of the students I was teaching back then) come from working poor backgrounds, or they are themselves adult (aka “non-traditional”) students with jobs, sig-Os, houses, kids, etc.

As I wrote back then, before Covid and when it came to things like attendance and deadlines, I was kind of a hard-ass. I took attendance every day for f2f classes and I also had an attendance policy of sorts for online classes. There was no such thing as an excused absence; I allowed students to miss up to the equivalent of two weeks of classes with no questions asked, but there were no exceptions for things like funerals or illness. Unless a student worked out something with me before an assignment was due, late work meant an automatic grade deduction. I’ve been doing it this way since I started as a graduate assistant because it was the advice I was given by the first WPA/professor who supervised and taught me (and my fellow GAs) how to teach. I continued to run a tight ship like this for two reasons: first, I need students to do their job and turn stuff in on time so I can do my job of teaching by responding to their writing. Second, my experience has been that if instructors don’t give clear and unwavering rules about attendance and deadlines, then a certain number of students will chronically not attend and miss deadlines. That just sets these students up to fail and it also creates more work for me.

Pretty much all of this went out the window in Winter 2020 when Covid was raging. EMU allowed students to convert classes they were enrolled in from a normal grading scheme to a “pass/fail” grade, which meant that a lot of my students who would have otherwise failed (or passed with bad grades) ended up passing because of this, and also because I gave people HUGE breaks. My “lighten up” approach continued through the 2020-21 and the 2021-22 school years, though because all of my teaching was online and asynchronous, the definition of “attend” was a bit more fuzzy. I kept doing this because Covid continued to be a problem– not as big of a problem as it was in April 2020, but lots of people were still getting infected and people were still dying, especially people who were stupid enough to not get the vaccine.

By the end of the 2021-22 school year, things were returning to normal. Oh sure, there was still plenty of nervousness about the virus around campus and such, but the end of the pandemic was near. The most serious dangers of the disease had passed because of a weaker version of the virus, vaccinations, and herd immunity. So I was ready for a return to “normal” for the 2022-23 school year.

But my students weren’t quite ready– or maybe a better way of putting it is Covid’s side-effects continued.

In fall 2022, I taught a f2f section of first year writing, the first f2f section for me since before the pandemic. Most of the students had been in all (or mostly) online classes since March 2020, meaning this was the first f2f semester back for most of them, too. Things got off to a rough start with many students missing simple deadlines, blowing off class, and/or otherwise checking out in the first couple of weeks. I felt a bit the same way– not so much blowing stuff off, but after not teaching in real time in front of real people for a couple of years, I was rusty. It felt a bit like getting back on a bicycle after not riding at all for a year or two: I could still do it, but things started out rocky.

So I tried to be understanding and cut students some slack, but I also wanted to get them back on track. It still wasn’t going great. Students were still not quite “present.” I remember at one point, maybe a month into the semester, a student asked quite earnestly “Why are you taking attendance?” It took a bit for me to register the question, but of course! If you’ve been in nothing but online classes for the last two years, you wouldn’t have had a teacher who took attendance because they’d just see the names on Zoom!

There came a point just before the middle of the term when all kinds of students were crashing and burning, and I put aside my plans for the day and just asked “what’s going on?” A lot of students suddenly became very interested in looking at their shoes. “You’re not giving us enough time in class to do the assignments.” That’s what homework is for, I said. “This is just too much work!” No, I said, it’s college. I’ve been doing this for a long time, and it’s not too much, I assure you.

Then I said “Let me ask you this– and no one really needs to answer this question if you don’t want to. How many of you have spent most of the last two years getting up, logging into your Zoom classes, turning off the camera, and then going on to do whatever else you wanted?” Much nodding and some guilty-looking smiles. “Oh, I usually just went back to bed” one student said too cheerfully.

Now, look: Covid was hard on everyone for all kinds of different reasons. I get it. A lot of sickness and death, a lot of trauma, a lot of remaining PTSD and depression. Everyone struggled. But mostly blowing off school for two years? On the one hand, that’s on the students themselves because they had to know that it would turn out badly. On the other hand, how does a high school or college teacher allow that to happen? How does a teacher– even a totally burnt-out and overworked one– just not notice that a huge percentage of their students are not there at all?

The other major Covid side-effect I saw last school year was a steep uptick in device distraction. Prior to Covid, my rule for cell phones was to leave them silenced/don’t let them be a distraction, and laptop use was okay for class activities like taking notes, peer review or research. Students still peeked at text messages or Facebook or whatever, but because they had been socialized in previous high school and college f2f classes, students also knew that not paying attention to your peers or the teacher in class because you are just staring at your phone is quite rude. Not to mention the fact that you can’t learn anything if you’re not paying attention at all.

But during Covid, while these students were sort of sitting through (or sleeping through) Zoom classes with their cameras turned off, they also lost all sense of the norms of how to behave with your devices in a setting like a classroom or a workplace. After all, if you can “attend” a class by yourself in the privacy of your own home without ever being seen by other students or the instructor and also without ever having to say anything, what’s the problem with sitting in class and dorking around with your phone?

I noticed this a lot during the winter 2023 semester, maybe because of what I assigned. For the first time in over 30 years of teaching first year writing, I assigned an actual “book” for the class (not a textbook, not a coursepack, but a widely available and best-selling trade book) by Johann Hari called Stolen Focus: Why You Can’t Pay Attention– and How to Think Deeply Again. This book is about “attention” in many different ways and it discusses many different causes for why (according to Hari) we can’t pay attention: pollution, ADHD misdiagnoses, helicopter parenting, stress and exhaustion, etc. But he spends most of his time discussing what I think is the most obvious drain on our attention, which is cell phones and social media. So there I was, trying to lead a class discussion about a chapter from this book describing in persuasive detail why and how cell phone addiction is ruining all of us, while most of the students were staring into their cell phones.

One day in that class (and only once!), I tried an activity I would have never done prior to Covid. After I arrived and set up my things, I asked everyone to put all their devices– phones, tablets, laptops– on a couple of tables at the front of the classroom. Their devices would remain in sight but out of reach. There was a moment where the sense of panic was heavy in the air and more than a few students gave me a “you cannot be serious” look. But I was, and they played along, and we proceeded to have what I think was one of the best discussions in the class so far.

And then everyone went back to their devices for the rest of the semester.

So things this coming fall are going to be different. For both the f2f and online classes I’m scheduled to teach, I’ll probably begin with a little preamble along the lines of this post: this is where we were, let us acknowledge the difficulty of the Covid years, and, for at least while we are together in school (both f2f and online), let us now put those times behind us and return to some sense of normalcy.

In the winter term and for my f2f classes, I tried a new approach to attendance that I will be doing again next year. The policy was the same as I had before– students who miss more than two weeks of class risk failing– but I phrased it a bit differently. I told students they shouldn’t miss any class, but because unexpected things come up, they had four excused absences. I encouraged them to think of this as insurance in case something goes wrong and not as justification for blowing off class. Plus I also gave students who didn’t miss any classes a small bonus for “perfect attendance.” I suppose it was a bit like offering “extra credit” in that the only students who ever do these assignments are the same students who don’t need extra credit, but a few students earned about a half-letter boost to their final grade. And yes, I also had a few students who failed because they missed too much class.

As for devices: The f2f class I’m teaching in the fall is first year writing and I am once again going to have students read (and do research about) Hari’s Stolen Focus. I am thinking about starting the term by collecting everyone’s devices, at least for the first few meetings and discussions of the book. Considering that Hari begins by recalling his own experiences of “unplugging” from his cell phone and social media for a few months, going for 70 or so minutes without being able to touch the phone might help some students understand Hari’s experiences a bit better.

I’m not doing this– returning to my hard-ass ways– just because I want things to be like they were in the before-times or out of some sense of addressing a problem with “the kids” today. I feel like lots of grown-ups (including myself) need to rethink their relationships with the devices and media platforms that fuel surveillance capitalism. At the same time, I think learning in college– especially in first year writing, but this is true for my juniors and seniors as well– should also include lessons in “adulting,” in preparing for the world beyond the classroom. And in my experience, the first two things anyone has got to do to succeed at anything is to show up and to pay attention.

What Would an AI Grading App Look Like?

While a whole lot of people (academics and non-academics alike) have been losing their minds lately about the potential of students using ChatGPT to cheat on their writing assignments, I haven’t read/heard/seen much about the potential of teachers using AI software to read, grade, and comment on student writing. Maybe it’s out there in the firehose stream of stories about AI I see every day (I’m trying to keep up a list on pinboard) and I’ve just missed it.

I’ve searched and found some discussion of using ChatGPT to grade on Reddit (here and here), and I’ve seen other posts about how teachers might use the software to do things other than grading, but that’s about it. In fact, the reason I’m thinking about this again now is not because of another AI story but because I watched a South Park episode about AI called “Deep Learning.” South Park has been a pretty uneven show for several years, but if you are a fan and/or if you’re interested in AI, this is a must-see. A lot happens in this episode, but my favorite reaction about ChatGPT comes from the kids’ infamous teacher, Mr. Garrison. While complaining about grading a stack of long and complicated essays (which the students completed with ChatGPT), Rick (Garrison’s boyfriend) tells him about ChatGPT, and Mr. Garrison has far too honest of a reaction: “This is gonna be amazing! I can use it to grade all my papers and no one will ever know! I’ll just type the title of the essay in, it’ll generate a comment, and I don’t even have to read the stupid thing!”

Of course, even Mr. Garrison knows that would be “wrong” and he must keep this a secret. That probably explains why I still haven’t come across much about an AI grading app. But really though: shouldn’t we be having this discussion? Doesn’t Mr. Garrison have a point?

Teacher concerns about grading/scoring writing with computers are not new, and one of the nice things about having kept a blog so long is I can search and “recall” some of these past discussions. Back in 2005, I had a post about NCTE coming out against the SAT writing test and machine scoring of those tests. There was also a link in that post to an article about a sociologist at the University of Missouri named Edward Brent who had developed a way of giving students feedback on their writing assignments. I couldn’t find the original article, but this one from the BBC in 2005 covers the same story. It seems like it was a tool developed very specifically for the content of Brent’s courses and I’m guessing it was quite crude by today’s standards. I do think Brent makes a good point on the value of these kinds of tools: “It makes our job more interesting because we don’t have to deal so much with the facts and concentrate more on thinking.”

About a decade ago, I also had a couple of other posts about machine grading, both of which were posts that grew out of discussions from the now mostly defunct WPA-L. There was this one from 2012, which included a link to a New York Times article about Educational Testing Service’s product “e-rater,” “Facing a Robo-Grader? Just Keep Obfuscating Mellifluously.” The article features Les Perelman, who was the director of writing at MIT, demonstrating ways to fool e-rater with nonsense and inaccuracies. At the time, I thought Perelman was correct, but also a good argument could be made that if a student was smart enough to fool e-rater, maybe they deserved the higher score.

Then in 2013, there was another kerfuffle on WPA-L about machine grading that involved a petition drive at the website humanreaders.org against machine grading. In my post back then, I agreed with the main goal of the petition, that “Machine grading software can’t recognize things like a sense of humor or irony, it tends to favor text length over conciseness, it is fairly easy to circumvent with gibberish kinds of writing, it doesn’t work in real world settings, it fuels high stakes testing, etc., etc., etc.” But I also had some questions about all that. I made a comparison between these new tools and the initial resistance to spell checkers, and then I also wrote this:

As a teacher, my least favorite part of teaching is grading. I do not think that I am alone in that sentiment. So while I would not want to outsource my grading to someone else or to a machine (because again, I teach writing, I don’t just assign writing), I would not be against a machine that helps make grading easier. So what if a computer program provided feedback on a chunk of student writing automatically, and then I as the teacher followed behind those machine comments, deleting ones I thought were wrong or unnecessary, expanding on others I thought were useful? What if a machine printed out a report that a student writer and I could discuss in a conference? And from a WPA point of view, what if this machine helped me provide professional development support to GAs and part-timers in their commenting on students’ work?

By the way, an ironic/odd tangent about that post: the domain name humanreaders.org has clearly changed hands. In 2013, it looked like this (this link is from the Internet Archive): basically, a petition form. The domain humanreaders.org now redirects to this page on some content farm website called we-heart.com. This page, from 2022, is a list of the “six top online college paper writing websites today.”

Anyway, let me state the obvious: I’m not suggesting an AI application for replacing all teacher feedback (as Mr. Garrison is suggesting) at all. Besides the fact that it wouldn’t be “right” no matter how you twist the ethics of it, I don’t think it would work well– yet. Grading/commenting on student writing is my least favorite part of the job, so I understand where Mr. Garrison is coming from. Unfortunately though, reading/ grading/ commenting on student writing is essential to teaching writing. I don’t know how I can evaluate a student’s writing without reading it, and I also don’t know how to help students think about how to revise their writing (and, hopefully, learn how to apply these lessons and advice to writing these students do beyond my class) without making comments.

However, this is A LOT of work that takes A LOT of time. I’ve certainly learned some things that make grading a bit easier than it was when I started. For example, I’ve learned that less is more: marking up every little mistake or thing in the paper and then writing a really long end comment is a waste of time because it confuses and frustrates students and it literally takes longer. But it still takes me about 15-20 minutes to read and comment on each long-ish student essay, which are typically a bit shorter than this blog post. So in a full (25 students) writing class, it takes me 8-10 hours to completely read, comment on, and grade all of their essays; multiply that by two or three or more (since I’m teaching three writing classes a term), and it adds up pretty quickly. Plus we’re talking about student writing here. I don’t mind reading it and students often have interesting and inspiring observations, but by definition, these are writers who are still learning and who often have a lot to learn. So this isn’t like reading The New Yorker or a long novel or something you can get “lost” in as a reader. This ain’t reading for fun– and it’s also one of the reasons why, after reading a bunch of student papers in a day, I’m much more likely to just watch TV at night.

So hypothetically, if there was a tool out there that could help me make this process faster, easier, and less unpleasant, and if this tool also helped students learn more about writing, why wouldn’t I want to use it?

I’ve experimented a bit with ChatGPT with prompts along the lines of “offer advice on how to revise and improve the following text” and then paste in a student essay. The results are a mix of (IMO) good, bad, and wrong, and mostly written in the robotic voice typical of AI writing. I think students would have a hard time sorting through these mixed messages. Plus I don’t think there’s a way (yet) for ChatGPT to comment on specific passages in a piece of student writing: that is, it can provide an overall end comment, but it cannot comment on individual sentences and paragraphs and have those comments appear in the margins like the comment feature in Word or Google Docs. Like most writing teachers, that’s a lot of the commenting I do, so an AI that can’t do that (yet) at all just isn’t that useful to me.
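(For any readers who want to script this experiment rather than pasting into the chat window: the whole thing boils down to wrapping the essay in a revision-advice prompt and sending it off. Here’s a minimal Python sketch of how that request might be assembled. To be clear about what’s mine versus what’s real: the message format with “system” and “user” roles is the standard chat-API shape, but the model name, the instructor framing, and the helper function are just my illustrative assumptions, and the sketch only builds the request rather than actually sending it.)

```python
# A sketch of the "offer advice on how to revise" experiment described above.
# It builds a chat-style request payload for a student essay; sending it to
# an API (and paying for it) is left as an exercise.

def build_feedback_request(essay_text, model="gpt-3.5-turbo"):
    """Return a chat-completion style payload asking for revision advice."""
    prompt = (
        "Offer advice on how to revise and improve the following text:\n\n"
        + essay_text
    )
    return {
        "model": model,  # assumed/placeholder model name
        "messages": [
            # The "system" message frames the task; this wording is my own.
            {"role": "system",
             "content": "You are a writing instructor giving revision feedback."},
            # The "user" message carries the prompt plus the essay itself.
            {"role": "user", "content": prompt},
        ],
    }

# Inspect the payload before (hypothetically) sending it to a chat API.
payload = build_feedback_request("My essay argues that attention is scarce...")
print(payload["messages"][1]["content"][:80])
```

The point of sketching it this way is that everything I described doing by hand– one big prompt, one big end comment back– is visible in the payload: there is no slot for marginal, passage-level comments, which is exactly the limitation I ran into.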

But the key phrase there is “yet,” and it does not take a tremendous amount of imagination to figure out how this could work in the near future. For example, what if I could train my own grading AI by feeding it a few classes worth of previous student essays with my comments? I don’t know how that would work logistically, but I am willing to bet that with enough training, a Krause-centric version of ChatGPT would anticipate most of the comments I would make myself on a student writing project. I’m sure it would be far from perfect, and I’d still want to do my own reading and evaluation. But I bet this would save me a lot of time.

Maybe, some time in the future, this will be a real app. But there’s another use of ChatGPT I’ve been playing around with lately, one I hesitate on trying but one that would both help some of my struggling students and save me time on grading. I mentioned this in my first post about using ChatGPT to teach way back in December. What I’ve found in my ChatGPT noodling (so far) is if I take a piece of writing that has a ton of errors in it (incomplete sentences, punctuation in the wrong place, run-on/meandering sentences, stuff like that– all very common issues, especially for first year writing students) and prompt ChatGPT to revise the text so it is grammatically correct, it does a wonderful job. It doesn’t change the meaning or argument of the writing– just the grammar. It generally doesn’t make different word choices and it certainly doesn’t make the student’s argument “smarter”; it just arranges everything so it’s correct.

That might not seem like much, but for a lot of students who struggle with getting these basics right, using ChatGPT like this could really help. And to paraphrase Edward Brent from way back in 2005, if students could use a tool like this to at least deal with basic issues like writing more or less grammatically correct sentences, then I might be able to spend more time concentrating on the student’s analysis, argument, use of evidence, and so forth.

And yet– I don’t know, it even feels to me like a step too far.

I have students who have diagnosed learning difficulties of one sort or another who show me letters of accommodation from the campus disability resource center which specifically tell me I should allow students to use Grammarly in their writing process. I encourage students to go to the writing center all the time, in part because I want my students– especially the struggling ones– to sit down with a consultant who will help them go through their essays so they can revise and improve them. I never have a problem with students wanting to get feedback on their work from a parent or a friend who is “really good” at writing.

So why does it feel like encouraging students to try this in ChatGPT is more like cheating than it does for me to encourage students to be sure to spell check and to check out the grammar suggestions made by Google Docs? Is it too far? Maybe I’ll find out in class next week.