Here’s another freakout piece about AI: James D. Walsh’s New York Magazine article “Everyone is Cheating Their Way Through College.”1 The TLDR version is the headline. “Everyone” is cheating. No one wants to do the assignments. Cheaters are super-duper sophisticated. Teachers are helpless. Higher education is now irrelevant. Etc., etc.
Walsh frames his piece with the story of Chungin “Roy” Lee, a student recently kicked out of Columbia for some rather sophisticated AI-enabled cheating on computer programming tasks, I believe both in some of his courses and in an internship interview. He has since launched a startup called Cluely, which claims to be an undetectable AI tool that helps the user, well, cheat in virtually any situation, including on dates and in job interviews. Lee sees nothing wrong with this: Walsh quotes him as saying “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating.”
Walsh is tapping into the myth of the “mastermind” cheater, the student so brilliant they could do the work if they wanted to but prefer to cheat. In the real world, mastermind cheating basically does not exist, which is exactly why Lee’s story has been retold in all kinds of places, including this New York Magazine article: cheaters don’t usually raise over $5 million in VC start-up money with an app they created. Rather, 99.99999% of the time (and, in my 30+ years of teaching experience, 100% of the time), students who cheat are not very smart about it,2 and the reason they cheat is that they are failing the course and desperate to try anything to pass.
The cheaters Walsh talks to for this article (though maybe not all cheaters, as I will get to in a moment) all claim “everyone” is already using ChatGPT et al. for all of their assignments, so what’s the big deal? I’ve seen surveys, like this one summarized by Campus Technology, that claim close to 90% of students “already use AI in their studies,” but that’s not what my students have told me, and it’s not really what the survey results say, either. I’d believe that 90% of college students have tried AI, but that’s not the same as saying they regularly use it. According to this survey, it’s more like 54% of students who said they used AI “at least on a weekly basis,” and the percentages were even lower for using AI to do things like create a first draft of an essay.3
I could go on about the ways I think Walsh is wrong, but for me this article raises a larger question, one at the heart of AI hand-wringing and resistance: what, exactly, is “cheating” in a college class?
I think everyone would agree that if a student turns in work that they did not do themselves, that’s cheating. The most obvious example in a writing class is a student handing in a paper that someone else wrote. But I don’t think it is cheating for students to seek help on their writing assignments, and the line between getting help and cheating can be fuzzy. Here are three relatively recent non-AI-related examples I’ve had to deal with:
- I teach a class called “Writing for the Web” in which (among other things) I require students to work through a series of (still) free HTML and CSS tutorials on Codecademy, and I also require them to use WordPress to make a basic website. A lot of my students struggle with the technical aspects of these projects, and I always tell them to seek help from me, from each other, and from friends. Occasionally, a struggling student will get help from a more techno-savvy friend, and sometimes the line between “getting help” and “getting someone else to do the work” gets crossed. One student in particular welcomed and encouraged a little too much help from a friend, though the student still did most of the writing. Is this cheating?
- I had a first-year writing student who went to see a writing tutor (although not one in the EMU writing center) about one of the assignments. I always think it is a good idea for students to seek help and advice from others outside the class — friends and family, but also tutors available on campus or even someone they might pay. I insist that students do all of their writing in Google Docs for a variety of reasons — mostly as a way for me to see their writing process and to help me when grading revisions, but also because it discourages AI cheating. When I looked at the version history and the document comments, I saw that large chunks of the document had actually been written by the tutor. Is this cheating?
- Also in first-year writing, I had a student who handed in an essay much more polished than the same student’s earlier work. I suspected the essay was written by someone else, so I called the student in for a conference. After I asked a few questions about some of the details in the essay, the student said, “Wait, you don’t think I wrote this, do you?” “No, I don’t, actually,” I said. The student said, “Well, I didn’t type it. What happened was I sat down with my mom and told her what the essay was supposed to be about, and then she wrote it all down for me.” Is this cheating?
I think the first example is kind of cheating, but because the extra help was more about the coding and less about the writing, I didn’t penalize that student. The second example could count as cheating because someone other than the student did the work, but it’s hard to blame the student: the tutor broke one of the cardinal rules of tutoring, which is to help but never actually do the client’s/tutee’s work for them. The third example strikes me as clearly cheating, and every person I’ve told this story to believes the student had to have known they were cheating. It’s probably true that the student was lying to me, but what if they really did think this was just getting help? Maybe Mom had been doing this for them all the way through high school.4
While I think other college writing teachers would mostly agree with the previous paragraph, there is nowhere near that level of consensus about cheating and AI. Annette Vee has a good post here (in a newsletter sponsored by Norton, fwiw) about this question and about AI policies more generally. Usefully, Vee shares several different policies, including language for banning AI outright.
My own policy is pretty much the same as Vee’s, which is also very similar to Nature’s AI policy for publications. First, you cannot use the writing from AI verbatim because “any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.” Second, if a writer does use AI as part of the process (brainstorming, researching, summarizing, proofreading, etc.), they need to explain, in some detail, how they used it. So now, when my students turn in an essay, they also need to include an “AI Use Statement” in which they explain what AI tools they used, what kinds of prompts, how they applied the results, and so forth. I think both my students and I are still trying to figure out how much detail these AI Use Statements need, but that’s a slightly different topic.5
Anyway, while I am okay with students getting help from AI in more or less the same way they might get help from another human, I think a lot of teachers (especially AI refusers) are not.
Take this example of what Walsh sees as AI cheating:
> Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copying and pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”
>
> Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be?” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”
Now, I don’t think AI advice on outlining is especially helpful, and I don’t think any teacher should be asking for “tidy five-page papers.” If AI means teachers have to stop assigning writing as a product and instead teach writing as a process, then I am all for it. But regardless of the usefulness of AI outline advice, does what Wendy did count as cheating? Walsh seems to think it does, and a lot of AI refusers would see it as cheating as well.
If Wendy cut and pasted text directly from the AI and just dumped it into her essay, then yes, that’s cheating — though proving AI cheating like that isn’t easy.6 But let’s assume she didn’t do that and instead used the outline as just another brainstorming technique. I do not think this counts as cheating, and the fact that Wendy probably has some professors who do is what makes this so confusing for Wendy and every other student nowadays.
Eventually, educators will reach a consensus on what is and isn’t AI cheating, and while I’m obviously biased, I think the consensus will more or less line up with my thoughts. But because faculty can’t agree on this now, it is essential that we take the time to decide on an AI policy and to explain that policy as clearly as possible to our students. This is especially important for teachers who don’t want their students to use AI at all, which is why instead of “refusing” AI, educators ought to be “paying attention” to it.
- The article is behind a paywall, but I had luck accessing it via 12ft.io ↩︎
- Though I will admit that I may have had mastermind cheaters in the past who were so successful I never caught on… ↩︎
- The other issue about when/why students cheat — with AI or anything else — is that it depends a lot on the grade level of the student. The vast majority of problems I’ve had with cheaters, generally and with AI in particular, have been with first-year students in gen ed composition and rhetoric. I rarely have cheating problems in more advanced courses with juniors and seniors. ↩︎
- Ultimately, I made this student rewrite their essay on their own. As I recall, the student ended up failing the course because they didn’t turn in a number of assignments and missed too many classes, which is a pretty typical profile of the kind of student who resorts to cheating. ↩︎
- As far as I can tell, I was the only teacher any of my students had last year with an AI policy like this. As a result, the genre of the “AI Use Statement” was obviously unfamiliar to them, and their responses were all over the map. So one of the things on my “to do” list for preparing to teach in the fall is to develop some better models and better language about how much detail to include. ↩︎
- As I’ve already mentioned, this is one of the reasons why I use Google Docs: I can look at a document’s “Version History” and see how students put their essays together. Between looking at that and just reading the essay, I can usually spot something suspicious. When I think a student is cheating with AI (and even though I spend a lot of time explaining what I think is acceptable and unacceptable AI use, this still happened several times last school year in first-year writing), I talk to the student and tell them why I think it’s AI. So far, they’ve all confessed. I let them redo the assignment without AI, and I tell them that if they do it again, they’ll fail the class. That, too, happened last school year, but only once. ↩︎