Six Things I Learned After a Semester of Lots of AI

Two years ago (plus about a week!), I wrote about how “AI Can Save Writing by Killing ‘The College Essay,'” meaning that if AI can be used to respond to bad writing assignments, maybe teachers will focus more on teaching writing as a process the way that scholars in writing studies have been talking about for over 50 years. That means an emphasis on “showing your work” through a series of scaffolded assignments, peer review activities, opportunities for revision, and so forth.

This past semester, I decided to really lean into AI in my classes. I taught two sections of first-year writing where the general research topic for everyone was “your career goals and AI,” and where I allowed (even encouraged) the use of AI under specific circumstances. I also taught an advanced class for majors called “Digital Writing” where the last two assignments were all about trying to use AI to “create” or “compose” “texts” (the scare quotes are intentional there). I’ve been blogging/substacking about this quite a bit since summer and there are more details I’m not getting to here because it’s likely to be part of a scholarly project in the near future.

But since the fall semester is done and I have popped the (metaphorical) celebratory bottle of bubbly, I thought I’d write a little bit about some of the big-picture lessons about teaching writing with (and against) AI I learned this semester.

Teachers can “refuse” or “resist” or “deny” AI all they want, but they should not ignore it.

As far as I can tell from talking with my students, most of my colleagues did not address AI in their classes at all. A few students reported that they did discuss and use AI in some of their other classes. I had several students in first-year writing who were interior design majors and all taking a course where the instructor introduced them to AI design tools– sounded like an interesting class. I had a couple of students tell me an instructor “forbid” the use of AI but with no explanation of what that meant. Most students told me the teacher never brought up the topic of AI at all.

Look, you can love AI and think it is going to completely transform learning and education, or you can hate AI all you want, wish it had never been invented, and do all you can to break that AI machine with your Great Enoch sledgehammers. But ignoring it or wishing it away is ridiculous.

For my first-year writing students, most of whom readily admitted they used AI a lot in high school to do things that were probably cheating, I spent some time explaining how they could and could not use AI. I did so in part to teach about how I think AI can be a useful tool as part of the process of writing, but I also did this to establish my credibility. I think a lot of students end up cheating with AI because they think that the teacher is clueless about it– and I think a lot of times, students are right.

You’re gonna need some specific rules and guidelines about AI– especially if you want to “refuse” or “resist” it.

I have always included on my syllabi an explicit policy about plagiarism, and this year I added language that makes it clear that copying and pasting large chunks of text from AI is cheating. I did allow and encourage first-year writing students to use AI as part of their process, and I required my advanced writing students to use AI as part of their “experiments” in that class. But I also asked students to include an “AI Use Statement” with their final drafts, one that explained what AI systems they used (and that included Grammarly), what prompts they used, how they used the AI feedback in their essay, and so forth. Because this was completely new to them (and me too), these AI Use Statements were sometimes a lot less complete and accurate than I would have preferred.

I also insisted that students write with Google Docs for each writing assignment and for all steps in the process, from the very first hint of a first draft until they hand it in to me. Students need to share the document with me so I can edit it. I take a look at the “version history” of the Google Doc, and if I suddenly see pages of clear prose magically appear in the essay, we have a discussion. That seemed to work well.

Still, some students are going to cheat with AI, and often without realizing that they’re cheating.

Even with the series of scaffolded assignments, the use of Google Docs, and all of my warnings, I did catch a few students cheating with AI, in both intentional and not-so-intentional ways. Two of these examples were similar to old-school plagiarism. One was from a student from another country who had some cultural and language disconnections about the expectations of American higher education (to put it mildly); I think first-year writing was too advanced, and this student should have been advised into an ESL class. Another was a student who was late on a short assignment and handed in an obviously AI-generated text (thanx, Google Docs!). I gave this person a stern warning and another chance, and they definitely didn’t do that again.

As I wrote about in this post about a month ago, I also had a bunch of students who followed the AI more closely than the assignment itself on the first assignment, the Topic Proposal. This is a short essay where students write about how they came up with their topic and initial thesis for their research for the semester. Instead, a lot of students asked AI what it “thought” of their topic and thesis, and then they more or less summarized the AI responses, which were inevitably about why the thesis was correct. Imagine a mini research paper but without any research.

The problem was, that wasn’t the assignment. Rather, the assignment asked students to describe how they came up with their thesis idea: why they were interested in the topic in the first place, what kinds of other topics they considered, what sorts of brainstorming techniques they used, what their peers told them, and so forth. In other words, students tried to use the AI to tell them what they thought, and that just didn’t work. It ended up being a good teachable moment.

A lot of my students do not like AI and don’t use it that much. 

This was especially true in my more advanced writing class– where, as far as I can tell, no one used AI to blatantly cheat. For two of the three major projects of the semester, I required students to experiment with AI and then to write essays where they reflected/debriefed on their experiments while making connections to the assigned readings. Most of these students, all of whom were some flavor of an English major or writing minor, did not use AI for the reflection essays. They either felt that AI was just “wrong” in so many different ways (unethical, gross, unfair, bad for the environment, etc.), or they didn’t think the AI advice on their writing (other than some Grammarly) was all that useful for them.

This was not surprising; after all, students who major or minor in something English-related usually take pride in their writing, and they don’t want to turn that over to AI. In the freshman composition classes, I had a few students who never used AI either, judging from what they told me in their AI Use Statements. But a lot of students’ approaches to AI evolved as the semester went on, and by the time they were working on the larger research-driven essay where all the parts from the previous assignments come together, they said things like they asked ChatGPT for advice on one part of the essay, but it wasn’t useful advice, so they ignored it.

But some students used AI in smart and completely undetectable ways.

This was especially true in the first year writing class. Some of the stronger writers articulated in some detail in their AI Use Statements how they used ChatGPT (and other platforms) to brainstorm, to suggest outlines for assignments, to go beyond Grammarly proofreading, to get more critical feedback on their drafts, and so forth. I did not consider this cheating at all because they weren’t getting AI to do the work for them; rather, they were getting some ideas and feedback on their work.

And here’s the thing that’s important: when a student (or anyone else) uses AI effectively and for what it’s really for, there is absolutely no way for the teacher (or any other reader) to possibly know.

The more time I have spent studying and teaching about AI, the more skeptical I have become about it. 

I think my students feel the same way, and this was especially true of the students in my advanced class who were directly studying and experimenting with many different AI platforms and tasks. The last assignment for the course asked students to use AI to do or make something that they could not have possibly done by themselves. For example, one student used AI to teach themselves to play chess and was fairly successful with that— at least up to a point. Another student tried to get ChatGPT to teach them how to play the card game Euchre, though less successfully, because the AI kept “cheating.” Another student asked the AI to code a website, and the AI was pretty good at that. Several students tried to use AI tools to compose music; like me, I guess, they listen to lots of music and wish they could play an instrument and/or compose songs.

What was interesting to me (and, I think, to most of my students) was how quickly they typically ran into the AI’s limitations and their own. Sometimes students wanted the AI to do something the AI simply could not do; for example, the problem with playing Euchre with the AI (according to the student) was that it didn’t keep track of what cards had already been played— thus the cheating. But the bigger problem was that without any knowledge of how to accomplish the task on their own, students found the AI of little use. For example, the student who used AI to code a website still had no idea what any of the code meant, nor did they know what to do with it to make it into a real website. Students who knew nothing about music couldn’t get very far trying to write/create songs. In other words, it was not that difficult for students to discover ways AI fails at a task, which in many ways is far more interesting than discovering what it can accomplish.

I’m also increasingly skeptical of the hype and role of AI in education, mainly because I spent most of the 2010s studying MOOCs. Remember them? They were going to be the delivery method for general education offerings everywhere, and by 2030 or 2040 or so, MOOCs were going to completely replace all but the most prestigious universities all over the world. Well, that obviously didn’t happen. But that didn’t mean the end of MOOCs; in fact, there are more people taking MOOC “courses” now than there were during the height of the MOOC “panic” around 2014. It’s just that nowadays, MOOCs are mostly for training (particularly in STEM fields), certificates, and “edutainment” along the lines of MasterClass.

I think AI is different in all kinds of ways, not the least of which is that AI is likely to be significantly more useful than a chatbot or a grammar checker. I had several first-year students this semester write about AI and their future careers in engineering, logistics, and finance, and they all had interesting evidence about both how AI is being used right now and how it will likely be used in the future. The potential of AI changing the world at least as much as another recent General Purpose Technology, “the internet,” is certainly there.

Does that mean AI is going to have as great of an impact on education as the internet did? Probably, and teachers have had to make all kinds of big and small changes to how they teach things because of the internet, which was also true when writing classes first took up computers and word processing software.  But I think the fundamentals of teaching (rather than merely assigning) writing still work.

IT’S A WITCH!

Reflecting on Melanie Dusseau’s “Burn It Down: A License for AI Resistance”

I don’t completely disagree with Melanie Dusseau’s advice in her recent Inside Higher Ed column Burn It Down: A License for AI Resistance, but there’s something about her over-the-top enthusiasm for “burning it down” that reminds me of this famous scene from Monty Python and the Holy Grail:

Dusseau, who is a creative writing professor at the University of Findlay, writes “Until writing studies adopted generative artificial intelligence as sound pedagogy, I always felt at home among my fellow word nerds in rhet comp and literary studies.” A bit later, she continues:

If you are tired of the drumbeat of inevitability that insists English faculty adopt AI into our teaching practices, I am here to tell you that you are allowed to object. Using an understanding of human writing as a means to allow for-profit technology companies to dismantle the imaginative practice of human writing is abhorrent and unethical. Writing faculty have both the agency and the academic freedom to examine generative AI’s dishonest training origins and conclude: There is no path to ethically teach AI skills. Not only are we allowed to say no, we ought to think deeply about the why of that no.

Then she catalogs the many many mmmmmaaaaaannnnnnyyyyyy problems of AI in prose I found engaging and intentionally funny in its alarmed tone. Dusseau writes:

Resistance is not anti-progress, and pedagogies that challenge the status quo are often the most experiential, progressive and diverse in a world of increasingly rote, Standard English, oat milk sameness. “Burn it down” is a call to action as much as it is a plea to have some fun. The robot revolution came so quickly on the heels of the pandemic that I think a lot of us forgot that teaching can be a profoundly joyful act.

AI resistance/refusal is catching on. The day after I read this article, I came across (via Facebook) a similar albeit much more academic call for resistance, “Refusing GenAI in Writing Studies: A Quickstart Guide” by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. While also calling for the field to “refuse” AI, it’s more of an academic manifesto with a lot of citations; it’s much more nuanced and complicated, and it’s also still a work in progress. For example, sections that are “coming soon” on their WordPress site include “What Is GenAI Refusal?” and “Practicing Refusal.” Perhaps I’ll write more specifically about this when it is closer to finished, but this post isn’t about that.

Anyway, why does “burning it down” make me think of that Monty Python scene? The peasants bring one of the knights (ChatGPT just told me it was “Sir Bedevere the Wise”— let’s hope that’s right!) a witch (or AI) to be burned at the stake. They’re screaming and enraged, wanting to burn her immediately. The knight asks why they believe she’s a witch, and the evidence the peasants offer up is flimsy. The wise knight walks them through the logic of how to test if the woman truly is a witch: to put her on the scales and see if she weighs as much as a duck and thus floats like wood and thus she too is made of wood and will burn for being a witch. (Stick with me here— the punchline at the end has a twist).

Like the mob, Dusseau has had enough of all these witches/AIs. She wants it gone and for it to have never existed in the first place. But since that’s not possible, Dusseau is calling for like-minded writing teachers to refuse to engage. “To the silent, hopeless AI skeptics and Star Trek fans: resistance is not futile. We simply do not have to participate. Let Melville’s Bartleby provide the brat slogan of our license to resist: ‘I would prefer not to.’”

Now, maybe I’m just not hearing the “drumbeat of inevitability” for embracing AI to teach writing because I’m one of these people teaching a lot with/about AI this semester. But I have no idea what she’s talking about. If anything, it seems like most faculty around here have either ignored AI or banned it. Most of my students this semester have told me that AI has not come up as a topic in their other classes at all.

Before one burns it all down, it probably is a good idea to figure out what “it” is. Maybe Dusseau has already done that. Or maybe she is like a lot of my fellow academic AI resisters who don’t know much about AI and think that it is only for brute-force cheating. Maybe she knows better and is making an informed decision about resisting AI; it’s hard for me to tell.

I think her arguments for why we should refuse AI boil down to two. First, AI requires giant data centers and it takes A LOT of electricity and water to run those sites. That is completely true, and that doesn’t even get into the labor exploitation that went into training LLMs and monitoring content, the monopolistic and unregulated giant corporations that control all this, etc. All true, but look: these data centers also power EVERYTHING we do online and they have been an environmental problem for decades. So it’s not that she’s wrong, but I suspect that Dusseau isn’t thinking about refusing Facebook or Google searches anytime soon.

The second argument is that it ruins writing. Like almost every other person I’ve read making this argument, Dusseau references Ted Chiang’s New Yorker article “Why A.I. Isn’t Going to Make Art” in passing. What she doesn’t mention is that Chiang’s definition of art is really about fiction writing, and he sets the bar extremely high as to what counts as “art.” I prefer Matteo Wong’s response in The Atlantic, “Ted Chiang Is Wrong About AI Art,” but I’ll leave that debate for another time.

I think what Dusseau means by “writing” is writing that is personal, expressive, and “creative”: poetry and fiction and the like. Of course, AI is not the right tool for that. It’s not for writing a heartfelt fan letter from a child to an Olympic athlete, as Google found out with the backlash to their “Dear Sydney” ad campaign this summer. (If you don’t know what I’m talking about, check out the great post Annette Vee wrote about this called “Why ‘just right’ is wrong: What the Gemini ad ‘Dear Sydney’ says about writing that people choose to do.”) Everyone I follow/read on AI agrees with this.

But most writing tasks are not personal, expressive, or creative, and that is particularly true for many writing tasks we all have to do sometimes, often reluctantly, for school or for work: routine reports, memos, forms, the kind of things we call “paperwork.” A lot of students are required to write when they would “prefer not to,” which is why students sometimes use AI to cheat on writing assignments. So yes, like Dusseau, I don’t want AI writing my journal entries, personal emails, or anything else that’s writing I choose to do, and I don’t want students to cheat. But there’s a role for AI with some of these not-chosen writing tasks that is perhaps useful and not cheating.

The other problem is that Dusseau’s own resistance is not going to stop any of her students or her colleagues from using AI. I don’t know if AI-based writing tools are going to inevitably be a part of writing pedagogy or not, but I do know that AI is going to continue to be a tool that people are going to continue to use. I have students in all of my classes (though more of them in the class of English majors) who are AI refusers, and I think that’s really important to note here: not all students are on board with this AI stuff either. But for my students who seem to know how to use AI effectively and as something akin to a brainstorming/proofreading/tutoring tool, it seems to work pretty well. And that’s the kind of AI use that is impossible for a teacher to detect.

So to me, the counsel of the knight is best. Before we burn this AI witch, why don’t we see what we’re up against? Why don’t we research this a bit more? Why don’t we not burn it down but instead (to very generally reference Cynthia Selfe’s Technology and Literacy in the 21st Century) pay attention to it and stay on alert?

But here’s the thing: in that Monty Python scene, it turns out she is a witch.

The punchline in that scene goes by so quickly it took me a few viewings to realize it, but the woman does weigh the same as the duck, thus is made out of wood, and thus is a witch. The peasants were right! SHE’S A WITCH!

Because like I said at the beginning of this, I don’t completely disagree with Dusseau. I mean, I still don’t think “burn it down” is a good strategy— we gotta pay attention. But I’m also not saying that she’s wrong about her reasons for resisting AI.

My semester isn’t quite over, and I have to say I am not sure of the benefits of the up-front “here is how to use AI responsibly” approach I’ve taken this semester, particularly in freshman comp. But I do know an impassioned and spirited declaration to students about why they too should burn it all down is not going to work. If writing teachers don’t want their students to use AI in their courses, they cannot merely wish AI away. They need to learn enough to understand the basics of it, they need to explain to students why it’s a bad idea to use it (or they need to figure out when using AI might be okay), and they’re going to have to change their writing assignments to make them more AI-proof.

AI Cheating as a Teachable Moment

A Simple Example

Back to my “regular programming” with a post/update/stack/whatever these things are called that is more on brand….


My students and I have reached the part of the semester where they are mostly working on finishing the assignments, and where I’m mostly working on reading/commenting/evaluating those assignments. So busy busy busy. Anyway, as kind of an occasional break from that work, I wrote this post in bits and pieces over the last week or two about how a particular example of AI “cheating” became a “teachable moment.”

I think there’s AI CHEATING and there’s AI “cheating,” much in the same way that there is PLAGIARISM and then there’s “plagiarism.” By PLAGIARISM, I mean the version where a student hands in a piece of writing they did not compose at all. The most obvious example is when a student pays someone else to do it, perhaps through an online paper mill. I know this happens, but I don’t think I’ve ever seen it— unless it was so good I didn’t notice.

More typically, students do this cheating themselves by copying, pasting, and slightly tweaking chunks of text from websites to piece together something kind of like the paper. This is usually easy to spot, for two reasons. First, the same Google searches students use to find stuff to cheat with also work for me to find the websites and articles they used. Second and perhaps more importantly, students only plagiarize like this when they know they’re failing and are desperate, which makes it all the more obvious.

The much more common kind of “plagiarism” I see is basically accidental. A lot of students— especially first-year students— do not understand what needs to be cited and what does not. This is because citation is both confusing and a pain in the ass, so students sometimes do not realize they need a citation at all, or they just skip it and figure no one will notice. Fortunately, it’s easy to spot when students drop in a quote from an article without citation because of the shift in the writing: the text goes suddenly from a college freshman grappling with their prose to a polished and professional writer, often with specialized word choices and jargon. And as often as not, students do cite some of the article they’re accidentally plagiarizing, so it’s pretty easy to check.

This is a “teachable moment”: that is, one of those things that happens in a class or an assignment where there’s an opportunity to reinforce something that has already been taught. This is where I remind the student about what we already talked about: how unintentional plagiarism is still plagiarism, that this is specifically an example of why it’s important to cite your sources correctly, and so forth. This tends to click.

Similarly, there’s AI CHEATING and then there’s AI “cheating,” and I have seen examples of both in my first-year writing classes this semester. The big example of extreme AI CHEATING I’ve seen so far this semester is not that interesting because it was so textbook: a desperate, failing student clumsily and obviously used AI, I called the student out about it, the student confessed, and I gave the student the choice to fail or withdraw rather than going through the rigmarole of getting that student expelled (oh yes, that is something I could have done). Slight tangent: if catching AI cheaters is as easy and as obvious as it seems to be, what’s the problem? Conversely, if students are using AI effectively as a tool to help their process (brainstorming, study guides, summarizing complicated texts, proofreading, etc.) and if that use of AI isn’t detectable by the teacher, well, what’s the problem with that?

The AI “cheating” example from this semester was a more interesting and teachable moment. Here’s what happened:

The first assignment in my freshman comp classes is a 2-3 page essay where students explain their initial working thesis and how they came up with it. It’s a low-stakes getting started kind of assignment I grade “complete/incomplete.” As I explain and remind students repeatedly, this is not an essay where they are trying to convince the reader to believe their thesis. Rather, this is an essay about the process of coming up with the working thesis in the first place. What I want students to write about is why they’re interested in their topic, what sorts of brainstorming activities they tried to come up with their topic, what sorts of conversations they had about this project with me and with classmates, and so forth.

This semester, the topic of research in my first year writing classes is “your career goals and AI.” I’ve also spent a lot of class time explaining why I think AI is not that useful for cheating because it just can’t do these assignments very well. But I also explained how AI might be useful as part of the process as well. For example, a lot of these students really struggle with coming up with a good and researchable topic idea/thesis, and even though most of AI’s ideas for a thesis about career goals and AI aren’t great, it does help them get beyond staring at a blank page.

I’ve given a version of this assignment for a long time, and in previous, pre-AI semesters, two or three students (out of 25) messed it up. It was usually because the students didn’t understand the assignment, or they weren’t paying attention to/didn’t do any of the prewriting exercises we discussed in class. So they tried to fake it by writing what ended up being a really short research paper without any research. I gave these students a do-over, and that usually was enough to get them back on track.

This semester, closer to half of the students in my two sections messed this up. I’m sure some of these students just didn’t get the assignment/didn’t do the prewriting activities, but what I think happened more often is that a lot of students got a little lazy and hypnotized by the smooth, mansplaining prose of AI. So instead of remembering what the assignment was about, they just took what the AI was feeding them about their working thesis ideas and tweaked that a bit.

The teachable moment? I met with the students who messed this up, reminded them what the assignment was actually supposed to be, and I pointed out that this was exactly the kind of thing that AI cannot do: it can’t help you write about what you think. At least not yet.

This was a couple weeks ago, and for most of my students, I think it clicked. I still have a number of students who are struggling and unlikely to pass for all kinds of reasons, but that’s typical for freshman comp. Some students (particularly the ones on the way to failing) are still trying to use AI for cheating, but for the most part, I think students have learned the lesson.

I ask students to include an “AI Use Statement” where they describe how they used AI, or to say explicitly that they didn’t use any AI. This is a brand-new thing for both them as students and me as a teacher, so they sometimes forget, or they don’t explain their AI use as clearly as I would like. And I am sure some students are fibbing a little about how much AI they used. But for the most part, what students are telling me is that they aren’t using AI to write at all, or that they’re using Grammarly for proofreading (which I think counts as AI), using an AI for some ideas about a particular paragraph, or getting some other brainstorming or getting-started kind of suggestion.

Which makes this all a teachable moment for me as well: I think the lesson I’ve learned (or re-learned) from this is that the best way to prevent/discourage students from using AI to cheat is to get out in front of the issue. I’m not saying that all writing teachers ought to allow their students to use AI; in fact, as we’re approaching the end of the semester, I’m not sure if it is a good idea to encourage and sanction the use of AI in classes like first-year writing. But I am sure that it is a very good idea for writing (and other kinds of) teachers to be up-front about AI. I think when teachers do spend some time talking about what does or doesn’t work with AI, students are less likely to use it to cheat in that class— if they use it at all.

Four Rules For Discouraging Cheating with AI in Writing Classes

An indirect but positive review of Mollick’s “Co-Intelligence”

This semester, I’m teaching two sections of first year writing (aka freshman comp) and an advanced writing course called Digital Writing, and both have AI elements and themes. In first year writing, the research theme is “Your Career and AI.” In the Digital Writing course, the last two writing projects are going to be waist-deep in writing with AI. Maybe one day I will better understand/make use of Substack’s newsletter function to chronicle these classes in more detail, but that’s later.

For Digital Writing, we’re reading and discussing Ethan Mollick’s Co-Intelligence: Living and Working with AI. If you’re reading posts like this because you too are trying to make sense out of what AI is about, there’s a good chance you’ve already heard of Mollick’s book and his Substack, One Useful Thing. If you haven’t heard of Mollick and you want to know more about AI but you’re overwhelmed by the firehose of news and information, then his book is for you. Co-Intelligence is a well-written, accessible, and thoroughly researched 30,000-foot overview in less than 250 pages printed in a big font. It’s enough to get the “AI curious” up to speed on the current state of things (it was published in April 2024), while also pointing readers to ideas for further reading and research.

Mollick is a business professor at the Wharton School at the University of Pennsylvania, so he is primarily interested in how AI will impact productivity and innovation. I think we conceptualize teaching a bit differently, and like everything I’ve read about AI, Mollick is making some claims I doubt. But we’re mostly on the same page.

One of the most cited/mentioned chapters in Mollick’s book is “Four Rules for Co-Intelligence.” In brief, those rules are:

  • Always invite AI to the table, meaning you have to experiment and try to use AI (or really, different platforms, so AIs) for lots and lots of different things in order to discover what it/they can and can’t do.

  • Be the human in the loop: it’s a bad idea to completely turn over a task to AI, both because AI makes a lot of mistakes (aka hallucinations) and because humans ought to be in charge rather than the other way around.

  • Treat AI like a person (but tell it what kind of person it is). AI doesn’t behave the same way as other computer applications, so Mollick says we need to be conversational with it as if it were a human. Mostly he’s talking about creating context and scenarios in AI prompts, as in “You are an experienced teacher speaking to skeptical students about the value of group work. What advice would you give those students?”

  • Assume this is the worst AI you will ever use, which is perhaps the most accurate of these AI rules.

So, in that spirit, here are four more rules about teaching writing and AI— specifically, what teachers can do to discourage students from using AI to cheat.

Of course, I’m far from the first person to come up with four more rules for AI— I’m not even the first person to come up with four more rules for AI and writing! For example, there’s this fine post from Jane Rosenzweig at her site Writing Hacks “Four Rules for Writing in the Age of AI,” and also this guest post at John Warner’s Substack site by high school teacher and writer Brett Vogelsinger, “Artificial Intelligence and Writing: Four Things I Learned Listening to my High School Students.” Both great posts and great thoughts.

The most common concern about AI I read on Facebook (though not so much on Substack) from other professors and teachers is students using it to cheat on writing assignments. So this post isn’t about how to use AI to teach writing— maybe I’ll write more about that when I have a better sense of the answer. This is about how teachers can create an environment that discourages students from cheating with AI. It’s not foolproof. Sometimes, usually when they are desperate to try anything to pass the class, students cheat.

Teach writing as a process; don’t assign writing as a product.

I kicked off my writing about AI in this blog post from December 2022 “AI Can Save Writing by Killing ‘The College Essay.’” It’s the most frequently read post on the old blog. I wrote it in response to two different articles published in The Atlantic at the time arguing that the new ChatGPT had made writing assignments impossible and irrelevant.

Teaching writing as a process has been the mantra in composition and rhetoric since the late 1970s. Scholars debate the details about what this means, but in a nutshell, teaching writing as a process means setting up a series of assignments that begin with pre-writing invention exercises (freewriting and other brainstorming techniques, for example), activities that lead to rough drafts which are shared with other students through peer review. When students hand this work in, the instructor’s feedback is geared toward revision and (hopefully) improvement on future projects. My first year writing course is typical in that it is about research and students complete a research essay project. But long before we get to that assignment, students complete a series of smaller scaffolded assignments that build up to the larger essay. Again, none of this is new and it is how I was taught to teach writing back in the late 1980s when I started as a graduate teaching assistant.

I teach writing this way because there is good evidence that it works better than merely assigning writing. I also think teaching writing as a process deters plagiarism and other forms of cheating (including with AI). I require students to build their research writing projects through a series of smaller and specialized assignments, and to share their work in progress with other students in peer review. It’s awfully hard to fake this. Also, as I wrote back in July, I now make the process more visible by requiring students to complete their essays from beginning through final revisions on a Google Doc they share with me so I can view the document history and see what it is they did to put their writing together.

In contrast, assigned writing projects have always been much easier to cheat on. Before AI, students cheated with the internet, paper mills, by getting others to do the writing, or (at least according to my father who went to college in the early 1960s) with the library of papers that fraternities kept on hand.

There’s also the issue of the purpose of writing assignments in the first place. Teaching writing as a process is especially important in a course where the subject itself is writing and there is a lot of attention to how students craft their sentences and paragraphs. I realize that’s different from a class where the subject is literature or political science or business administration. But besides the fact that we should teach (not just assign) writing across the curriculum, writing assignments should ask students what they think about something. In research-based courses like freshman comp, students write about the research they did to persuade and inform both me and their classmates about something. It’s one of the reasons why I like teaching this class: my students are always teaching me new things. In my classes that are not as research-based (like Digital Writing), students write and reflect on the assigned readings and other projects of the class in order to share with readers what they think.

Assigned writing tasks tend to seek specific answers based on the content of the course— write about the theme of madness in Hamlet, about the balance of power between the three parts of the federal government, about the key causes of the Great Recession, etc. In evaluating assigned writing, teachers are less interested in what students think and more interested in seeing whether students correctly repeated the content of the course the teacher delivered through lectures, activities, and readings. In other words, assigned writing is an assessment tool, like an exam— and in most cases, it would probably be more effective to use an exam.

Now, teaching writing as a process is A LOT more work for everyone because it means more reading, more teacher commenting, and more checking in with students’ writing as they progress through these assignments. This is why at the vast majority of colleges in the U.S., first year writing courses have 25 or fewer students. Some colleagues who teach lecture courses with 100 or so students who also assign papers have asked me how they’re supposed to teach writing as a process in these courses. My answer is I wouldn’t. Instead, I’d rely on short written responses to readings, quizzes, and exams.

Any course assignment that could be completed without being present in that course is a bad assignment.

A lot of the hype around AI is about how great it is at passing tests— LSAT, GRE, SAT, etc. etc.— and how that is supposed to mean something. But besides the issue of whether AI can pass these tests because it “knows” or because the test questions were part of the content used to create the AI, I think we all know this is not how school works. I mean, if on the first day of a course I introduced all the writing assignments, and then a student showed up on the second day and said “I finished everything— can I get my A now?” the answer, obviously, is no.

Which brings me to this second rule: if a teacher gives students an exam or an assignment that could be successfully completed without ever being in the class, then that’s a bad assignment. This is something I never thought about before AI. In the old old days, I don’t think it made much difference. When I went to college in the mid 1980s, if someone could pass an intro to chemistry exam or a history 101 exam without ever attending the class, what’s the problem? They already had enough mastery of the subject to pass the class anyway. That started to end with students doing Google searches to pass exams, and now that AI can answer all those questions in that history 101 class final in real time, it’s completely over.

AI isn’t attending classes with our students (at least not yet), and so it is not as useful to cheat on exams or assignments that have specific connections to the course. That’s easy enough to do in the kinds of courses I teach, though I have to assume this is more complicated in a subject like calculus where the concepts and methods transcend classroom boundaries. But perhaps an even easier way to address this problem is for the teacher to make participation count as part of the grade. As I discussed in this post, my classes have a participation grade component that counts for about 30% of the grade.

AI detection software doesn’t work and it never will.

A lot of teachers want to skip these first two rules and instead just rely on some kind of app that can detect what parts of a student’s paper were written by an AI. Essentially, they want something like the plagiarism detection software Turnitin many of these teachers have used for years. And as a quick glance at the Turnitin website reveals, the company is expanding from plagiarism detection into AI detection as well.

Plagiarism detection software has been a divisive topic in writing studies for years. While I know lots of teachers routinely require their students to run their papers through Turnitin for a plagiarism check, I never have because I don’t think it’s necessary, and I don’t think Turnitin is as good a tool as many users seem to believe. This is especially true with AI detection. According to Turnitin, the false-positive rate for “fully human-written text” is less than 1%, but the software misses up to 20% of AI-generated writing. And that is just for the very common and very dumb way people cheat with AI: writing a simple prompt and copying and pasting the answer with few changes. I have to assume detection becomes even less effective when the human is using the AI effectively: for brainstorming, proofreading/editing, chatting with it about revision ideas, and so forth.
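To put those error rates in perspective, here’s a back-of-the-envelope sketch in Python. The 1% and 20% figures are the Turnitin numbers quoted above; the class size roughly matches my own semester, but the papers-per-student count is my assumption, just to make the scale concrete:

```python
# Rough arithmetic on Turnitin's self-reported AI-detection rates.
# The rates come from the figures quoted above; the class size and
# paper counts are hypothetical, only meant to make the scale concrete.

false_positive_rate = 0.01   # "less than 1%" of human-written text flagged as AI
miss_rate = 0.20             # up to 20% of simple copy-paste AI text goes undetected

students = 70                # about what I teach in a semester
papers_each = 4              # scaffolded assignments per student (assumed)

human_papers = students * papers_each
wrongly_flagged = human_papers * false_positive_rate

print(f"Honest papers falsely flagged per semester: about {wrongly_flagged:.0f}")
print(f"Simple AI cheating that slips through: {miss_rate:.0%}")
```

In other words, even at Turnitin’s own advertised rates, a teacher relying on the detector could falsely accuse a few honest students every semester while a fifth of the laziest cheating sails through.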

It’s a futile effort, especially as the AIs improve and as all of us (including our students) learn more about how to use them for not just cheating. Which leads me to my last point:

Teachers at all levels need to learn more about AI.

Colleges and universities are certainly trying. The two talks I gave last year about AI were both faculty development events, and the attendance at both was pretty good. I know folks here at EMU have held similar events, and I get the impression this is pretty common at most colleges and universities. And faculty have heard of AI at this point, of course.

The problem is I’m not sure any of the faculty development events or the oodles of news stories about AI have resulted in any real differences in teaching. This is mostly just based on my own sense of things, but I did informally poll my current students (I have about 70 this semester) the other day about AI in other classes they were taking. A few students mentioned classes where they are using AI for various assignments. A few other students mentioned instructors who expressly forbade the use of AI. I asked these students if they thought the instructor had any way of enforcing that; the answer was “no.” But the majority of my students said that the topic has not come up at all. That’s a problem.

I’m not saying every teacher now needs to embrace AI and incorporate it into their teaching. Not at all. Besides experimenting with AI in my teaching, I’ve been doing a lot of writing and reading about AI that is (hopefully) going to turn into a research project. I think my teaching with AI experiments are going well, but I honestly don’t know if this is something I’ll continue to do in the future. I feel the same way about AI generally: it probably is going to “change everything,” but it also might end up being another one of those things (like MOOCs, which was the subject of my last major research project) that never lives up to the hype.

What I am saying though is AI is here now and it looks like it’s going to be (probably) a big deal for some time to come. It is not just going to “go away” and it cannot be ignored. A professor or teacher can continue to refuse to engage with AI for valid ethical or personal reasons, but that is not going to stop everyone else from using it. That includes some of our students who are using AI simplistically to cheat, perhaps by feeding the teacher’s writing assignment into ChatGPT and copying/pasting whatever the AI comes up with. Fortunately, it’s pretty easy to spot that sort of AI use. But what teachers cannot easily recognize or stop is a student who uses AI more in the way that it is really meant to be used: as a tool to help/improve what humans do, not replace it.

So start learning about AI, even if you hate it. Mollick’s book is a good place to start.

Classroom Cell Phone Bans, Before and After Covid

I used to be against these bans, but not anymore

Cell phone bans in K-12 schools have been in the news at the start of this school year. Several states have instituted measures to ban cell phones in elementary and secondary schools, and bans are happening in prestigious private schools as well. The research suggests these policies help students to pay attention in class and also to socialize and interact with their peers in real time. Interestingly enough, a lot of the objections to these policies are coming from parents who are used to being in contact with their children at all times.

It’s a good idea, though I didn’t use to think so.

When cell phones first showed up 20 or 25 years ago, I didn’t think much about a policy in my classes because there wasn’t much you could do with those flip phones besides talk to someone, and students knew they couldn’t do that. The main issue back then was students forgetting to silence their phones, which would then go off in the middle of class. I’m not proud to admit this, but if a student’s cell phone rang during a discussion, I would make them sing a song for the group. I had heard this was common practice in Norway and Sweden; the goal was a slightly embarrassing but also funny way of reminding students to turn off their phones. There’s no way I would do anything like this nowadays, of course— though it did work pretty well back then.

When smartphones came along, I continued my kind of non-policy policy: don’t let it be a distraction, and if you need to use it for taking notes or looking something up, feel free. A lot of college instructors instituted strict bans on both phones and laptops, but these policies always struck me as reactionary and unnecessary. I didn’t want to be that sage on the stage who insists on complete attention from every student for every minute of class. And I didn’t want to be a hypocrite either. In faculty meetings, especially the larger ones, most of my colleagues have their laptops open and they are clearly multitasking. That’s what I do. Anyway, I always had a few students who could not resist the distraction and fondled their devices constantly, but I usually let it go, figuring that these students were mostly hurting themselves and that the overall benefits of these devices outweighed the harms.

In fact, as I blogged about back in June and September 2019, I was on a research release in the fall 2019 semester to work on a project officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” but which I more informally called the “Classroom Tech Bans Are Bullshit” project. I was studying the quantitative/experimental research that had been done about students using laptops and devices in classrooms, most of which was done by folks in education and/or psychology, to try to connect it to the practices and qualitative/observational sorts of research on this in writing studies. I gave a talk about starting this project at the Corridors 2019 conference at Oakland University (and I repurposed this for the online version of the CCCCs in 2020) called “Laptop/Cell Phone Bans are Bullshit (or Maybe Not).” The very short version for now is a lot of the scholarship argued it was better for students to take notes by hand rather than with a device, and that (IMO) was and is bullshit. But the “Maybe Not” part of the talk was about the problems of multitasking, how devices themselves are distracting to others, and the ways in which social media applications are designed to be as addictive as slot machines.

The next semester was the start of Covid. I and almost all my EMU colleagues taught online from the middle of that Winter 2020 semester through Winter 2022, and most high schools in Michigan were all online for those two years as well. That time online changed everyone in higher ed, but especially the students. I blogged about this in more detail here at the end of the 2022-23 school year and after being back to teaching f2f. In brief, two years of online courses was enough for a lot of students to forget they couldn’t behave the same way in person and in a classroom as they did when they were online and alone at home and often still in bed. After all, if you’re a student in a Zoom class with the camera off or in an asynchronous online class, no one cares if you’re texting or watching cat videos at the same time as you’re doing online class stuff. The freshmen had more problems with distraction than the juniors and seniors, but even some of the better students in the upper-division classes could not stop staring at their phones right in the middle of discussions.

In other words, I went from a few students not paying attention to most of them not paying attention. This was obviously bad for students, but it was also bad for me. Like I said, I don’t need to be at the center when I’m teaching. But when a lot of students are ignoring everything and everyone around them, including me, it’s hard not to take that personally. And at the end of the day, my students’ behavior was just rude.

So in Fall 2023, I started doing something I never thought I’d do: I began class by asking students to place their cell phones on a table in the front of the room. Their phones are in sight, but out of reach. I’ve had a few students resist this by giving some reason (kids, ill relatives, etc.) why they must be in contact at all times. I tell them to leave their phones on, and if it rings, take the call in the hall. (No one has had to take a call). Also worth mentioning: I explain why I do all this by sharing a version of what I just wrote here, including the reality that learning how to participate in a f2f conversation with other humans without staring at your cell phone is a good adulting skill to have.

The class discussions improved immediately. Sure, some students grumbled about it, but no one complained on the end of the semester course evaluations where students tend to complain about all sorts of things. Now, I only do this for the first three weeks or so of the semester. After they get the idea, I tell them they can keep their phones— as long as they remain face down on the desk or otherwise stowed away. There is often a backsliding moment where I once again collect their phones, but that too is a teachable moment.

This semester when I first asked students to put their phones on the table in the front of the room, none of them hesitated or seemed surprised. I suspect I was not the first teacher they have had since Covid with a similar policy. The other day was the first class meeting in my more advanced class where I told students they could keep their phones, and one student went ahead and put their phone on the table I had previously set up anyway. “I think this helps,” she said.

For what it’s worth, I think this bad behavior with cell phones is fading, at least at the college level. A lot of my students have heard about and thought about cell phone addiction and the problems of various social media platforms, so I do think that there is a lot more awareness of the problem of staring at the phone too much. I think we’re soon returning to before Covid times with cell phones— and lots of other things, too.

A Small Example of the Jagged Frontier and Discovering What ChatGPT Can’t Do

Or, the case of the penless writing logo

One of the things about AI I find most fascinating (and I read about this in other commentaries about ChatGPT and other AIs all the time) is it is really good at lots of things and weirdly bad at lots of other things, and that seems especially true with a lot of things humans do very well. In his book Co-Intelligence: Living and Working with AI, Ethan Mollick describes this as the “jagged technological frontier.” He describes a large study he and some colleagues did (I will probably need to read this and possibly assign it this fall), and in the abstract of that study, they write that AI creates a jagged technological frontier “where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI.” ChatGPT can easily make a pretty good looking (to me) logo, but it cannot separate writing from the word/image/concept of “pen.”

This is going to find its way into my courses this semester; as one of a series of exercises, I want to ask students to discover for themselves something the AI can’t do, and, if possible, make a guess as to why. I guess this would be my example of some of that assignment. I had a “conversation” with ChatGPT the other day to create a logo for the Canvas site for my Digital Writing class this fall. This is using the “not free” version which allows me to generate lots of images (I think the free version limits users to just two a day). ChatGPT won’t let me just share the conversation, so I’ll reproduce/paraphrase it here.

I started with this prompt:

“Draw me a logo for the top of a website for a college class I’m teaching called digital writing. The class examines writing as a technology. Since the course number is 354, incorporate the number somewhere into the design. The university’s colors are green and white, but it is okay to incorporate other colors if you want.”

ChatGPT made this:

Then I asked it to make a JSON script that describes what this image looks like. I’m going to skip that in this post in part because I don’t know enough about JSON, and in part because that’s not the point I’m trying to make. That’s maybe a future post.

Anyway, here’s an image it made with the JSON script:

And here’s another one it made when I asked it to repeat what it did before:

FWIW, this was my favorite and the one I went with— after some minor editing.

Then I wrote: “Now forget the json script. Instead, draw me a logo for the top of a website for a college class I’m teaching called digital writing. The class examines writing as a technology. Since the course number is 354, incorporate the number somewhere into the design. The university’s colors are green and white, but it is okay to incorporate other colors if you want.” Here’s what it drew:

To me, the reproduction of the logo by ChatGPT seems to be getting a little worse in some ways. I mean, “Nourse?”

Then I wrote: “I noticed you added a pen even though that wasn’t in the prompt. There are a lot of other kinds of writing implements and tools. Why did you decide on using a fountain pen?”

ChatGPT responded: “I chose to include a fountain pen because it’s a classic symbol of writing, representing the traditional side of the writing process. The contrast between the fountain pen and digital elements (like circuits or binary code) visually communicates the course’s theme of examining writing as a technology.

“Fountain pens are often seen as timeless and elegant, bridging the gap between old and new, which fits the idea of exploring writing in both traditional and technological contexts. It also has a distinctive and recognizable shape, which makes it effective in a logo.

“If you’d prefer a different writing tool or symbol, I can certainly create a version with an alternative element!”

I wrote: “How do you think it might look if instead of a fountain pen you used a typewriter instead?”

ChatGPT responded: “Using a typewriter in the logo could create a strong visual connection to the history of writing technology, emphasizing the transition from traditional mechanical tools to digital ones. The typewriter could be depicted with keys or as a whole, evoking a sense of retro charm while still aligning with the theme of writing as technology. This could give the logo a more nostalgic feel, yet still relevant to the course’s examination of the evolution of writing tools.

“If you’d like, I can create a version of the logo with a typewriter instead of the fountain pen to see how it looks. Would you like me to do that?”

“Sure, create a version like that” I wrote back. Here’s what it made:

ChatGPT also wrote with this image “Here is the logo with a typewriter element instead of a fountain pen. This version emphasizes the traditional aspect of writing through the typewriter while maintaining a connection to digital technology. If you have any further requests or adjustments, feel free to let me know!” Oh, the mansplaining confidence of ChatGPT, trying to convince me there was no pen anymore!

I responded “That’s interesting, but there is still a pen in the logo. can you eliminate that and the three smaller versions of the logo on the left side and in the right corner?” It produced this:

And ChatGPT wrote “Here is the revised logo, with the pen removed and only the typewriter elements included. The smaller versions of the logo have also been eliminated. Let me know if this version meets your needs or if you would like any further adjustments!” Again, it seems to be glitching more on some of the words and such.

I responded “Nope, the pen is still there and there are still three smaller logos, this time across the bottom of the page. Can you get rid of the pen and the smaller logos?” ChatGPT made this:

I find the inclusion of a photo-realistic pen/mechanical pencil particularly amusing, like ChatGPT is just trying to sneak it in there. You can also see where it’s getting more confused with “W- – +—33ING” or whatever that is. I responded “The small logos are gone, but there is still a writing implement– a pen or a mechanical pencil, I can’t tell. Also you the writing for digital writing and university at the bottom of the logo shouldn’t be there. Can you eliminate these things?” It made this:

It also wrote back “This version removes the pen or pencil, as well as the additional text at the bottom of the logo. The design now focuses entirely on the typewriter and digital elements, with the course number 354 subtly incorporated. If this meets your expectations or if further adjustments are needed, just let me know!” It’s almost as if it is admitting that yeah, the pencil/pen is still there, but at least the typewriter is what is being emphasized.

Here’s the punchline for me. I asked again “Can you delete the pencil, the pen, and the word University from this logo?” ChatGPT’s response surprised me:

“It seems that the tools available to me do not allow direct editing of existing images to remove specific elements like the pencil, pen, or text. Instead, I can generate a new logo without those elements, as I attempted to do in the previous steps. However, if those elements persist in the new generation, it’s due to the constraints of the image generation process.

“If you have access to image editing software, you might be able to manually remove the undesired elements. Alternatively, I can keep trying to refine the prompt to produce a version that meets your specifications. Would you like me to attempt another version or provide instructions for editing the image manually?”

I asked “Can you tell me more in simple layman terms what you mean by the constraints of the image generation process?”

ChatGPT answered “Sure! When I generate images, I’m using a process that interprets a text description to create a visual design. The tool I use is powerful but not perfect—it tries to match the description as closely as possible, but sometimes it includes extra elements or doesn’t fully understand specific requests, like leaving out certain objects or text.

“These ‘constraints’ mean that even when I specify not to include a pen or extra text, the tool might still add them because it’s interpreting the description in a way that makes sense to it, based on patterns it has learned from millions of images.

“So, while I can describe what you want very precisely, the image might still include some unwanted elements because the generation process doesn’t always get it exactly right. This is why there might be some back-and-forth to refine the result, but it’s not always guaranteed to be perfect due to these limitations.”

If I am understanding this correctly, I think what I am running into here is a bit of that jagged frontier and the limits of the AI’s training data. Essentially, I think ChatGPT’s association between images of writing and pens/pencils is so strong that it can’t make a picture of writing that does not include a pen or a pencil.

When you think about it for a second, this isn’t too surprising. Try doing a Google image search for just the word “writing” to see what I mean. Here’s a screenshot of what I came up with from such a search:

It’s no wonder that ChatGPT thinks the idea/word/concept “writing” goes with a pen or pencil, because the data it’s been trained on makes the same association. Which is to say, when you ask a human to create an image of the process of writing, they default to a pen, too.

Though I will say ChatGPT’s advice for me, the human in this loop, was sound: why don’t you just use some image editing software and fix it yourself?

Grading Participation Helps Students

And the first step to participating is attendance

In my new (mis)adventures on Substack, I stumbled across “Grading attendance hurts students” from Jayme Dyer in the Threads feed. Dyer teaches biology and based on my very brief browsing of her site (stack? sub? newsletter? what the hell is this called again?), I am pretty sure we’d agree about most things.

But no, not this one: in my experience, attendance (participation, actually) needs to be a part of a student’s grade.

I’ve had a version of a policy like this for my entire teaching career, and I am comfortable asserting that students who miss two weeks of a 15-week semester essentially fail themselves anyway. These students aren’t just absent a lot; they also don’t turn stuff in. So just like seat belt laws incentivized wearing a seat belt (and undoubtedly saved countless lives), an attendance policy incentivizes the positive behavior of showing up. And I guarantee that I have had students who grumbled about being required to show up who would have otherwise failed.

Dyer’s argument against grading attendance is based on compassion for students and their complicated lives. I get that, and I hear a lot of similar things from many of my fellow writing teachers as well. I teach at an opportunity-granting institution where my students are probably similar to hers (she teaches at a community college). We do have “traditional” students who are 18-21 and living on or near campus, the kind of student more typical at a place like U of Michigan (which is about 7 miles away from where I work, EMU). But we also have students who commute, some from quite a distance away, and that creates a variety of attendance problems. We also have a lot of students who have significant work and family obligations— and that isn’t just the older returning students, either.

Dyer mentions a “secret project” she’s working on that includes reviewing syllabi from dozens of other gen ed biology classes, and she highlights a couple of draconian policies where missing two or three classes could drop a student’s grade by a full letter. That seems crazy and unreasonable to me too.

That said, I don’t think it takes a lot of research for me to claim that students who miss too much class tend to fail. Sure, teachers need to have some compassion and understanding, and they need to remember students have lives where stuff happens sometimes. But to me, a reasonable attendance policy is just like all kinds of rules and laws for things people should do anyway, even if it is arguably a “personal choice.” Take seat belt laws, for example. I’m old enough to remember riding in a car and learning to drive myself before seat belt laws, and I rarely bothered to buckle up. The law requiring it (and the possible ticket, of course) gave me and many other drivers the nudge we needed.

At Eastern, legend has it that the Board of Regents once passed a policy that declared no student could fail a class based on attendance alone. I’ve never found evidence that this policy exists (though I haven’t looked very hard), but whatever. I don’t grade students on attendance; I grade students’ participation, and the first thing a student needs to do to successfully participate is to show up.

Now, Dyer and I are working in different disciplines. I teach writing, and all of the classes I teach have 25 or fewer students. It’s obviously easier to take attendance with 25 students than in a lecture hall with 250, and it’s a lot easier for students in a small class to understand why they need to show up. I have no idea how many students Dyer is working with in her courses, but since she teaches biology, I assume it’s more than mine.

I also think we have different assumptions about what class meetings are for. Dyer writes:

Think about it this way – if a student misses a class, makes up what they missed and performs well on the assessment, should their grade really be lower than a student who attended class and performed equally as well on the assessment?

I think she thinks that the point of a class meeting is for an instructor to deliver content to students, and the measurement of a student’s success in the course is an exam. And I get that— as far as I can tell, this has been the STEM assumption about pedagogy and assessment forever.

In the courses I teach (and I think this is true in most courses in the humanities), we value the stuff students do in these class meetings. The new-ish innovation of the “flipped classroom” is how most people I know have been teaching writing forever. My courses involve a lot of discussion of readings, discussions and brainstorming about the writing assignments, and peer review of those assignments. So “being there” is part of the process, and there’s no way to cram on an exam at the end of the semester to try to make up for not being there.

The other thing is that now that we have AIs that can easily answer any question that might pop up on a gen ed intro to biology exam, it seems to me that this approach to assessing students’ success is going to have to change, and change very soon. One of the many things AI has made me rethink about teaching and learning is that if someone can successfully complete an assignment without attending the course, then that’s not a very good assignment. But that’s a slightly different conversation for a different time.

Anyway, here’s what I do:

Participation in my classes is 30% of the overall grade, and it includes activities like reading responses, small group work, and peer reviews. I don’t have a good way of keeping track of the details of these things in f2f classes, so to figure out a grade for participation, I have students email me and tell me what grade they think they have earned, I respond, and then I base the grade on that. I think this is a surprisingly accurate and effective way of doing this, but that too might be a different post.

Students can’t participate if they aren’t there, so I tell my students they shouldn’t miss any class at all. However, the reality is there are of course legitimate reasons why students have to miss. So my policy is students can miss up to four class meetings— or the equivalent of two weeks in a 15-week semester— for any reason whatsoever. Students can always tell me why they need to miss class, but that’s up to them, and I do not ask for any sort of “note” from someone.

Students who miss five classes fail— or at least they usually fail. Since the age of Covid, I have lightened up on this a bit and I’ve made a handful of exceptions for a few students. I also recently started giving students with perfect attendance a very small bonus, often enough to make a half-letter grade difference.
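For readers who like things spelled out, the policy above can be sketched as a toy function. This is just an illustration of the rules as I’ve described them (four free absences, five usually means failing, perfect attendance earns a small bump); the function name and return labels are mine, not part of any real grading system.

```python
# Toy sketch of my attendance policy, as described above.
# ALLOWED_ABSENCES = two weeks of a 15-week semester that meets twice a week.
ALLOWED_ABSENCES = 4

def attendance_outcome(absences: int) -> str:
    """Return the effect of absences alone on a student's grade."""
    if absences == 0:
        return "small bonus"      # perfect attendance earns a small bump
    if absences <= ALLOWED_ABSENCES:
        return "no penalty"       # any reason, no note required
    return "usually fails"        # five or more misses, barring rare exceptions
```

Of course, the real version involves conversations with actual human beings, not a lookup table, which is rather the point of grading participation in the first place.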

I’ve had a version of a policy like this for my entire teaching career, and I am comfortable in asserting that students who miss two weeks of a 15-week semester essentially fail themselves anyway. These students aren’t just absent a lot; they also don’t turn stuff in. So just like seat belt laws incentivized wearing a seat belt (and undoubtedly saved countless lives), an attendance policy incentivizes the positive behavior of showing up. And I guarantee you that I have had students in classes who grumbled about being required to show up who would have otherwise failed themselves.

Why Teaching Citation Practices (yes, I’m talking MLA/APA style) is Even More Important with AI

A couple weeks ago, I wrote about why I use Google docs to teach writing at all levels. I’ve been using it for years–long before AI was a thing–in part because being able to see the history of a student’s Google doc is a teachable moment on the importance of the writing and revision process. This also has the added bonus of making it obvious if a student is skipping that work (by using AI, by copying/pasting from the internet, by stealing a paper from someone else, etc.) because the document history goes from nothing to a complete document in one step. I’m not saying that automatically means the student cheated, but it does prompt me to have a chat with that student.

In a similar vein, and while I’m thinking about putting together my classes for the fall term, I thought I’d write about why I think teaching citation practices is increasingly important in research writing courses, particularly first-year composition.

TL;DR version: None of this is new or innovative; rather, this is standard “teaching writing as a process” pedagogy and I’ve been teaching research writing like this for decades. But I do think it is even more important to teach citation skills now to help my students distinguish between the different types of sources, almost all of which are digital rather than on paper. Plus this is an assignment where AI might help, but I don’t think it’d help much.


Why I Use Google Docs to Teach Writing, Especially in the Age of AI

I follow a couple different Facebook groups about AI, each of which has become a firehose of posts lately, a mix of cool new things and brand new freakouts. A while back, someone in one of these groups posted about an app to track the writing process in a student’s document as a way of proving that the text was not AI. My response to this was “why not just use Google docs?”

I wish I could be more specific than this, but I can’t find the original post or my comment to it; maybe it was deleted. Anyway, this person asked what I meant, and I explained it briefly, but then I said I was thinking about writing a blog post about it. Here is that post.

For those interested in the tl;dr version: I think the best way to discourage students from handing in work they didn’t create (be that from a paper mill, something copied and pasted from websites, or AI) is to teach writing rather than merely assigning writing. That’s not “my” idea; that’s been the mantra in writing studies for at least 50 years. This is also not a new idea, and one you already know if you use and/or teach with Google docs: it is a great tool for teaching writing because it helps with peer review and collaborative writing, and the version history feature helps me see a student’s writing process, from the beginning of the draft through revisions. And if a student’s draft goes from nothing to complete in one revision, well, then that student and I have a chat.


TALIA? This is Not the AI Grading App I Was Searching For

(My friend Bill Hart-Davidson unexpectedly died last week. At some point, I’ll write more about Bill here, probably. In the meantime, I thought I’d finish this post I started a while ago about the webinar on Instructify’s AI grading app. Bill and I had been texting/talking more about AI lately, and I wish I had had a chance to text/talk more about this. Or anything else.)

In March 2023, I wrote a blog post titled “What Would an AI Grading App Look Like?” I was inspired by what I still think is one of the best episodes of South Park I have seen in years, “Deep Learning.” Follow this link for a detailed summary or look at my post from last year, but in a nutshell, the kids start using ChatGPT to write a paper assignment and Mr. Garrison figures out how to use ChatGPT to grade those papers. Hijinks ensue.

Well, about a month ago and at a time when I was up to my eyeballs in grading, I saw a webinar presentation from Instructify about their AI product called TALIA. The title of the webinar was “How To Save Dozens of Hours Grading Essays Using AI.” I missed the live event, but I watched the recording– and you can too, if you want— or at least you could when I started writing this. Much more about it after the break, but the tl;dr version is this AI grading tool is not the one I am looking for (not surprisingly), and I think it would be a good idea for these tech startups to include people with actual experience teaching writing on their development teams.
