Marc Watkins is right; my flavor of AI skepticism

A “Paying Attention to AI” Substack post…

The other day, I read Marc Watkins's excellent Substack post "AI Is Unavoidable, Not Inevitable," and I would strongly encourage you to take a moment to do the same. Watkins begins by noting that he is "seeing a greater siloing among folks who situate themselves in camps adopting or refusing AI." What follows is not exactly a direct response to these refusing folks, but it's pretty close, and I find myself agreeing with Watkins entirely. As he says, "To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of 'it's very complicated.'"

Like I said, you really should read the whole thing. But I will share this long quote that is so on point:

Many of us have wanted to take a path of actively resisting generative AI’s influence on our teaching and our students. The reasons for doing so are legion—environmental, energy, economic, privacy, and loss of skills, but the one that continually pops up is not wanting to participate in something many of us fundamentally find unethical and repulsive. These arguments are valid and make us feel like we have agency—that we can take an active stance on the changing landscape of our world. Such arguments also harken back to the liberal tradition of resisting oppression, protesting what we believe to be unjust, and taking radical action as a response.

But I do not believe we can resist something we don’t fully understand. Reading articles about generative AI or trying ChatGPT a few times isn’t enough to gauge GenAI’s impact on our existing skills. Nor is it enough to rethink student assessments or revise curriculum to try and keep pace with an ever-changing suite of features.

To meaningfully practice resistance of AI or any technology requires engagement. As I’ve written previously, engaging AI doesn’t mean adopting it. Refusing a technology is a radical action and we should consider what that path genuinely looks like when the technology you despise is already intertwined with the technology you use each day in our very digital, very online world.

Exactly. Teachers of all sorts, but especially those of us who are also researchers and scholars, need to engage with AI well enough to know what we are either embracing or refusing. Refusing without that engagement is, at best, willful ignorance.

AI is difficult to compare to previous technologies (as Watkins says, AI defies analogies), but I do think the emergence of AI now is kind of like the emergence of computers and the internet as tools for writing a couple of decades ago. A pre-internet teacher could still refuse that technology by insisting students take notes by hand, hand in handwritten papers, and take proctored timed exams completed on paper forms. When I started at EMU in 1998, I still had a few very senior colleagues who taught like this, who never touched their ancient office computers, who refused to use email, etc. But try as they might, that pre-internet teacher who required their students to hand in handwritten papers did not make computers and the internet disappear from the world.

It’s not quite the same now with AI as it was with the internet back then because I don’t think we are at the point where we can assume “everyone” routinely uses AI tools all the time. This is why I for one am quite happy that most universities have not rolled out institutional policies on AI use in teaching and scholarship– it’s still too early for that. I’ve been experimenting with incorporating AI into my teaching for all kinds of different reasons, but I understand and respect the choices of my colleagues to not allow their students to use AI. The problem though is refusing AI does not make it disappear out of the students’ lives outside of the class– or even within that class. After all, if someone uses AI as a tool effectively– not to just crudely cheat, but to help learn the subject or as a tool to help with the writing– there is no way for that AI forbidding professor to tell.

Again, engaging with AI (or any other technology) does not mean embracing, using, or otherwise “liking” AI (or any other technology). I spent the better part of the 2010s studying and publishing about MOOCs, and among many other things, I learned that there are some things MOOCs can do well and some things they cannot. But I never thought of my blogging and scholarship as endorsing MOOCs, certainly not as a valid replacement for in-person or “traditional” online courses.

I think that's the point Watkins is trying to make, and for me, that's what academics do: we're skeptics, especially of things based on wild and largely unsubstantiated claims. As Watkins writes, "… what better way to sell a product than to convince people it can lead to both your salvation and your utter destruction? The utopia/dystopia narratives are just two sides of a single fabulist coin we all carry around with us in our pockets about AI."

This is perhaps a bad transition, but thinking about this reminded me of Benjamin Riley’s Substack post back in December, “Who and What comprise AI Skepticism?” This is one of those “read it if you want to get into the weeds” sorts of posts, but the very short version: Casey Newton, who is a well-known technology journalist, wrote about how he thought there are only two camps of AI Skepticism: AI is real and dangerous, and AI is fake and sucks. Well, A LOT of prominent AI experts and writers disputed Newton’s argument, including Riley. What Riley does in his post is describe/create his own taxonomy of nine different categories of AI Skepticism, including one category he calls the “Sociocultural Commentator Critics– ‘the neo-Luddite wing,'” which would include AI refusers.

Go and check it out to see the whole list, but I would describe my skepticism as being most like the "AI in Education Skeptics" and the "Technical AI Skeptics" categories, along with a touch of the "Skeptics of AI Art and Literature" category. Riley says AI in Education Skeptics are "wary of yet another ed-tech phenomena that over-hypes and under-delivers on its promises." I think we all felt the same wariness of ed-tech and over-hype with MOOCs.

Riley's Technical AI Skeptics are science-types, but what I identify with is exploring and exposing AI's limitations. AI failures are at least as interesting to me as AI successes, and they make me question all of these claims about AI passing various tests or whatever. AI can do no wrong in controlled experiments in much the same way that self-driving cars do just fine on a closed course in clear weather. But just like that car doesn't do so great driving itself through a construction zone or a snowstorm, AI isn't nearly as capable outside of the lab.

And I say a touch of the Skeptics of AI Art and Literature because while I don't have a problem with people using AI to make art or to write things, I do think that "there is something essential to being human, to being alive, that we express through art and writing." Actually, this is one of my sources of "cautious optimism" about AI: since it isn't that good at doing the kind of human things we teach directly and indirectly in the humanities, maybe there's a future in these disciplines after all.

I'll add two other reasons why I'm skeptical about AI. First, I wonder about the business model. While this is not exactly my area of expertise, I keep reading pieces by people who do know what they're talking about raising the same questions about where the "return on investment" is going to come from. The emergence of DeepSeek is less about its technical capabilities and more about further disrupting those business plans.

Second, I am skeptical about how disruptive AI is going to be in education. It’s fun and easy to talk with AI chatbots, and they can be helpful for some parts of the writing process, especially when it comes to brainstorming, feedback on a draft, proofreading, and so forth. There might be some promise that today’s AI will enable useful computer-assisted instruction tools that go beyond “drill and kill” applications from the 1980s. And assuming AI continues to develop and mature into a truly general-purpose technology (like electricity, automobiles, the internet, etc.), of course, it will change how everything works, including education. But besides the fact that I don’t think AI is going to ever be good enough to replace the presence of humans in the loop, I don’t think anyone is comfortable with an AI replacing a human teacher (or, for that matter, human physicians, airline pilots, lawyers, etc.).

If there is going to be an ROI opportunity from the trillion dollars these companies have sunk into this stuff, it ain’t going to come from students using AI for school work or from people noodling around with it for fun. The real potential with AI is in research, businesses, and industries that work with enormous data sets and in handling complex but routine tasks: coding, logistics, marketing, finance, research into the discovery of new proteins or novel building materials, and anything involving making predictions based on a large database.

Of course, the fun (and scary and daunting!) part of researching AI and predicting its future is everyone is probably mostly wrong, but some of us might have a chance of being right.

A New Substack About My AI Research: “Paying Attention to AI”

As I wrote about earlier in December, I am “Back to Blogging Again” after experimenting with shifting everything to Substack. I switched back to blogging because I still get a lot more traffic on this site than on Substack, and because my blogging habits are too eclectic and random to be what I think of as a Newsletter. I realize this isn’t true for lots of Substackers, but to me, a Newsletter should be about a more specific “topic” than a blog, and it should be published on a more regular schedule.

So that's my goal with "Paying Attention to AI." We'll see how it works out. But I still want to post those Substack things here too, because this is a platform I control, unlike any of the other ones owned by tech oligarchs or whatever, and because, while I do like Substack, there is still the "Nazi problem" they are trying to work out. Besides, while Substack could be bought out and turned into a dumpster fire (lookin' at you, X), no one is going to buy stevendkrause.com, and that's even if I was selling.

Anyway, here’s the first post on that new Substack space.

Welcome to (working title) Paying Attention to AI

More Notes on Late 20th Century Composition, CAI, Word Processing, the Internet, and AI

My goal for this Substack site/newsletter/etc. is to write (mostly to myself) about what will probably be the last big research/scholarly project of my academic career, but I still don't have a good title. I'm currently thinking "Paying Attention to AI," a reference to Cynthia Selfe's "Technology and Literacy: A Story about the Perils of Not Paying Attention," which was her chair's address at the 1997 Conference on College Composition and Communication before it was republished in the journal College Composition and Communication in 1999 and also expanded into the book Technology and Literacy in the Twenty-First Century.

But I also thought something mentioning AI, Composition, and "More Notes" would be good. That's a reference to "A Note on Composition and Artificial Intelligence," a brief 1983 article by Hugh Burns in the first newsletter issue of what would become the journal Computers and Composition. AI meant something quite different in the late 1970s/early 1980s, of course. Burns was writing then about how research in natural language processing and AI could help improve Computer Assisted Instruction (CAI) programs, which were then seen as one of the primary uses of computer technology in the teaching of writing— along with the new and increasingly popular word processing programs that ran on those newly emerging personal computers.

Maybe I’ll figure out a way to combine the two into one title…

This project is based on a proposal that’s been accepted for the 2025 CCCCs in Baltimore, and also a proposal I have submitted at EMU for a research leave or a sabbatical for the 2025-26 school year. 1 I’m interested in looking back at the (relatively) recent history of the beginnings of the widespread use of “computers” (CAI, personal computers, word processors and spell/grammar checkers, local area networks, and the beginnings of “the internet”).

Burns' and Selfe's articles make nice bookends for this era for me because from the late 1970s until about the mid-1990s, there were hundreds of presentations and articles in major publications in writing studies and English about the role of personal computers and (later) the internet in the teaching of writing. Burns was enthusiastic about the potential of AI research and writing instruction, calling for teachers to use emerging CAI and other tools. It was still largely a theory, though, since in 1983, fewer than 8% of households had a personal computer. By the time Selfe was speaking and then writing 13 or so years later, over 36% of households had at least one computer, and the internet and "World Wide Web" were rapidly taking their place as a general purpose technology altering the ways we do nearly everything, including how we teach and practice writing.

These are also good bookends for my own history as a student, a teacher, and a scholar, not to mention as a writer who dabbled a lot with computers for a long time. I first wrote with computers in the early 1980s while in high school. I started college in 1984 with a typewriter, and I got a Macintosh 512KE by about 1986. I was introduced to the idea of teaching writing in a lab of terminals— not PCs— connected to a mainframe Unix computer when I started my MFA program in fiction writing at Virginia Commonwealth University in 1988. (I never taught in that lab, fwiw.) In the mid-90s, while in my PhD program at Bowling Green State University, the internet and "the web" came along, first as text (remember Gopher and Lynx?) and then as GUI interfaces like Netscape. By the time Selfe was urging the English teachers attending the CCCCs to, well, pay attention to technology, I had started my first tenure-track job.

A lot of what I read about AI right now (mostly on social media and MSM, but also in more scholarly work) has a tinge of the exuberant enthusiasm and/or the moral panic about the encroachment of computer technology back then, and that interests me a great deal. But at the same time, this is a different moment in lots of small and large ways. For one thing, while CAI applications never really caught on for teaching writing (at least beyond middle school), AI shows some real promise in making similar tutoring tools actually work. Of course, there were also a lot of other technologies and tools way back when that had their moments but then faded away. Remember MOOs/MUDs? Listservs? Blogs? And more recently, MOOCs?

So we’ll see where this goes.

1 FWIW: in an effort to make it kinda/sorta fit the conference theme, this presentation is awkwardly titled "Echoes of the Past: Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction." This will almost certainly be the last time I attend the CCCCs, my field's annual flagship conference, because, as I am sure I will write about eventually, I think it has become a shit show. And whether or not this project continues much past the April 2025 conference will depend heavily on the research release time from EMU. Fingers crossed on that.

The Year 2024 was A LOT

This past year was A LOT for me and the rest of my family. So so SO much happened, so much of it horrible and still difficult to comprehend, so much of it fantastic and beautiful. I suppose this “the worst of times/the best of times” sentiment is always kinda true, but I can’t think of another year where there was just so so much and in such extremes.

It’s been a lot. It’s been way too much for one year.

January

We were already well underway with one of the big ticket items of this year, which is building/buying/selling houses and moving for the first time in over 25 years.

On January 7, I started taking Zepbound, which is one of those weight loss drugs in the same category as the one everyone has heard of, Ozempic (though, as I wrote about during the year, it's more complicated than that).

Otherwise, it was mostly the start of the winter term with work (it was the semester of all freshman composition for me), the weather, and watching some cheesy movies here and there.

February

My niece Emily got married in a huge and very Catholic ceremony in Kansas City. This was the first of the nieces/nephews (or cousins or grandchildren, depending on your perspective) to get married, so a big deal for the Krauses. Remarkably, there were no hitches with the weather or anything else.

The idea of moving started to get a lot more real when we were able to do a walk-through of the house right after the inspection of everything that needed to be done before they put up drywall.

Of course, we (mostly me) had been driving by the construction site since November to see the progress, but walking around in what would become (in the order of these pictures) the upstairs/Steve loft area, the stairs descending into the living room/main room, and the kitchen area was pretty cool. The Zepbound adventures continued (I was down about 7 pounds by the end of the month), as did the all first-year writing semester.

March

We started getting real about selling the old house and preparing to move to the new one, and because we had lived in our previous house in Normal Park for 25 years, it was stressful. I mean, we had decades worth of stuff to sort through– pack, sell, toss– and there was all the decluttering and the nervousness of would it sell and would we get what we were asking and all that. It's kind of funny because everyone we talked to about this stuff– including my parents and in-laws– had all moved at least once (and usually twice) in the 25 years we hadn't moved at all.

It’s funny to think about too because Annette grew up as an Air Force brat and her father was in for over 20 years, meaning she moved more than a dozen times before she was 15. I didn’t move that much as a kid, but we did move a couple of times, and in college and through my MFA program, I moved almost every year. So we used to know how to move.

School continued, my adventures with Zepbound continued and I complained about Oprah, I kept messing around with AI, kept teaching, etc., and I turned 58, too.

April

April was the beginning of the “A LOT,” the far too much of the year. We had two open houses on the first Sunday of the month, and then on April 8, Annette and I cleared out to make room for potential buyers to come take a second look while we went to the eclipse. We met our friends Steve and Michelle and their daughter down in Whitehouse, Ohio (just outside of Toledo), which seemed like the easiest place to get to for the totality while avoiding bumper-to-bumper traffic into the “totality zone” in northern Ohio.

As I wrote on Instagram, being there for the totality was intense. I probably won’t be able to see another total eclipse in my lifetime; then again, a cruise in August 2027 in the Mediterranean is not impossible.

We had a second open house, which was nerve-wracking. Remember, we had not had anything to do with selling and buying a house in forever, and everyone told us we'd get an offer immediately, so when that didn't happen, we started contemplating scenarios for how we could swing paying for the new house without money from the sale of the old house and all of that. Well, another open house and we got an offer, and everything worked out– eventually.

And the end of April was when Bill died, suddenly and just a few days after a group of us got together for dinner. That's at the top of my list of the horrible and difficult to comprehend. It still doesn't feel real to me, and I think about Bill almost every day.

May

MSU had a quite large memorial for Bill in early May, which we were able to attend– Will flew back too. There had to be at least 500 people at it, and it was as celebratory about a remarkable life as it could be. I wrote about some of this in early May here, though that post is as much about my own thoughts of mortality as anything else. Like I said, this year has been a lot, and this was the horrible part.

And in mid-May, we closed on both houses, pretty much on the same day. We went to a title office in Ann Arbor and met the guy who bought our house for the first time, and without going into a lot of details, I feel pretty confident that he and his partner (who was there via Facetime) are a great fit, ready for the adventures and challenges of fixing up the place and making it their own. That was the selling part. The buying part of the new house we were able to do electronically, and weirdly, quite literally while we were running errands after the closing where we sold, we received a number of emails to electronically sign some forms and boom, we bought the new house too.

It was and still is kind of bitter-sweet, leaving the old place and the old neighborhood. It was time to move on and the longer we are in the new place, the fewer regrets I have. Still, when you live someplace for 25 years, that place becomes more than just housing, and that is especially true when it is in such a great neighborhood. I still drive through the old neighborhood and the old house about once a week on my way to or from EMU.

Five months after starting Zepbound, I finally got to the full dose of the meds and I was down about 20 pounds.

June

A lot of the last part of May and the first part of June was a complete daze of moving. We decided that the way we'd move was to start taking stuff over a carload at a time (and I did most of the heavy lifting, mostly because Annette was teaching a summer class) and then to hire movers for the big stuff later. I remember talking with my father about this approach to moving, and his joke was that it's sort of like getting hit in the nuts fairly gently every day for a month, or getting hit once really hard. When we move again (no idea when that will be), I think the smarter move would be to do it all at once, but I don't think there's any escaping what Annette and I had erased from our memories after staying put so long: moving sucks.

Also in June: we celebrated our 30th wedding anniversary. Well, sort of. Before we started getting serious about buying a new house, the original plan was to go on a big European adventure that sort of retraced the trip we took for our honeymoon, but we decided to give each other a house instead. The 31st wedding anniversary trip to Europe is coming this spring instead.

As part of the house closing deal, we were able to be in the old house through the first weekend in June, and we had one last Normal Park hurrah by selling lots and lots of stuff in the annual neighborhood big yard sale event. I went one last time on June 10 to mow the lawn, double-check to make sure everything was cleaned up, and to do one last terror selfie.

July

The new house– the cost of it, of course, but also just settling into it and all– meant that, for the first time in I don't know how many years, we didn't travel anyplace this summer. I missed going up north, and we might not be able to do that again this coming year either. And we watched the shitshow that was the presidential election tick by. But there was golf, there was more AI stuff, hanging out with friends, going to art fairs in Plymouth and Ann Arbor, and seeing movies. Annette went to visit her side of things in late July, leaving me to fly solo for a few days, and her parents came back with her to stay in the new place for a while, our first house guests.

August

The in-laws visited, we went for a lovely little overnight stay in Detroit, played some golf, started getting ready for teaching, and I wrote a fair amount about AI here and in a Substack space I switched to in August. (The switching back happened later.) Started feeling optimistic about Kamala's chances…. Oh, and my son defended his dissertation and is now Dr. William Steven Wannamaker Krause (but still Will to me).

September

By September 5, when I wrote this post about both weight loss and Johann Hari's book about Ozempic called Magic Pill, I was down about 35 pounds from Zepbound. The semester was underway with a lot of AI things in all three classes. There was a touch of Covid– Annette tested positive; I don't think I ever did, but I felt not great. My parents visited at the end of September, and of course they too liked the new house.

October

The month started with a joint 60th birthday party for Annette and our friend Steve Benninghoff– they both turned 60 a few months apart. It was the first big party we had here at the new house. During EMU's new tradition of a "Fall break," we went to New York City. We met up with Will and his girlfriend and went to the Natural History Museum (pretty cool), and went with them to see the very funny and silly Oh, Mary! Annette and I also went to see the excellent play Stereophonic and met up with old friends Troy and Lisa, and also Annette, at an old school Italian restaurant that apparently Frank Sinatra used to like a lot. Rachel and Colin came by for dinner when they were in town too. And of course school/work, too.

November

We started by going to see Steve Martin and Martin Short at the Fox Theater in Detroit— great and fun show. Then, of course, there was the fucking election, another bit of horrible for the year. More Substack writing about AI and just being busy with work– the travels and events of October really put me behind with school, and I felt like I spent the last 6 or so weeks of the semester just barely caught up on it all. Will and his girlfriend came out here before Thanksgiving and she flew back home to be with her family. Meanwhile we made our annual trip to Iowa for Thanksgiving/Christmas. A good time that featured some taco pizza the day after the turkey, and happily, very very little discussion of politics.

December

The semester ended more quickly than usual, just a week after Thanksgiving rather than two. I was pretty pleased with the way the semester turned out overall; I definitely learned a lot more about what to do (and not do) with AI in teaching, and I hope my students got something out of it all too.

I ended up switching back to blogging but not quite giving up on Substack, as I talked about in this post. One of my goals for winter 2025 is to start a more focused Substack newsletter on my next (and likely last) academic research project on the history of AI, Computer Aided Instruction, and early uses of word processors in writing pedagogy from the late 70s until the early 90s. Stay tuned for that.

Oh, and the niece who was the first of the cousins to get married? Also the first to have a baby, in early December– thus the first great-grandchild in the family.

There was much baking (in November too), and some decorating and some foggy pictures of the woods. Will and his girlfriend returned (I think Will has been back here more in the last couple of months than he has been in quite a while) and we took a trip to the Detroit Institute of Arts before they left for California to see her family. Will came back here, we made the annual trip to Naples, Florida to see the in-laws, and now here we are.

Like I said, it’s been a lot, and a whole lot of it is bad. I worry about Trump. I miss Bill terribly. He touched a lot of people in his life and so I know I’m not alone on that one.

But I’m also oddly hopeful for what’s to come next. The more we are in the new house, the more it is home. The Zepbound adventure continues (I’m down about 40 pounds from last January), I’m hopeful for Will as he starts a new gig as a post-doc researcher, I’m looking forward to the new term, and I’m looking forward to all that is coming in the new year.

Six Things I Learned After a Semester of Lots of AI

Two years ago (plus about a week!), I wrote about how “AI Can Save Writing by Killing ‘The College Essay,'” meaning that if AI can be used to respond to bad writing assignments, maybe teachers will focus more on teaching writing as a process the way that scholars in writing studies have been talking about for over 50 years. That means an emphasis on “showing your work” through a series of scaffolded assignments, peer review activities, opportunities for revision, and so forth.

This past semester, I decided to really lean into AI in my classes. I taught two sections of first-year writing where the general research topic for everyone was “your career goals and AI,” and where I allowed (even encouraged) the use of AI under specific circumstances. I also taught an advanced class for majors called “Digital Writing” where the last two assignments were all about trying to use AI to “create” or “compose” “texts” (the scare quotes are intentional there). I’ve been blogging/substacking about this quite a bit since summer and there are more details I’m not getting to here because it’s likely to be part of a scholarly project in the near future.

But since the fall semester is done and I have popped the (metaphorical) celebratory bottle of bubbly, I thought I’d write a little bit about some of the big-picture lessons about teaching writing with (and against) AI I learned this semester.

Teachers can “refuse” or “resist” or “deny” AI all they want, but they should not ignore it.

As far as I can tell from talking with my students, most of my colleagues did not address AI in their classes at all. A few students reported that they did discuss and use AI in some of their other classes. I had several students in first-year writing who were interior design majors and all taking a course where the instructor introduced them to AI design tools– sounded like an interesting class. I had a couple of students tell me an instructor “forbid” the use of AI but with no explanation of what that meant. Most students told me the teacher never brought up the topic of AI at all.

Look, you can love AI and think it is going to completely transform learning and education, you can hate AI all you want and wish it had never been invented and do all you can to break that AI machine with your Great Enoch sledgehammers. But ignoring it or wishing it away is ridiculous.

For my first-year writing students, most of whom readily admitted they used AI a lot in high school to do things that were probably cheating, I spent some time explaining how they could and could not use AI. I did so in part to teach about how I think AI can be a useful tool as part of the process of writing, but I also did this to establish my credibility. I think a lot of students end up cheating with AI because they think that the teacher is clueless about it– and I think a lot of times, students are right.

You’re gonna need some specific rules and guidelines about AI– especially if you want to “refuse” or “resist” it.

I have always included on my syllabi an explicit policy about plagiarism, and this year I added language that makes it clear that copying and pasting large chunks of text from AI is cheating. I did allow and encourage first-year writing students to use AI as part of their process, and I required my advanced writing students to use AI as part of their “experiments” in that class. But I also asked students to include an “AI Use Statement” with their final drafts, one that explained what AI systems they used (and that included Grammarly), what prompts they used, how they used the AI feedback in their essay, and so forth. Because this was completely new to them (and me too), these AI Use Statements were sometimes a lot less complete and accurate than I would have preferred.

I also insisted that students write with Google Docs for each writing assignment and for all steps in the process, from the very start of the first hint of a first draft until they hand it in to me. Students need to share this with me so I can edit it. I take a look at the "version history" of the Google Doc, and if I suddenly see pages of clear prose magically appear in the essay, we have a discussion. That seemed to work well.

Still, some students are going to cheat with AI, often without realizing that they’re cheating.

Even with the series of scaffolded assignments, the Google Docs requirement, and all of my warnings, I did catch a few students cheating with AI in both intentional and less intentional ways. Two of these examples were similar to old-school plagiarism. One involved a student from another country who had some cultural and language disconnects about the expectations of American higher education (to put it mildly); I think first-year writing was too advanced and this student should have been advised into an ESL class. Another was a student who was late on a short assignment and handed in an obviously AI-generated text (thanx, Google Docs!). I gave this person a stern warning and another chance, and they definitely didn’t do that again.

As I wrote about in this post about a month ago, I also had a bunch of students who, on the first assignment (the Topic Proposal), followed the AI more closely than the assignment itself. The Topic Proposal is a short essay where students write about how they came up with their topic and initial thesis for their research for the semester. Instead, a lot of students asked AI what it “thought” of their topic and thesis, and then more or less summarized the AI responses, which were inevitably about why the thesis was correct. Imagine a mini research paper but without any research.

The problem was that this wasn’t the assignment. Rather, the assignment asked students to describe how they came up with their thesis idea: why they were interested in the topic in the first place, what other topics they considered, what brainstorming techniques they used, what their peers told them, and so forth. In other words, students tried to use the AI to tell them what they thought, and that just didn’t work. It ended up being a good teachable moment.

A lot of my students do not like AI and don’t use it that much. 

This was especially true in my more advanced writing class– where, as far as I can tell, no one used AI to blatantly cheat. For two of the three major projects of the semester, I required students to experiment with AI and then to write essays where they reflected/debriefed on their experiments while making connections to the assigned readings. Most of these students, all of whom were some flavor of an English major or writing minor, did not use AI for the reflection essays. They either felt that AI was just “wrong” in so many different ways (unethical, gross, unfair, bad for the environment, etc.), or they didn’t think the AI advice on their writing (other than some Grammarly) was all that useful for them.

This was not surprising; after all, students who major or minor in something English-related usually take pride in their writing and they don’t want to turn that over to AI. In the freshman composition classes, I had a few students who never used AI either–judging from what they told me in their AI Use statements. But a lot of students’ approaches to AI evolved as the semester went on, and by the time they were working on the larger research-driven essay where all the parts from the previous assignments come together, they said things like they asked ChatGPT for advice on “x” part of the essay, but it wasn’t useful advice so they ignored it.

But some students used AI in smart and completely undetectable ways.

This was especially true in the first year writing class. Some of the stronger writers articulated in some detail in their AI Use Statements how they used ChatGPT (and other platforms) to brainstorm, to suggest outlines for assignments, to go beyond Grammarly proofreading, to get more critical feedback on their drafts, and so forth. I did not consider this cheating at all because they weren’t getting AI to do the work for them; rather, they were getting some ideas and feedback on their work.

And here’s the thing that’s important: when a student (or anyone else) uses AI effectively and for what it’s really for, there is absolutely no way for the teacher (or any other reader) to possibly know.

The more time I have spent studying and teaching about AI, the more skeptical I have become about it. 

I think my students feel the same way, and this was especially true of the students in my advanced class who were directly studying and experimenting with many different AI platforms and tasks. The last assignment for the course asked students to use AI to do or make something that they could not have possibly done by themselves. For example, one student used AI to teach themselves to play chess and was fairly successful with that– at least up to a point. Another student tried to get ChatGPT to teach them how to play the card game Euchre, though less successfully because the AI kept “cheating.” Another student asked the AI to code a website, and the AI was pretty good at that. Several students tried to use AI tools to compose music; like me, I guess, they listen to lots of music and wish they could play an instrument and/or compose songs.

What was interesting to me and, I think, to most of my students was how quickly they typically ran into the AI’s limitations and their own. Sometimes students wanted the AI to do something it simply could not do; for example, the problem with playing Euchre with the AI (according to the student) was that it didn’t keep track of which cards had already been played– thus the cheating. But the bigger problem was that without any knowledge of how to accomplish the task on their own, the AI was of little use. For example, the student who used AI to code a website still had no idea what any of the code meant, nor did they know what to do with it to make it into a real website. Students who knew nothing about music and tried to write/create songs couldn’t get very far. In other words, it was not that difficult for students to discover ways AI fails at a task, which in many ways is far more interesting than discovering what it can accomplish.

I’m also increasingly skeptical of the hype and role of AI in education, mainly because I spent most of the 2010s studying MOOCs. Remember them? They were going to be the delivery method for general education offerings everywhere, and by 2030 or 2040 or so, MOOCs were going to completely replace all but the most prestigious universities all over the world. Well, that obviously didn’t happen. But that didn’t mean the end of MOOCs; in fact, there are more people taking MOOC “courses” now than there were during the height of the MOOC “panic” around 2014. It’s just that nowadays, MOOCs are mostly for training (particularly in STEM fields), certificates, and as “edutainment” along the lines of Master Class.

I think AI is different in all kinds of ways, not the least of which is that AI is likely to be significantly more useful than a chatbot or a grammar checker. I had several first-year students this semester write about AI and their future careers in engineering, logistics, and finance, and they all had interesting evidence about both how AI is being used right now and how it will likely be used in the future. The potential for AI to change the world at least as much as another recent General Purpose Technology, “the internet,” is certainly there.

Does that mean AI is going to have as great of an impact on education as the internet did? Probably, and teachers have had to make all kinds of big and small changes to how they teach things because of the internet, which was also true when writing classes first took up computers and word processing software.  But I think the fundamentals of teaching (rather than merely assigning) writing still work.

Is Apple Intelligence (and AI) For Dumb and Lazy People?

And the challenges of an AI world where everyone is above average

I’ve been an Apple fanboy since the early 1980s. I owned one Windoze computer years ago that was mostly for games my kid wanted to play. Otherwise, I’ve been all Apple for around 40 years. But what the heck is the deal with these ads for Apple Intelligence?

In this ad (the most annoying of the group, IMO), we see a schlub of a guy, Warren, emailing his boss in idiotic/bro-based prose. He pushes the Apple Intelligence feature and boom, his email is transformed into appropriate office prose. The boss reads the prose, is obviously impressed, and the tagline at the end is “write smarter.” Ugh.

Then there’s this one:

This guy, Lance, is in a board meeting and he’s selected to present about “the Prospectus,” which he obviously has not read. He slowly wheels his office chair and his laptop into the hallway, asks Apple’s AI to summarize the key points in this long thing he didn’t read. Then he slowly wheels back into the conference room and delivers a successful presentation. The tagline on this one? “Catch up quick.” Ugh again.

But in a way, these ads might not be far from wrong. These probably are the kinds of “less than average” office workers who could benefit the most from AI— well, up to a point, in theory.

Among many other things, my advanced writing students and I read Ethan Mollick’s Co-Intelligence, and in several different places in that book, he argues that in experiments when knowledge workers (consultants, people completing a writing task, programmers) use AI to complete tasks, they are much more productive. Further, while AI does not make already excellent workers that much better, it does help less than excellent workers improve. There’s S. Noy and W. Zhang’s Science paper “Experimental evidence on the productivity effects of generative artificial intelligence”; here’s a quote from the editor’s summary:

Will generative artificial intelligence (AI) tools such as ChatGPT disrupt the labor market by making educated professionals obsolete, or will these tools complement their skills and enhance productivity? Noy and Zhang examined this issue in an experiment that recruited college-educated professionals to complete incentivized writing tasks. Participants assigned to use ChatGPT were more productive, efficient, and enjoyed the tasks more. Participants with weaker skills benefited the most from ChatGPT, which carries policy implications for efforts to reduce productivity inequality through AI.

Then there’s S. Peng et al.’s paper “The Impact of AI on Developer Productivity: Evidence from GitHub Copilot.” This was an experiment with a programming AI on GitHub, and the programmers who used AI completed tasks 55.8% faster. And Mollick talks a fair amount about a project he co-authored, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” which found that consultants in an experiment were more productive when allowed to use AI— except when faced with a “jagged technology frontier” problem, which in the study was a technical problem beyond the AI’s abilities. However, one of the problems Mollick and his colleagues observed is that a lot of the subjects in their study often copied and pasted content from the AI with minimal editing, and the AI-using subjects had a much harder time with that jagged frontier problem. I’ll come back to this in a couple more paragraphs.

Now, Mollick is looking at AI as a business professor, so he sees this as a good thing because it improves the quality of the workforce, and maybe it’ll enable employers to hire fewer people to complete the same tasks. More productivity with less labor equals more money, capitalism for the win. But my English major students and I all see ourselves (accurately or not) as well-above-average writers, and we all take pride in that. We like the fact we’re better at writing than most other people. Many of my students are aspiring novelists, poets, English teachers, or some other career where they make money from their abilities to write and read, and they all know that publishing writing that other people read is not something that everyone can do. So the last thing any of us who are good at something want is a technology that diminishes the value of that expertise.

This is part of what is behind various recent declarations about refusing or resisting AI, of course. Part of what motivates someone like Ted Chiang to write about how AI can’t make art is that making art is what he is good at. The last thing he wants is a world where any schmuck (like those dudes in the Apple AI ads) can click a button and be as good as he is at making art. I completely understand this reason for fearing and resisting AI, and I too hope that AI doesn’t someday become humanity’s default storyteller.

Fortunately for writers like Chiang and me and my students, the AI hype does not square with reality. I haven’t played around with Apple AI yet, but the reviews I’ve seen are underwhelming. I stumbled across a YouTube review by Marques Brownlee about the new AI that is quite thorough. I don’t know much about Brownlee, but he has over 19 million subscribers so he probably knows what he is talking about. If you’re curious, he talks about the writing feature in the first few minutes of this video, but the short version is he says that as a professional writer, he finds it useless.

The other issue I think my students and I are noticing is that the jagged frontier Mollick and his colleagues talk about— that is, the line/divide between tasks the AI can accomplish reasonably well and what it can’t— is actually quite large. In describing the study Mollick and his colleagues did which included a specifically difficult/can’t do with AI jagged frontier problem, I think he implies that this frontier is small. But Mollick and his colleagues— and the same is true with these other studies he quotes on this— are not studying AI in real settings. These are controlled experiments, and the researchers are trying to do all they can to eliminate other variables.

But in the more real world with lots of variables, there are jagged frontiers everywhere. The last assignment I gave in the advanced writing class asked students to attempt to “compose” or “make” something with the help of AI (a poem, a play, a song, a movie, a website, etc. etc.) that they could not do on their own. The reflection essays are not due until the last week of class, but we have had some “show and tell” exchanges about these projects. Some students were reasonably successful with making or doing something thanks to AI— and as a slight tangent: some students are better than others at prompting the AI and making it work for them. It’s not just a matter of clicking a button. But they all ran into that frontier, and for a lot of students, that was essentially how their experiment ended. For example, one student was successful at getting AI to generate the code for a website, but this student didn’t know what to do with the code the AI made to actually turn it into a website. A couple of students tried to use AI to write music, but since they didn’t know much about music, their results were limited. One student tried to get AI to teach them how to play the card game Euchre, but the AI kept on doing things like playing cards in the student’s hand.

This brings me back to these Apple ads: I wish they both went on just another minute or so. Right after Warren and Lance confidently look directly at the camera with a smug look that says to viewers “Do you see what I just got away with there?”, they have to follow through on what they have supposedly accomplished, and I have a feeling that would go poorly. Right after Warren’s boss talks with him about that email and right after Lance starts his summary, I am pretty sure they’re gonna get busted. Sort of like what has happened when I have correctly suspected that a student used too much AI and that student can’t answer basic questions about what it is they (supposedly) wrote.

IT’S A WITCH!

Reflecting on Melanie Dusseau’s “Burn It Down: A License for AI Resistance”

I don’t completely disagree with Melanie Dusseau’s advice in her recent Inside Higher Ed column “Burn It Down: A License for AI Resistance,” but there’s something about her over-the-top enthusiasm for “burning it down” that reminds me of this famous scene from Monty Python and the Holy Grail:

Dusseau, who is a creative writing professor at the University of Findlay, writes “Until writing studies adopted generative artificial intelligence as sound pedagogy, I always felt at home among my fellow word nerds in rhet comp and literary studies.” A bit later, she continues:

If you are tired of the drumbeat of inevitability that insists English faculty adopt AI into our teaching practices, I am here to tell you that you are allowed to object. Using an understanding of human writing as a means to allow for-profit technology companies to dismantle the imaginative practice of human writing is abhorrent and unethical. Writing faculty have both the agency and the academic freedom to examine generative AI’s dishonest training origins and conclude: There is no path to ethically teach AI skills. Not only are we allowed to say no, we ought to think deeply about the why of that no.

Then she catalogs the many many mmmmmaaaaaannnnnnyyyyyy problems of AI in prose I found engaging and intentionally funny in its alarmed tone. Dusseau writes:

Resistance is not anti-progress, and pedagogies that challenge the status quo are often the most experiential, progressive and diverse in a world of increasingly rote, Standard English, oat milk sameness. “Burn it down” is a call to action as much as it is a plea to have some fun. The robot revolution came so quickly on the heels of the pandemic that I think a lot of us forgot that teaching can be a profoundly joyful act.

AI resistance/refusal is catching on. The day after I read this article, I came across (via Facebook) a similar albeit much more academic call for resistance, “Refusing GenAI in Writing Studies: A Quickstart Guide” by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. While also calling for the field to “refuse” AI, it’s more of an academic manifesto with a lot of citations; it’s much more nuanced and complicated, and it’s also still a work in progress. For example, sections that are “coming soon” on their WordPress site include “What Is GenAI Refusal?” and “Practicing Refusal.” Perhaps I’ll write more specifically about this when it is closer to finished, but this post isn’t about that.

Anyway, why does “burning it down” make me think of that Monty Python scene? The peasants bring one of the knights (ChatGPT just told me it was “Sir Bedevere the Wise”— let’s hope that’s right!) a witch (or AI) to be burned at the stake. They’re screaming and enraged, wanting to burn her immediately. The knight asks why they believe she’s a witch, and the evidence the peasants offer up is flimsy. The wise knight walks them through the logic of how to test if the woman truly is a witch: to put her on the scales and see if she weighs as much as a duck and thus floats like wood and thus she too is made of wood and will burn for being a witch. (Stick with me here— the punchline at the end has a twist).

Like the mob, Dusseau has had enough with all these witches/AIs. She wants it gone and for it to have never existed in the first place. But since that’s not possible, Dusseau is calling for like-minded writing teachers to refuse to engage. “To the silent, hopeless AI skeptics and Star Trek fans: resistance is not futile. We simply do not have to participate. Let Melville’s Bartleby provide the brat slogan of our license to resist: ‘I would prefer not to.’”

Now, maybe I’m just not hearing the “drumbeat of inevitability” for embracing AI to teach writing because I’m one of these people teaching a lot with/about AI this semester. But I have no idea what she’s talking about. If anything, it seems like most faculty around here have either ignored AI or banned it. Most of my students this semester have told me that AI has not come up as a topic in their other classes at all.

Before one burns it all down, it probably is a good idea to figure out what “it” is. Maybe Dusseau has already done that. Or maybe she is like a lot of my fellow academic AI resisters who don’t know much about AI and think that it is only for brute-force cheating. Maybe she knows better and is making an informed decision about resisting AI; it’s hard for me to tell.

I think her arguments for why we should refuse AI boil down to two. First, AI requires giant data centers and it takes A LOT of electricity and water to run those sites. That is completely true, and that doesn’t even get into the labor exploitation that went into training LLMs and monitoring content, the monopolistic and unregulated giant corporations that control all this, etc. All true, but look: these data centers also power EVERYTHING we do online and they have been an environmental problem for decades. So it’s not that she’s wrong, but I suspect that Dusseau isn’t thinking about refusing Facebook or Google searches anytime soon.

The second argument is that it ruins writing. Like almost every other person I’ve read making this argument, Dusseau references Ted Chiang’s New Yorker article “Why A.I. Isn’t Going to Make Art” in passing. What she doesn’t mention is Chiang’s definition of art is really fiction writing, and he sets the bar extremely high as to what counts as “art.” I prefer Matteo Wong’s response in The Atlantic, “Ted Chiang Is Wrong About AI Art,” but I’ll leave that debate for another time.

I think what Dusseau means by “writing” is writing that is personal, expressive, and “creative,” poetry and fiction and the like. Of course, AI is not the right tool for that. It’s not for writing a heartfelt fan letter from a child to an Olympic athlete, and Google found that out with the backlash to their “Dear Sydney” ad campaign this summer. (If you don’t know what I’m talking about, check out the great post Annette Vee wrote about this called “Why ‘just right’ is wrong: What the Gemini ad ‘Dear Sydney’ says about writing that people choose to do.”) Everyone I follow/read about AI agrees with this.

But most writing tasks are not personal, expressive, or creative, and that is particularly true of many writing tasks we all have to do sometimes, often reluctantly, for school or for work: routine reports, memos, forms, the kind of things we call “paperwork.” A lot of students are required to write when they would “prefer not to,” which is why students sometimes use AI to cheat on writing assignments. So yes, like Dusseau, I don’t want AI writing my journal entries, personal emails, or anything else that’s writing I choose to do, and I don’t want students to cheat. But there’s a role for AI with some of these not-chosen writing tasks that is perhaps useful and not cheating.

The other problem is that Dusseau’s own resistance is not going to stop any of her students or her colleagues from using AI. I don’t know if AI-based writing tools are going to inevitably be a part of writing pedagogy or not, but I do know that AI is going to continue to be a tool that people are going to continue to use. I have students in all of my classes (though more of them in the class of English majors) who are AI refusers, and I think that’s really important to note here: not all students are on board with this AI stuff either. But for my students who seem to know how to use AI effectively and as something akin to a brainstorming/proofreading/tutoring tool, it seems to work pretty well. And that’s the kind of AI use that is impossible for a teacher to detect.

So to me, the counsel of the knight is best. Before we burn this AI witch, why don’t we see what we’re up against? Why don’t we research this a bit more? Why don’t we not burn it down but instead (to very generally reference Cynthia Selfe’s Technology and Literacy in the 21st Century) pay attention to it and stay on alert?

But here’s the thing: in that Monty Python scene, it turns out she is a witch.

The punchline in that scene goes by so quickly it took me a few viewings to catch it, but the woman does weigh the same as the duck, thus is made out of wood, and thus is a witch. The peasants were right! SHE’S A WITCH!

Because like I said at the beginning of this, I don’t completely disagree with Dusseau. I mean, I still don’t think “burn it down” is a good strategy— we gotta pay attention. But I’m also not saying that she’s wrong about her reasons for resisting AI.

My semester isn’t quite over, and I have to say I am not sure of the benefits of the up-front “here is how to use AI responsibly” approach I’ve taken this semester, particularly in freshman comp. But I do know an impassioned and spirited declaration to students about why they too should burn it all down is not going to work. If writing teachers don’t want their students to use AI in their courses, they cannot merely wish AI away. They need to learn enough to understand the basics of it, they need to explain to students why it’s a bad idea to use it (or figure out when using AI might be okay), and they’re going to have to change their writing assignments to make them more AI-proof.

AI Cheating as a Teachable Moment

A Simple Example

Back to my “regular programming” with a post/update/stack/whatever these things are called that is more on brand….

My students and I have reached the part of the semester where they are mostly working on finishing the assignments, and where I’m mostly working on reading/commenting/evaluating those assignments. So busy busy busy. Anyway, as kind of an occasional break from that work, I wrote this post in bits and pieces over the last week or two about how a particular example of AI “cheating” became a “teachable moment.”

I think there’s AI CHEATING and there’s AI “cheating,” much in the same way that there is PLAGIARISM and then there’s “plagiarism.” By PLAGIARISM, I mean the version where a student hands in a piece of writing they did not compose at all. The most obvious example is when a student pays someone else to do it, perhaps from an online paper mill. I know this happens, but I don’t think I’ve ever seen it— unless it was that good I didn’t notice.

More typically, students do this cheating themselves by copying, pasting, and slightly tweaking chunks of text from websites to piece together something kind of like the paper. This is usually easy to spot, for two reasons. First, the same Google searches students use to find stuff to cheat with also work for me to find the websites and articles they used. Second and perhaps more important, students usually only plagiarize like this when they know they’re failing and desperate, which makes it stand out.

The much more common kind of “plagiarism” I see is basically accidental. A lot of students— especially first year students— do not understand what needs to be cited and what does not. This is because citation is both confusing and a pain in the ass, so students sometimes do not realize they had to have a citation at all, or they just skip it and figure no one will notice. Fortunately, it’s easy to spot when students drop in a quote from an article without citation because of the writing shift: the text goes from a college freshman grappling with their prose suddenly to a polished and professional writer, often with specialized word choices and jargon. And as often as not, students do cite some of the article they’re accidentally plagiarizing, so it’s pretty easy to check.

This is a “teachable moment:” that is, one of those things that happens in a class or an assignment where it’s an opportunity to reinforce something that has already been taught. This is where I remind the student about what we already talked about: how unintentional plagiarism is still plagiarism, that this is specifically an example of why it’s important to cite your sources correctly, and so forth. This tends to click.

Similarly, there’s AI CHEATING and then there’s AI “cheating,” and I have seen examples of both in my first year writing classes this semester. The big example of extreme AI CHEATING I’ve seen so far this semester is not that interesting because it was so textbook: a desperate, failing student clumsily and obviously used AI; I called the student out about it; the student confessed; and I gave the student the choice to fail or withdraw rather than going through the rigamarole of getting that student expelled (oh yes, that is something I could have done). Slight tangent: if catching AI cheaters is as easy and as obvious as it seems to be, what’s the problem? Conversely, if students are using AI effectively as a tool to help their process (brainstorming, study guides, summarizing complicated texts, proofreading, etc.) and if that use of AI isn’t detectable by the teacher, well, what’s the problem with that?

The AI “cheating” example from this semester was a more interesting and teachable moment. Here’s what happened:

The first assignment in my freshman comp classes is a 2-3 page essay where students explain their initial working thesis and how they came up with it. It’s a low-stakes getting started kind of assignment I grade “complete/incomplete.” As I explain and remind students repeatedly, this is not an essay where they are trying to convince the reader to believe their thesis. Rather, this is an essay about the process of coming up with the working thesis in the first place. What I want students to write about is why they’re interested in their topic, what sorts of brainstorming activities they tried to come up with their topic, what sorts of conversations they had about this project with me and with classmates, and so forth.

This semester, the topic of research in my first year writing classes is “your career goals and AI.” I’ve also spent a lot of class time explaining why I think AI is not that useful for cheating because it just can’t do these assignments very well. But I also explained how AI might be useful as part of the process as well. For example, a lot of these students really struggle with coming up with a good and researchable topic idea/thesis, and even though most of AI’s ideas for a thesis about career goals and AI aren’t great, it does help them get beyond staring at a blank page.

I’ve given a version of this assignment for a long time, and in previous semesters and pre-AI, two or three students (out of 25) messed it up. It’s usually because the students didn’t understand the assignment, or they weren’t paying attention to/didn’t do any of the prewriting exercises we discussed in class. So they try to fake it by writing what ends up being a really short research paper without any research. I gave these students a do-over, and that usually was enough to get them back on track.

This semester, I had closer to half of the students in my two sections mess this up. I’m sure some of these students just didn’t get the assignment or didn’t do the prewriting activities, but what I think happened more often is that a lot of students got a little lazy and hypnotized by the smooth, mansplaining prose of AI. So instead of remembering what the assignment was about, they just took what the AI was feeding them about their working thesis ideas and tweaked that a bit.

The teachable moment? I met with the students who messed this up, reminded them what the assignment was actually supposed to be, and I pointed out that this was exactly the kind of thing that AI cannot do: it can’t help you write about what you think. At least not yet.

This was a couple weeks ago, and for most of my students, I think it clicked. I still have a number of students who are struggling and unlikely to pass for all kinds of reasons, but that’s typical for freshman comp. Some students (particularly the ones on the way to failing) are still trying to use AI for cheating, but for the most part, I think students have learned the lesson.

I ask students to include an “AI Use Statement” where they describe how they used AI, or say explicitly that they didn’t use any. This is a brand-new thing for both them as students and me as a teacher, so they sometimes forget, or they don’t explain their AI use as clearly as I’d like. And I am sure some students are fibbing a little about how much AI they used. But for the most part, what students are telling me is that they aren’t using AI to write at all, or they’re using Grammarly for proofreading (which I think counts as AI), using an AI for ideas about a particular paragraph, and/or using it for getting started or some other brainstorming kind of suggestion.

Which makes this all a teachable moment for me as well: I think the lesson I’ve learned (or re-learned) from this is that the best way to prevent/discourage students from using AI to cheat is to get out in front of the issue. I’m not saying that all writing teachers ought to allow their students to use AI; in fact, as we’re approaching the end of the semester, I’m not sure if it is a good idea to encourage and sanction the use of AI in classes like first year writing. But I am sure that it is a very good idea for writing (and other kinds of) teachers to be up-front about AI. I think when teachers do spend some time talking about what does or doesn’t work with AI, students are less likely to use it to cheat in that class— if they use it at all.

Messing Around with the Viral “What’s One Thing I Don’t Know About Myself” ChatGPT Prompt

“Self-Discovery Through AI Insights”

I have been catching up from a fall break weekend in New York City (family, friends, shows, etc.) this afternoon. It’s a good thing classes don’t start for me until Thursday. While taking a break from grading and such, I stumbled across this piece on the website TechRadar, “New ChatGPT prompt goes viral with Sam Altman’s approval.” It’s about a viral ChatGPT prompt:

Sam Altman, CEO of OpenAI, the makers of ChatGPT, put his name to a new viral trend on the platform recently when he retweeted it on X (formerly Twitter) with “love this” added in response to a tweet that has got ChatGPT users enthralled.

The original Tweet, from writer Tom Morgan, says simply, “Ask ChatGPT ‘From all of our interactions what is one thing that you can tell me about myself that I may not know about myself.’”

The article goes on to describe how blown away users were by the AI response, how it made them feel better about themselves, etc. It also talks about people hacking the prompt by instead asking ChatGPT to “roast me” based on our interactions. Well, I didn’t try that, but I did try the first one. If you’re curious, here’s a link to the discussion.

I don’t want to suggest I’m the most introspective guy in the world, but I don’t think ChatGPT told me anything about myself that I didn’t already know. But its response was definitely flattering and clever, making connections between my cooking interests and technology, for example. So I can see why it would be a good pick-me-up for users.

Anyway, the thing I really wondered about was the title the AI gave to the discussion. ChatGPT always creates the titles of these discussions, but I thought this one, “Self-Discovery Through AI Insights,” sounded a little too much like marketing or advertising copy.

So I asked ChatGPT about that. It responded that no, to the best of its knowledge, it had not been specifically instructed to say that, and that’s just not how the AI works. I pointed out that certain prompts produce specific responses: ask ChatGPT “tell me how to make a bomb,” and it explains why it cannot do that. Why couldn’t the same coding/protocols be used to require ChatGPT to title conversations from a specific prompt in a specific way? The AI said that theoretically, yes, that could happen.

Could that be used for advertising? Yes. “This would be similar to how search engines or other platforms may show ads or sponsored content. If I were programmed with specific instructions to promote a particular cruise line, I could be guided to prioritize talking about that company, its features, or its benefits when asked certain questions.”

I don’t know if any of this is surprising, but it is kind of interesting, isn’t it? How long will it be before OpenAI does this?

Four Rules For Discouraging Cheating with AI in Writing Classes

An indirect but positive review of Mollick’s “Co-Intelligence”

This semester, I’m teaching two sections of first year writing (aka freshman comp) and an advanced writing course called Digital Writing, and both have AI elements and themes. In first year writing, the research theme is “Your Career and AI.” In the Digital Writing course, the last two writing projects are going to be waist-deep in writing with AI. Maybe one day I will better understand/make use of Substack’s newsletter function to chronicle these classes in more detail, but that’s later.

For Digital Writing, we’re reading and discussing Ethan Mollick’s Co-Intelligence: Living and Working with AI. If you’re reading posts like this because you too are trying to make sense out of what AI is about, there’s a good chance you’ve already heard of Mollick’s book and his Substack, One Useful Thing. If you haven’t heard of Mollick and you want to know more about AI but you’re overwhelmed with the firehose of news and information, then his book is for you. Co-Intelligence is a well-written, accessible, and thoroughly researched 30,000-foot overview in less than 250 pages printed in a big font. It’s enough to get the “AI curious” up to speed on the current state of things (it was published in April 2024), while also pointing readers to ideas for further reading and research.

Mollick is a business professor at the Wharton School at the University of Pennsylvania, so he is primarily interested in how AI will impact productivity and innovation. I think we conceptualize teaching a bit differently, and like everything I’ve read about AI, Mollick is making some claims I doubt. But we’re mostly on the same page.

One of the most cited/mentioned chapters in Mollick’s book is “Four Rules for Co-Intelligence.” In brief, those rules are:

  • Always invite AI to the table, meaning you have to experiment and try to use AI (or really, different platforms, so different AIs) for lots and lots of different things in order to discover what it/they can and can’t do.

  • Be the human in the loop: it’s a bad idea to completely turn over a task to AI, both because AI makes a lot of mistakes (aka hallucinations) and because humans ought to be in charge rather than the other way around.

  • Treat AI like a person (but tell it what kind of person it is). AI doesn’t behave the same way as other computer applications, so Mollick says we need to be conversational with it as if it were human. Mostly he’s talking about creating context and scenarios in AI prompts, as in “You are an experienced teacher speaking to skeptical students about the value of group work. What advice would you give those students?”

  • Assume this is the worst AI you will ever use, which is perhaps the most accurate of these AI rules.

So, in that spirit, here are four more rules about teaching writing and AI— specifically, what teachers can do to discourage students from using AI to cheat.

Of course, I’m far from the first person to come up with four more rules for AI— I’m not even the first person to come up with four more rules for AI and writing! For example, there’s this fine post from Jane Rosenzweig at her site Writing Hacks, “Four Rules for Writing in the Age of AI,” and also this guest post at John Warner’s Substack site by high school teacher and writer Brett Vogelsinger, “Artificial Intelligence and Writing: Four Things I Learned Listening to my High School Students.” Both great posts and great thoughts.

The most common concern about AI I read on Facebook (though not so much on Substack) from other professors and teachers is students using it to cheat on writing assignments. So this post isn’t about how to use AI to teach writing— maybe I’ll write more about that when I have a better sense of the answer. This is about how teachers can create an environment that discourages students from cheating with AI. It’s not foolproof. Sometimes, usually when they are desperate to try anything to pass the class, students cheat.

Teach writing as a process; don’t assign writing as a product.

I kicked off my writing about AI in this blog post from December 2022 “AI Can Save Writing by Killing ‘The College Essay.’” It’s the most frequently read post on the old blog. I wrote it in response to two different articles published in The Atlantic at the time arguing that the new ChatGPT had made writing assignments impossible and irrelevant.

Teaching writing as a process has been the mantra in composition and rhetoric since the late 1970s. Scholars debate the details about what this means, but in a nutshell, teaching writing as a process means setting up a series of assignments that begin with pre-writing invention exercises (freewriting and other brainstorming techniques, for example), activities that lead to rough drafts which are shared with other students through peer review. When students hand this work in, the instructor’s feedback is geared toward revision and (hopefully) improvement on future projects. My first year writing course is typical in that it is about research and students complete a research essay project. But long before we get to that assignment, students complete a series of smaller scaffolded assignments that build up to the larger essay. Again, none of this is new and it is how I was taught to teach writing back in the late 1980s when I started as a graduate teaching assistant.

I teach writing this way because there is good evidence that it works better than merely assigning writing. I also think teaching writing as a process deters plagiarism and other forms of cheating (including with AI). I require students to build their research writing projects through a series of smaller and specialized assignments, and to share their work in progress with other students in peer review. It’s awfully hard to fake this. Also, as I wrote back in July, I now make the process more visible by requiring students to complete their essays from beginning through final revisions on a Google Doc they share with me so I can view the document history and see what it is they did to put their writing together.

In contrast, assigned writing projects have always been much easier to cheat on. Before AI, students cheated with the internet, with paper mills, by getting others to do the writing, or (at least according to my father, who went to college in the early 1960s) with the library of papers that fraternities kept on hand.

There’s also the issue of the purpose of writing assignments in the first place. Teaching writing as a process is especially important in a course where the subject itself is writing and there is a lot of attention to how students craft their sentences and paragraphs. I realize that’s different from a class where the subject is literature or political science or business administration. But besides the fact that we should teach (not just assign) writing across the curriculum, writing assignments should ask students what they think about something. In research-based courses like freshman comp, students write about the research they did to persuade and inform both me and their classmates about something. It’s one of the reasons why I like teaching this class: my students are always teaching me new things. In my classes that are not as research-based (like Digital Writing), students write and reflect on the assigned readings and other projects of the class in order to share with readers what they think.

Assigned writing tasks tend to seek specific answers based on the content of the course— write about the theme of madness in Hamlet, about the balance of power between the three branches of the federal government, about the key causes of the Great Recession, etc. In evaluating assigned writing, teachers are less interested in what students think and more interested in seeing whether students correctly repeated the content of the course the teacher delivered through lectures, activities, and readings. In other words, assigned writing is an assessment tool, like an exam— and in most cases, it would probably be more effective to use an exam.

Now, teaching writing as a process is A LOT more work for everyone because it means more reading, more teacher commenting, and more checking in with students’ writing as they progress through these assignments. This is why at the vast majority of colleges in the U.S., first year writing courses have 25 or fewer students. Some colleagues who teach lecture courses with 100 or so students who also assign papers have asked me how they’re supposed to teach writing as a process in these courses. My answer is I wouldn’t. Instead, I’d rely on short written responses to readings, quizzes, and exams.

Any course assignment that could be completed without being present in that course is a bad assignment.

A lot of the hype around AI is about how great it is at passing tests— LSAT, GRE, SAT, etc. etc.— and how that is supposed to mean something. But besides the issue of whether AI can pass these tests because it “knows” or because the test questions were part of the content used to create the AI, I think we all know this is not how school works. I mean, if on the first day of a course I introduced all the writing assignments, and then a student showed up on the second day and said “I finished everything— can I get my A now?” the answer, obviously, is no.

Which brings me to this second rule: if a teacher gives students an exam or an assignment that could be successfully completed without ever being in the class, then that’s a bad assignment. This is something I never thought about before AI. In the old old days, I don’t think it made much difference. When I went to college in the mid 1980s, if someone could pass an intro to chemistry exam or a history 101 exam without ever attending the class, what’s the problem? They already had enough mastery of the subject to pass the class anyway. That started to end with students doing Google searches to pass exams, and now that AI can answer all those questions in that history 101 class final in real time, it’s completely over.

AI isn’t attending classes with our students (at least not yet), so it is not as useful for cheating on exams or assignments that have specific connections to the course. That’s easy enough to do in the kinds of courses I teach, though I have to assume this is more complicated in a subject like calculus, where the concepts and methods transcend classroom boundaries. But perhaps an even easier way to address this problem is for the teacher to make participation count as part of the grade. As I discussed in this post, my classes have a participation grade component that counts for about 30% of the grade.

AI detection software doesn’t work and it never will.

A lot of teachers want to skip these first two rules and instead just rely on some kind of app that can detect what parts of a student’s paper were written by an AI. Essentially, they want something like the plagiarism detection software Turnitin, which many of these teachers have used for years. Though as a quick glance at the Turnitin website reveals, they are expanding from plagiarism detection into AI detection as well.

Plagiarism detection software has been a divisive topic in writing studies for years. While I know lots of teachers routinely require their students to run their papers through Turnitin for a plagiarism check, I never have, because I don’t think it’s necessary and I don’t think Turnitin is as good a tool as many users seem to believe. This is especially true with AI detection. According to Turnitin, the false-positive rate for “fully human-written text” is less than 1%, but its detector can miss up to 20% of AI writing. And that is just for the very common and very dumb way people cheat with AI: writing a simple prompt and copying and pasting the answer with few changes. I have to assume detection gets even less effective when the human uses the AI well: for brainstorming, proofreading/editing, chatting with it about revision ideas, and so forth.

It’s a futile effort, especially as the AIs improve and as all of us (including our students) learn more about how to use them for more than just cheating. Which leads me to my last point:

Teachers at all levels need to learn more about AI.

Colleges and universities are certainly trying. The two talks I gave last year about AI were both faculty development events, and the attendance at both was pretty good. I know folks here at EMU have held similar events, and I get the impression this is pretty common at most colleges and universities. And faculty have heard of AI at this point, of course.

The problem is I’m not sure any of the faculty development or the oodles of news stories about AI has resulted in any difference in teaching. This is mostly just based on my own sense of things, but I did informally poll my current students (I have about 70 this semester) the other day about AI in their other classes. A few students mentioned classes where they are using AI for various assignments. A few others mentioned instructors who expressly forbid the use of AI. I asked those students if they thought the instructor had any way of enforcing that; the answer was “no.” But the majority of my students said the topic has not come up at all. That’s a problem.

I’m not saying every teacher now needs to embrace AI and incorporate it into their teaching. Not at all. Besides experimenting with AI in my teaching, I’ve been doing a lot of writing and reading about AI that is (hopefully) going to turn into a research project. I think my teaching with AI experiments are going well, but I honestly don’t know if this is something I’ll continue to do in the future. I feel the same way about AI generally: it probably is going to “change everything,” but it also might end up being another one of those things (like MOOCs, which was the subject of my last major research project) that never lives up to the hype.

What I am saying though is AI is here now and it looks like it’s going to be (probably) a big deal for some time to come. It is not just going to “go away” and it cannot be ignored. A professor or teacher can continue to refuse to engage with AI for valid ethical or personal reasons, but that is not going to stop everyone else from using it. That includes some of our students who are using AI simplistically to cheat, perhaps by feeding the teacher’s writing assignment into ChatGPT and copying/pasting whatever the AI comes up with. Fortunately, it’s pretty easy to spot that sort of AI use. But what teachers cannot easily recognize or stop is a student who uses AI more in the way that it is really meant to be used: as a tool to help/improve what humans do, not replace it.

So start learning about AI, even if you hate it. Mollick’s book is a good place to start.

A Small Example of the Jagged Frontier and Discovering What ChatGPT Can’t Do

Or, the case of the penless writing logo

One of the things about AI I find most fascinating (and I read about this in other commentaries about ChatGPT and other AIs all the time) is that it is really good at lots of things and weirdly bad at lots of other things, and that seems especially true of things humans do very well. In his book Co-Intelligence: Living and Working with AI, Ethan Mollick describes this as the “jagged technological frontier.” He describes a large study he and some colleagues did (I will probably need to read this and possibly assign it this fall), and in the abstract of that study, they write that AI creates a jagged technological frontier “where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI.” ChatGPT can easily make a pretty good looking (to me) logo, but it cannot separate writing from the word/image/concept of “pen.”

This is going to find its way into my courses this semester; as one of a series of exercises, I want to ask students to discover for themselves something the AI can’t do and, if possible, make a guess as to why. I guess this would be my example of that assignment. I had a “conversation” with ChatGPT the other day to create a logo for the Canvas site for my Digital Writing class this fall. I used the paid version, which allows me to generate lots of images (I think the free version limits users to just two a day). ChatGPT won’t let me just share the conversation, so I’ll reproduce/paraphrase it here.

I started with this prompt:

“Draw me a logo for the top of a website for a college class I’m teaching called digital writing. The class examines writing as a technology. Since the course number is 354, incorporate the number somewhere into the design. The university’s colors are green and white, but it is okay to incorporate other colors if you want.”

ChatGPT made this:

Then I asked it to make a JSON script that describes what this image looks like. I’m going to skip that in this post, in part because I don’t know enough about JSON, and in part because that’s not the point I’m trying to make. That’s maybe a future post.

Anyway, here’s an image it made with the JSON script:

And here’s another one it made when I asked it to repeat what it did before:

FWIW, this was my favorite and the one I went with— after some minor editing.

Then I wrote: “Now forget the json script. Instead, draw me a logo for the top of a website for a college class I’m teaching called digital writing. The class examines writing as a technology. Since the course number is 354, incorporate the number somewhere into the design. The university’s colors are green and white, but it is okay to incorporate other colors if you want.” Here’s what it drew:

To me, the reproduction of the logo by ChatGPT is getting a little worse in some ways. I mean, “Nourse?”

Then I wrote: “I noticed you added a pen even though that wasn’t in the prompt. There are a lot of other kinds of writing implements and tools. Why did you decide on using a fountain pen?”

ChatGPT responded: “I chose to include a fountain pen because it’s a classic symbol of writing, representing the traditional side of the writing process. The contrast between the fountain pen and digital elements (like circuits or binary code) visually communicates the course’s theme of examining writing as a technology.

“Fountain pens are often seen as timeless and elegant, bridging the gap between old and new, which fits the idea of exploring writing in both traditional and technological contexts. It also has a distinctive and recognizable shape, which makes it effective in a logo.

“If you’d prefer a different writing tool or symbol, I can certainly create a version with an alternative element!”

I wrote: “How do you think it might look if instead of a fountain pen you used a typewriter instead?”

ChatGPT responded: “Using a typewriter in the logo could create a strong visual connection to the history of writing technology, emphasizing the transition from traditional mechanical tools to digital ones. The typewriter could be depicted with keys or as a whole, evoking a sense of retro charm while still aligning with the theme of writing as technology. This could give the logo a more nostalgic feel, yet still relevant to the course’s examination of the evolution of writing tools.

“If you’d like, I can create a version of the logo with a typewriter instead of the fountain pen to see how it looks. Would you like me to do that?”

“Sure, create a version like that” I wrote back. Here’s what it made:

ChatGPT also wrote with this image “Here is the logo with a typewriter element instead of a fountain pen. This version emphasizes the traditional aspect of writing through the typewriter while maintaining a connection to digital technology. If you have any further requests or adjustments, feel free to let me know!” Oh, the mansplaining confidence of ChatGPT, trying to convince me there was no pen anymore!

I responded “That’s interesting, but there is still a pen in the logo. can you eliminate that and the three smaller versions of the logo on the left side and in the right corner?” It produced this:

And ChatGPT wrote “Here is the revised logo, with the pen removed and only the typewriter elements included. The smaller versions of the logo have also been eliminated. Let me know if this version meets your needs or if you would like any further adjustments!” Again, it seems to be glitching more on some of the words and such.

I responded “Nope, the pen is still there and there are still three smaller logos, this time across the bottom of the page. Can you get rid of the pen and the smaller logos?” ChatGPT made this:

I find the inclusion of a photo-realistic pen/mechanical pencil particularly amusing, like ChatGPT is just trying to sneak it in there. You can also see where it’s getting more confused with “W- – +—33ING” or whatever that is. I responded “The small logos are gone, but there is still a writing implement– a pen or a mechanical pencil, I can’t tell. Also, the writing for digital writing and university at the bottom of the logo shouldn’t be there. Can you eliminate these things?” It made this:

It also wrote back “This version removes the pen or pencil, as well as the additional text at the bottom of the logo. The design now focuses entirely on the typewriter and digital elements, with the course number 354 subtly incorporated. If this meets your expectations or if further adjustments are needed, just let me know!” It’s almost as if it is admitting that yeah, the pencil/pen is still there, but at least the typewriter is what is being emphasized.

Here’s the punchline for me. I asked again “Can you delete the pencil, the pen, and the word University from this logo?” ChatGPT’s response surprised me:

“It seems that the tools available to me do not allow direct editing of existing images to remove specific elements like the pencil, pen, or text. Instead, I can generate a new logo without those elements, as I attempted to do in the previous steps. However, if those elements persist in the new generation, it’s due to the constraints of the image generation process.

“If you have access to image editing software, you might be able to manually remove the undesired elements. Alternatively, I can keep trying to refine the prompt to produce a version that meets your specifications. Would you like me to attempt another version or provide instructions for editing the image manually?”

I asked “Can you tell me more in simple layman terms what you mean by the constraints of the image generation process?”

ChatGPT answered “Sure! When I generate images, I’m using a process that interprets a text description to create a visual design. The tool I use is powerful but not perfect—it tries to match the description as closely as possible, but sometimes it includes extra elements or doesn’t fully understand specific requests, like leaving out certain objects or text.

“These ‘constraints’ mean that even when I specify not to include a pen or extra text, the tool might still add them because it’s interpreting the description in a way that makes sense to it, based on patterns it has learned from millions of images.

“So, while I can describe what you want very precisely, the image might still include some unwanted elements because the generation process doesn’t always get it exactly right. This is why there might be some back-and-forth to refine the result, but it’s not always guaranteed to be perfect due to these limitations.”

If I am understanding this correctly, I think what I am running into here is a bit of that jagged frontier and the limits of the AI’s training data. Essentially, I think ChatGPT’s association between images of writing and pens/pencils is so strong that it can’t make a picture of writing that does not include a pen or a pencil.

When you think about it for a second, this isn’t too surprising. Try doing a Google image search for just the word “writing” to see what I mean. Here’s a screenshot of what I came up with from such a search:

It’s no wonder that ChatGPT thinks the idea/word/concept “writing” goes with a pen or pencil, because the rest of the data it’s been trained on does the same thing. Which is to say, I think when you ask a human to create an image of the process of writing, they default to a pen, too.

Though I will say ChatGPT gave some good advice for me, the human in this loop: why don’t you just use some image editing software and fix it yourself?