Marc Watkins is right; my flavor of AI skepticism

A “Paying Attention to AI” Substack post…

The other day, I read Marc Watkins's excellent Substack post "AI Is Unavoidable, Not Inevitable," and I would strongly encourage you to take a moment to do the same. Watkins begins by noting that he is "seeing a greater siloing among folks who situate themselves in camps adopting or refusing AI." What follows is not exactly a direct response to these refusing folks, but it's pretty close, and I find myself agreeing with Watkins entirely. As he says, "To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of 'it's very complicated.'"

Like I said, you really should read the whole thing. But I will share this long quote that is so on point:

Many of us have wanted to take a path of actively resisting generative AI’s influence on our teaching and our students. The reasons for doing so are legion—environmental, energy, economic, privacy, and loss of skills, but the one that continually pops up is not wanting to participate in something many of us fundamentally find unethical and repulsive. These arguments are valid and make us feel like we have agency—that we can take an active stance on the changing landscape of our world. Such arguments also harken back to the liberal tradition of resisting oppression, protesting what we believe to be unjust, and taking radical action as a response.

But I do not believe we can resist something we don’t fully understand. Reading articles about generative AI or trying ChatGPT a few times isn’t enough to gauge GenAI’s impact on our existing skills. Nor is it enough to rethink student assessments or revise curriculum to try and keep pace with an ever-changing suite of features.

To meaningfully practice resistance of AI or any technology requires engagement. As I’ve written previously, engaging AI doesn’t mean adopting it. Refusing a technology is a radical action and we should consider what that path genuinely looks like when the technology you despise is already intertwined with the technology you use each day in our very digital, very online world.

Exactly. Teachers of all sorts, but especially those of us who are also researchers and scholars, need to engage with AI well enough to know what we are either embracing or refusing. Refusing without that engagement is, at best, willful ignorance.

AI is difficult to compare to previous technologies (as Watkins says, AI defies analogies), but I do think the emergence of AI now is kind of like the emergence of computers and the internet as tools for writing a couple of decades ago. A pre-internet teacher could still refuse that technology by insisting students take notes by hand, hand in handwritten papers, and take proctored timed exams completed on paper forms. When I started at EMU in 1998, I still had a few very senior colleagues who taught like this, who never touched their ancient office computers, who refused to use email, etc. But try as they might, that pre-internet teacher who required their students to hand in handwritten papers did not make computers and the internet disappear from the world.

It's not quite the same now with AI as it was with the internet back then because I don't think we are at the point where we can assume "everyone" routinely uses AI tools all the time. This is why I for one am quite happy that most universities have not rolled out institutional policies on AI use in teaching and scholarship– it's still too early for that. I've been experimenting with incorporating AI into my teaching for all kinds of different reasons, but I understand and respect the choices of my colleagues to not allow their students to use AI. The problem though is refusing AI does not make it disappear out of the students' lives outside of the class– or even within that class. After all, if someone uses AI as a tool effectively– not to just crudely cheat, but to help learn the subject or as a tool to help with the writing– there is no way for that AI-forbidding professor to tell.

Again, engaging with AI (or any other technology) does not mean embracing, using, or otherwise “liking” AI (or any other technology). I spent the better part of the 2010s studying and publishing about MOOCs, and among many other things, I learned that there are some things MOOCs can do well and some things they cannot. But I never thought of my blogging and scholarship as endorsing MOOCs, certainly not as a valid replacement for in-person or “traditional” online courses.

I think that’s the point Watkins is trying to make, and for me, that’s what academics do: we’re skeptics, especially of things based on wild and largely unsubstantiated claims. As Watkins writes, “… what better way to sell a product than to convince people it can lead to both your salvation and your utter destruction? The utopia/ dystopia narratives are just two sides of a single fabulist coin we all carry around with us in our pockets about AI.”

This is perhaps a bad transition, but thinking about this reminded me of Benjamin Riley’s Substack post back in December, “Who and What comprise AI Skepticism?” This is one of those “read it if you want to get into the weeds” sorts of posts, but the very short version: Casey Newton, who is a well-known technology journalist, wrote about how he thought there are only two camps of AI Skepticism: AI is real and dangerous, and AI is fake and sucks. Well, A LOT of prominent AI experts and writers disputed Newton’s argument, including Riley. What Riley does in his post is describe/create his own taxonomy of nine different categories of AI Skepticism, including one category he calls the “Sociocultural Commentator Critics– ‘the neo-Luddite wing,'” which would include AI refusers.

Go and check it out to see the whole list, but I would describe my skepticism as being most like the "AI in Education Skeptics" and the "Technical AI Skeptics" categories, along with a touch of the "Skeptics of AI Art and Literature" category. Riley says AI in Education Skeptics are "wary of yet another ed-tech phenomena that over-hypes and under-delivers on its promises." I think we all felt the same wariness of ed-tech and over-hype with MOOCs.

Riley’s Technical AI Skeptics are science-types, but what I identify with is exploring and exposing AI’s limitations. AI failures are at least as interesting to me as AI successes, and it makes me question all of these claims about AI passing various tests or whatever. AI can do no wrong in controlled experiments much in the same way that self-driving cars do just fine on a closed course in clear weather. But just like that car doesn’t do so great driving itself through a construction zone or a snowstorm, AI isn’t nearly as capable outside of the lab.

And I say a touch of the Skeptics of AI Art and Literature because while I don't have a problem with people using AI to make art or to write things, I do think that "there is something essential to being human, to being alive, that we express through art and writing." Actually, this is one of my sources of "cautious optimism" about AI: since it isn't that good at doing the kind of human things we teach directly and indirectly in the humanities, maybe there's a future in these disciplines after all.

I'll add two other reasons why I'm skeptical about AI. First, I wonder about the business model. While this is not exactly my area of expertise, I keep reading pieces by people who do know what they're talking about raising the same questions about where the "return on investment" is going to come from. The emergence of DeepSeek is less about its technical capabilities and more about further disrupting those business plans.

Second, I am skeptical about how disruptive AI is going to be in education. It’s fun and easy to talk with AI chatbots, and they can be helpful for some parts of the writing process, especially when it comes to brainstorming, feedback on a draft, proofreading, and so forth. There might be some promise that today’s AI will enable useful computer-assisted instruction tools that go beyond “drill and kill” applications from the 1980s. And assuming AI continues to develop and mature into a truly general-purpose technology (like electricity, automobiles, the internet, etc.), of course, it will change how everything works, including education. But besides the fact that I don’t think AI is going to ever be good enough to replace the presence of humans in the loop, I don’t think anyone is comfortable with an AI replacing a human teacher (or, for that matter, human physicians, airline pilots, lawyers, etc.).

If there is going to be an ROI opportunity from the trillion dollars these companies have sunk into this stuff, it ain’t going to come from students using AI for school work or from people noodling around with it for fun. The real potential with AI is in research, businesses, and industries that work with enormous data sets and in handling complex but routine tasks: coding, logistics, marketing, finance, research into the discovery of new proteins or novel building materials, and anything involving making predictions based on a large database.

Of course, the fun (and scary and daunting!) part of researching AI and predicting its future is that everyone is probably mostly wrong, but some of us might have a chance of being right.

Zepbound, One Year Later (and related thoughts)

My one-year Zepbound anniversary passed a couple of weeks ago without any real notice or celebration on my part. I started the drug on January 7, 2024. I've blogged about my experiences on Zepbound a few times before in the last year, and so far, so good. Mostly.

The good news is I’ve lost about 40 pounds so far. My goal is to lose another 20 pounds, which, according to the problematic BMI scale, would just barely move me into the category of “overweight” from where I am now, which is “obese.” I know, I know, it probably doesn’t matter a whole lot if I manage to get my BMI from a 31 to a 29, but still, it’s a goal.

Anyway, I'm feeling pretty good. The last time I had blood work done as part of my yearly check-up was back in June, after I'd lost about 25 pounds. My various numbers had improved (I moved out of the "pre-diabetic" category, for example), so I'm assuming that all of that would be even better now. The main side effects I have from Zepbound are all "tummy issue" related, and I still do have a bit of that, especially for a day or two after I inject myself. But it's still not a big deal. And the stuff I wrote about before is still true: it's easier to exercise (though I haven't been "running" as much lately, now that I think about it), I find myself eating healthier (I mean besides just eating less), I'm enjoying the fact that I have had to once again buy some new clothes that fit better, and so forth.

The bad news is I’ve only lost about 6 pounds since the beginning of October. I think there are two reasons for this. First, I think it’s fair to say my main remaining food weakness is sweet things. My cravings for fatty things like a Big Mac are way down, but I still like candy. So fall and winter were rough with all the leftover Halloween candy (especially since we literally only had 3 very small kids with their parents knocking on our door out here in the new house!), with pies and just excesses at Thanksgiving, cookies and cakes and stuff at Christmas, etc.

Second, I think I’ve reached the limits of the drug’s effectiveness alone. As I wrote about back in May (after I had lost about 20 pounds), the reason why Zepbound was working for me was I just wasn’t as hungry, so I didn’t eat much between meals and when I did sit down for dinner or lunch, I ate less. So it didn’t feel like I was trying at all.

But at this point, if I'm going to lose another 20 pounds, I am going to need to try. For me, "trying" means being more in an "I'm on a diet" mindset in the sense of cutting back even more on calories, eating even better, doing more at the gym, all that kind of stuff. I think the Zepbound helps with that too. Besides quieting the so-called "food noise," it also helps me to better recognize when I'm eating just to eat, versus eating when I'm actually hungry. One of the ways it does this is that if I do find myself hungry nowadays, it's almost certainly because I actually do need to eat something.

But enough about only me. What else is in the news about GLP-1 drugs and Zepbound and the like? Here are a few articles that struck me as interesting in recent months.

  • From something called The List comes “Elon Musk’s Holiday X Post Surely Got Under RFK Jr.s Skin (& Caused Trouble for Trump.)” Apparently, Elon has lost a bunch of weight from these drugs too. Among other things, Musk posted on X “Nothing would do more to improve the health, lifespan and quality of life for Americans than making GLP inhibitors super low cost to the public. Nothing else is even close.”

    RFK Jr. is no fan of these drugs at all, and he’s quoted in this article (from other sources) saying “If we just gave good food, three meals a day, to every man, woman and child in our country, we could solve the obesity and diabetes epidemic overnight.”

    Funny enough, I think both of these fascist meatbags full of shit are correct. As I'll get to next, these drugs have all kinds of benefits, including a lot of things well beyond weight loss. The two main barriers for making them more available are the injectable format and the high costs. And no question Kennedy has a point too: good food isn't going to solve these problems "overnight" like he claims, but I get his point. But for me (and I'd bet 99% of GLP-1 users), it's not an either/or thing– the drugs help me eat better.
  • There were several MSM articles about a study that was published in Nature Medicine called “Mapping the effectiveness and risks of GLP-1 receptor agonists.” That link to Nature Medicine only works at all if I access it through the EMU library, so your results will vary. Anyway, the study used the US Department of Veterans Affairs database to study hundreds of thousands of patients who had used these drugs, primarily for diabetes. As Time summed it up, patients taking “GLP-1 medications had a lower risk of a number of health conditions, including Alzheimer’s disease and dementia, addiction, seizures, blood-clotting problems, heart conditions, and infectious diseases, compared to people taking the other types of diabetes treatments. The people taking the GLP-1 drugs also had increased GI-related issues, low blood pressure, and arthritis, as well as certain kidney conditions and pancreatitis—most of which are already known side effects of the medications.”
  • I have a news alert for Zepbound, and I see a lot of articles like this one: "The Best Obesity Drugs Aren't Even Here Yet," from Gizmodo. Take that with a smaller piece of cake (if you will), but the success of Ozempic and other drugs like it has fueled a bit of a gold rush in research. Soon there are going to be versions of these drugs that are more effective, and, with any luck, once they are available in pill form, versions that will be a lot cheaper.
  • And last, I guess Oprah got into a bit of trouble the other day. From Page Six, “Oprah Winfrey faces backlash for making bold claim about ‘thin people’ after taking weight-loss drug.” Read the whole thing, but I guess you can see the “bold claim” in this snippet on Instagram:

I mean, I am not in the business of defending Oprah, especially since she originally denied that she was taking these drugs to lose weight. And I've never been a skinny person, and of course people end up being skinny (or fat) for all kinds of different reasons. But I have had conversations similar to this with skinny (or not overweight at least) people, and I think what Oprah is saying here is right– at least for about half of the thinner/very in shape people I know well. One very skinny guy I know told me one time he has to remind himself to eat some days, and I assure you that has never been a problem for me.

But I will say there is one other category of skinny/very fit people I have known over the years: the person who got a serious medical wake-up call. I’m talking about having a doctor say if you don’t make some seriously big changes in diet and exercise, you’re gonna die a lot sooner than you should. I think this category is much smaller than the category of “I never feel that hungry,” but I can see why these people might not like Oprah implying they don’t have will power or “work” at it.

A New Substack About My AI Research: “Paying Attention to AI”

As I wrote about earlier in December, I am “Back to Blogging Again” after experimenting with shifting everything to Substack. I switched back to blogging because I still get a lot more traffic on this site than on Substack, and because my blogging habits are too eclectic and random to be what I think of as a Newsletter. I realize this isn’t true for lots of Substackers, but to me, a Newsletter should be about a more specific “topic” than a blog, and it should be published on a more regular schedule.

So that's my goal with "Paying Attention to AI." We'll see how it works out. I still want to post those Substack things here too– because this is a platform I control, unlike any of the other ones owned by tech oligarchs or whatever, and because while I do like Substack, there is still the "Nazi problem" they are trying to work out. Besides, while Substack could be bought out and turned into a dumpster fire (lookin' at you, X), no one is going to buy stevendkrause.com– and that's even if I were selling.

Anyway, here’s the first post on that new Substack space.

Welcome to (working title) Paying Attention to AI

More Notes on Late 20th Century Composition, CAI, Word Processing, the Internet, and AI

My goal for this Substack site/newsletter/etc. is to write (mostly to myself) about what will probably be the last big research/scholarly project of my academic career, but I still don't have a good title. I'm currently thinking "Paying Attention to AI," a reference to Cynthia Selfe's "Technology and Literacy: A Story about the Perils of Not Paying Attention," which was her chair's address at the 1997 Conference on College Composition and Communication before it was republished in the journal for the CCCs in 1999 and also expanded into the book Technology and Literacy in the Twenty-First Century.

But I also thought something mentioning AI, Composition, and "More Notes" would be good. That's a reference to "A Note on Composition and Artificial Intelligence," a brief 1983 article by Hugh Burns in the first newsletter issue of what would become the journal Computers and Composition. AI meant something quite different in the late 1970s/early 1980s, of course. Burns was writing then about how research in natural language processing and AI could help improve Computer Assisted Instruction (CAI) programs, which were then seen as one of the primary uses of computer technology in the teaching of writing— along with the new and increasingly popular word processing programs that ran on those newly emerging personal computers.

Maybe I’ll figure out a way to combine the two into one title…

This project is based on a proposal that’s been accepted for the 2025 CCCCs in Baltimore, and also a proposal I have submitted at EMU for a research leave or a sabbatical for the 2025-26 school year. 1 I’m interested in looking back at the (relatively) recent history of the beginnings of the widespread use of “computers” (CAI, personal computers, word processors and spell/grammar checkers, local area networks, and the beginnings of “the internet”).

Burns' and Selfe's articles make nice bookends for this era for me because between the late 1970s and about the mid-1990s, there were hundreds of presentations and articles in major publications in writing studies and English about the role of personal computers and (later) the internet and the teaching of writing. Burns was enthusiastic about the potential of AI research and writing instruction, calling for teachers to use emerging CAI and other tools. It was still largely a theory though, since in 1983, fewer than 8% of households had a personal computer. By the time Selfe was speaking and then writing 13 or so years later, over 36% of households had at least one computer, and the internet and the "World Wide Web" were rapidly taking their place as a general purpose technology altering the ways we do nearly everything, including how we teach and practice writing.

These are also good bookends for my own history as a student, a teacher, and a scholar, not to mention as a writer who dabbled a lot with computers for a long time. I first wrote with computers in the early 1980s while in high school. I started college in 1984 with a typewriter and I got a Macintosh 512KE by about 1986. I was introduced to the idea of teaching writing in a lab of terminals— not PCs— connected to a mainframe Unix computer when I started my MFA program at Virginia Commonwealth University in fiction writing in 1988. (I never taught in that lab, fwiw). In the mid-90s and while in my PhD program at Bowling Green State University, the internet and "the web" came along, first as text (remember Gopher and Lynx?) and then as GUI interfaces like Netscape. By the time Selfe was urging the English teachers attending the CCCCs to, well, pay attention to technology, I had started my first tenure-track job.

A lot of the things I read about AI right now (mostly on social media and MSM, but also in more scholarly work) have a tinge of the exuberant enthusiasm and/or the moral panic about the encroachment of computer technology back then, and that interests me a great deal. But at the same time, this is a different moment in lots of small and large ways. For one thing, while CAI applications never really caught on for teaching writing (at least beyond middle school), AI shows some real promise in making similar tutoring tools actually work. Of course, there were also a lot of other technologies and tools way back when that had their moments but then faded away. Remember MOOs/MUDs? Listservs? Blogs? And more recently, MOOCs?

So we’ll see where this goes.

1 FWIW: in an effort to make it kinda/sorta fit the conference theme, this presentation is awkwardly titled "Echoes of the Past: Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction." This will almost certainly be the last time I attend the CCCCs, my field's annual flagship conference, because, as I am sure I will write about eventually, I think it has become a shit show. And whether or not this project continues much past the April 2025 conference will depend heavily on the research release time from EMU. Fingers crossed on that.

The Year 2024 was A LOT

This past year was A LOT for me and the rest of my family. So so SO much happened, so much of it horrible and still difficult to comprehend, so much of it fantastic and beautiful. I suppose this “the worst of times/the best of times” sentiment is always kinda true, but I can’t think of another year where there was just so so much and in such extremes.

It’s been a lot. It’s been way too much for one year.

January

We were already well underway with one of the big ticket items of this year, which is building/buying/selling houses and moving for the first time in over 25 years.

On January 7, I started taking Zepbound, which is one of these weight loss drugs in the same category as the one everyone has heard of, Ozempic (though, as I wrote about during the year, it's more complicated than that).

Otherwise, it was mostly the start of the winter term with work (it was the semester of all freshman composition for me), weather, watching some cheesy movies here and there.

February

My niece Emily got married in a huge and very Catholic ceremony in Kansas City. This was the first of the nieces/nephews (or cousins or grandchildren, depending on your perspective) to get married, so a big deal for the Krauses. Remarkably, there were no hitches with the weather or anything else.

The idea of moving started to get a lot more real when we were able to do a walk-through of the house right after they did the inspection of the stuff they needed to do before they put up drywall.

Of course, we (mostly me) had been driving by the construction site since November to see the progress, but walking around in what would become (in the order of these pictures) the upstairs/Steve loft area, stairs descending in the living room/main room and kitchen area was pretty cool. The Zepbound adventures continued (I was down about 7 pounds by the end of the month) as did the all first-year writing semester.

March

We started getting real about selling the old house and preparing the move to the new one, and because we had lived in our previous house in Normal Park for 25 years, it was stressful. I mean, we had decades worth of stuff to sort through– pack, sell, toss– and there was all the decluttering and the nervousness of would it sell and would we get what we were asking and all that. It's kind of funny because everyone we talked to about this stuff– including my parents and in-laws– had all moved at least once (and usually twice) in the 25 years when we hadn't thought about it at all.

It’s funny to think about too because Annette grew up as an Air Force brat and her father was in for over 20 years, meaning she moved more than a dozen times before she was 15. I didn’t move that much as a kid, but we did move a couple of times, and in college and through my MFA program, I moved almost every year. So we used to know how to move.

School continued, my adventures with Zepbound continued and I complained about Oprah, I kept messing around with AI, kept teaching, etc., and I turned 58, too.

April

April was the beginning of the “A LOT,” the far too much of the year. We had two open houses on the first Sunday of the month, and then on April 8, Annette and I cleared out to make room for potential buyers to come take a second look while we went to the eclipse. We met our friends Steve and Michelle and their daughter down in Whitehouse, Ohio (just outside of Toledo), which seemed like the easiest place to get to for the totality while avoiding bumper-to-bumper traffic into the “totality zone” in northern Ohio.

As I wrote on Instagram, being there for the totality was intense. I probably won’t be able to see another total eclipse in my lifetime; then again, a cruise in August 2027 in the Mediterranean is not impossible.

We had a second open house, which was nerve-wracking. Remember, we had not had anything to do with selling and buying a house in forever, and everyone told us we'd get an offer immediately, so when that didn't happen, we started contemplating scenarios about how we could swing paying for the new house without money from the sale of the old house and all of that. Well, another open house and we got an offer and everything worked out– eventually.

And the end of April was when Bill died, suddenly and just a few days after a group of us got together for dinner. That's at the top of my list of the horrible and difficult to comprehend. It still doesn't feel real to me, and I think about Bill almost every day.

May

MSU had a quite large memorial for Bill in early May that we were able to attend– Will flew back too. There had to be at least 500 people at it, and it was as celebratory about a remarkable life as it could be. I wrote about some of this in early May here, though this is as much about my own thoughts of mortality as anything else. Like I said, this year has been a lot, and this was the horrible part.

And in mid-May, we closed on both houses, pretty much on the same day. We went to a title office in Ann Arbor and met the guy who bought our house for the first time, and without going into a lot of details, I feel pretty confident that he and his partner (who was there via FaceTime) are a great fit, ready for the adventures and challenges of fixing up the place and making it their own. That was the selling part. The buying part of the new house we were able to do electronically, and weirdly and quite literally while we were running errands after the closing for the house we were selling, we received a number of emails asking us to electronically sign some forms, and boom, we bought the new house too.

It was and still is kind of bittersweet, leaving the old place and the old neighborhood. It was time to move on, and the longer we are in the new place, the fewer regrets I have. Still, when you live someplace for 25 years, that place becomes more than just housing, and that is especially true when it is in such a great neighborhood. I still drive through the old neighborhood and past the old house about once a week on my way to or from EMU.

Five months after starting Zepbound, I finally got to the full dose of the meds and I was down about 20 pounds.

June

A lot of the last part of May and the first part of June was a complete daze of moving. We decided that the way we'd move was to start taking stuff over a carload at a time (and I did most of the heavy lifting, mostly because Annette was teaching a summer class) and then hire movers for the big stuff later. I remember talking with my father about this approach to moving, and his joke was it's sort of like getting hit in the nuts fairly gently every day for a month, or getting hit once really hard. When we move again (no idea when that will be), I think the smarter move would be to do it all at once, but I don't think there's any escaping what Annette and I had erased from our memories after staying put so long: moving sucks.

Also in June: we celebrated our 30th wedding anniversary. Well, sort of. Before we started getting serious about buying a new house, the original plan was to go on a big European adventure that sort of retraced the trip we took for our honeymoon, but we decided to give each other a house instead. The 31st wedding anniversary trip to Europe is coming this spring.

As part of the house closing deal, we were able to be in the old house through the first weekend in June, and we had one last Normal Park hurrah by selling lots and lots of stuff in the annual neighborhood big yard sale event. I went one last time on June 10 to mow the lawn, double-check to make sure everything was cleaned up, and to do one last terror selfie.

July

The new house– the cost of it of course, but also just settling into it and all– meant we didn't travel anyplace this summer for the first time in I don't know how many years. I missed going up north, and we might not be able to do that again this coming year either. And we watched the shitshow that was the presidential election tick by. But there was golf, there was more AI stuff, hanging out with friends, going to art fairs in Plymouth and Ann Arbor, seeing movies and hanging out. Annette went to visit her side of the family in late July, leaving me to fly solo for a few days, and her parents came back with her to stay in the new place for a while, our first house guests.

August

The in-laws visited, we went for a lovely little overnight stay in Detroit, played some golf, started getting ready for teaching, and I wrote a fair amount about AI here and in a Substack space I switched to in August. The switching back happened later. Started feeling optimistic about Kamala's chances…. Oh, and my son defended his dissertation and is now Dr. William Steven Wannamaker Krause (but still Will to me).

September

By September 5, when I wrote this post about both weight loss and Johann Hari's book about Ozempic called Magic Pill, I was down about 35 pounds from Zepbound. The semester was underway with a lot of AI things in all three classes. There was a touch of Covid– Annette tested positive, I don't think I ever did, but I felt not great. My parents visited at the end of September, and of course they too liked the new house.

October

The month started with a joint 60th birthday party for Annette and our friend Steve Benninghoff– they both turned 60 a few months apart. It was the first big party we had here at the new house. During EMU's new tradition of a "Fall break," we went to New York City. We met up with Will and his girlfriend and went to the Natural History Museum (pretty cool), went with them to see the very funny and silly Oh, Mary! Annette and I also went to see the excellent play Stereophonic and met up with old friends Troy and Lisa, and also Annette at an old school Italian restaurant that apparently Frank Sinatra used to like a lot. Rachel and Colin came by for dinner when they were in town too. And of course school/work, too.

November

We started by going to see Steve Martin and Martin Short at the Fox Theater in Detroit— great and fun show. Then, of course, there was the fucking election, another bit of horrible for the year. More Substack writing about AI and just being busy with work– the travels and events of October really put me behind with school, and I felt like I spent the last 6 or so weeks of the semester just barely catching up on it all. Will and his girlfriend came out here before Thanksgiving and she flew back home to be with her family. Meanwhile we made our annual trip to Iowa for Thanksgiving/Christmas. A good time that featured some taco pizza the day after the turkey, and happily, very very little discussion of politics.

December

The semester ended more quickly than usual, just a week after Thanksgiving rather than two. I was pretty pleased with the way the semester turned out overall; I definitely learned a lot more about what to do (and not do) with AI in teaching, and I hope my students got something out of it all too.

I ended up switching back to blogging but not quite giving up on Substack, as I talked about in this post. One of my goals for winter 2025 is to start a more focused Substack newsletter on my next (and likely last) academic research project on the history of AI, Computer Aided Instruction, and early uses of word processors in writing pedagogy from the late 70s until the early 90s. Stay tuned for that.

Oh, and the niece who was the first of the cousins to get married? She was also the first to have a baby, in early December– thus the first great-grandchild in the family.

There was much baking (in November too), and some decorating and some foggy pictures of the woods. Will and his girlfriend returned (I think Will has been back here more in the last couple of months than he has been in quite a while) and we took a trip to the Detroit Institute of Arts before they left for California to see her family. Will came back here, we made the annual trip to Naples, Florida to see the in-laws, and now here we are.

Like I said, it’s been a lot, and a whole lot of it is bad. I worry about Trump. I miss Bill terribly. He touched a lot of people in his life and so I know I’m not alone on that one.

But I’m also oddly hopeful for what’s to come next. The more we are in the new house, the more it is home. The Zepbound adventure continues (I’m down about 40 pounds from last January), I’m hopeful for Will as he starts a new gig as a post-doc researcher, I’m looking forward to the new term, and I’m looking forward to all that is coming in the new year.

Six Things I Learned After a Semester of Lots of AI

Two years ago (plus about a week!), I wrote about how “AI Can Save Writing by Killing ‘The College Essay,'” meaning that if AI can be used to respond to bad writing assignments, maybe teachers will focus more on teaching writing as a process the way that scholars in writing studies have been talking about for over 50 years. That means an emphasis on “showing your work” through a series of scaffolded assignments, peer review activities, opportunities for revision, and so forth.

This past semester, I decided to really lean into AI in my classes. I taught two sections of first-year writing where the general research topic for everyone was “your career goals and AI,” and where I allowed (even encouraged) the use of AI under specific circumstances. I also taught an advanced class for majors called “Digital Writing” where the last two assignments were all about trying to use AI to “create” or “compose” “texts” (the scare quotes are intentional there). I’ve been blogging/substacking about this quite a bit since summer and there are more details I’m not getting to here because it’s likely to be part of a scholarly project in the near future.

But since the fall semester is done and I have popped the (metaphorical) celebratory bottle of bubbly, I thought I’d write a little bit about some of the big-picture lessons about teaching writing with (and against) AI I learned this semester.

Teachers can “refuse” or “resist” or “deny” AI all they want, but they should not ignore it.

As far as I can tell from talking with my students, most of my colleagues did not address AI in their classes at all. A few students reported that they did discuss and use AI in some of their other classes. I had several students in first-year writing who were interior design majors and who were all taking a course where the instructor introduced them to AI design tools– sounded like an interesting class. I had a couple of students tell me an instructor "forbid" the use of AI but with no explanation of what that meant. Most students told me the teacher never brought up the topic of AI at all.

Look, you can love AI and think it is going to completely transform learning and education, or you can hate AI all you want and wish it had never been invented and do all you can to break that AI machine with your Great Enoch sledgehammers. But ignoring it or wishing it away is ridiculous.

For my first-year writing students, most of whom readily admitted they used AI a lot in high school to do things that were probably cheating, I spent some time explaining how they could and could not use AI. I did so in part to teach about how I think AI can be a useful tool as part of the process of writing, but I also did this to establish my credibility. I think a lot of students end up cheating with AI because they think that the teacher is clueless about it– and I think a lot of times, students are right.

You’re gonna need some specific rules and guidelines about AI– especially if you want to “refuse” or “resist” it.

I have always included on my syllabi an explicit policy about plagiarism, and this year I added language that makes it clear that copying and pasting large chunks of text from AI is cheating. I did allow and encourage first-year writing students to use AI as part of their process, and I required my advanced writing students to use AI as part of their “experiments” in that class. But I also asked students to include an “AI Use Statement” with their final drafts, one that explained what AI systems they used (and that included Grammarly), what prompts they used, how they used the AI feedback in their essay, and so forth. Because this was completely new to them (and me too), these AI Use Statements were sometimes a lot less complete and accurate than I would have preferred.

I also insisted that students write with Google Docs for each writing assignment and for all steps in the process, from the very start of the first hint of a first draft until they hand it in to me. Students need to share the doc with me so I can edit it. I take a look at the "version history" of the Google Doc, and if I suddenly see pages of clear prose magically appear in the essay, we have a discussion. That seemed to work well.

Still, some students are going to cheat with AI, and often without realizing that they're cheating.

Even with the series of scaffolded assignments and using Google Docs and all of my warnings, I did catch a few students cheating with AI in both intentional and not as intentional ways. Two of these examples were similar to old-school plagiarism. One was from a student from another country who had some cultural and language disconnections about the expectations of American higher education (to put it mildly); I think first-year writing was too advanced and this student should have been advised into an ESL class. Another was a student who was late on a short assignment and handed in an obviously AI-generated text (thanx, Google Docs!). I gave this person a stern warning and another chance, and they definitely didn’t do that again.

As I wrote about in this post about a month ago, I also had a bunch of students who followed the AI more closely than the directions for the first assignment, the Topic Proposal. This is a short essay where students write about how they came up with their topic and initial thesis for their research for the semester. Instead, a lot of students asked AI what it "thought" of their topic and thesis, and then they more or less summarized the AI responses, which were inevitably about why the thesis was correct. Imagine a mini research paper but without any research.

The problem was that wasn’t the assignment.  Rather, the assignment asked students to describe how they came up with their thesis idea: why they were interested in the topic in the first place, what kinds of other topics they considered, what sorts of brainstorming techniques they used, what their peers told them, and so forth. In other words, students tried to use the AI to tell them what they thought, and that just didn’t work. It ended up being a good teachable moment.

A lot of my students do not like AI and don’t use it that much. 

This was especially true in my more advanced writing class– where, as far as I can tell, no one used AI to blatantly cheat. For two of the three major projects of the semester, I required students to experiment with AI and then to write essays where they reflected/debriefed on their experiments while making connections to the assigned readings. Most of these students, all of whom were some flavor of an English major or writing minor, did not use AI for the reflection essays. They either felt that AI was just “wrong” in so many different ways (unethical, gross, unfair, bad for the environment, etc.), or they didn’t think the AI advice on their writing (other than some Grammarly) was all that useful for them.

This was not surprising; after all, students who major or minor in something English-related usually take pride in their writing and they don’t want to turn that over to AI. In the freshman composition classes, I had a few students who never used AI either–judging from what they told me in their AI Use statements. But a lot of students’ approaches to AI evolved as the semester went on, and by the time they were working on the larger research-driven essay where all the parts from the previous assignments come together, they said things like they asked ChatGPT for advice on “x” part of the essay, but it wasn’t useful advice so they ignored it.

But some students used AI in smart and completely undetectable ways.

This was especially true in the first year writing class. Some of the stronger writers articulated in some detail in their AI Use Statements how they used ChatGPT (and other platforms) to brainstorm, to suggest outlines for assignments, to go beyond Grammarly proofreading, to get more critical feedback on their drafts, and so forth. I did not consider this cheating at all because they weren’t getting AI to do the work for them; rather, they were getting some ideas and feedback on their work.

And here’s the thing that’s important: when a student (or anyone else) uses AI effectively and for what it’s really for, there is absolutely no way for the teacher (or any other reader) to possibly know.

The more time I have spent studying and teaching about AI, the more skeptical I have become about it. 

I think my students feel the same way, and this was especially true with the students in my advanced class who were directly studying and experimenting with many different AI platforms and tasks. The last assignment for the course asked students to use AI to do or make something that they could not have possibly done by themselves. For example, one student taught themself to play chess and was fairly successful with that– at least up to a point. Another student tried to get ChatGPT to teach them how to play the card game Euchre, though less successfully because the AI kept "cheating." Another student asked the AI to code a website, and the AI was pretty good at that. Several students tried to use AI tools to compose music; similar to me I guess, they listen to lots of music and wish they could play an instrument and/or compose songs.

What was interesting to me and I think most of my students was how quickly they typically ran into the AI's and their own limitations. Sometimes students wanted the AI to do something the AI simply could not do; for example, the problem with playing Euchre with the AI (according to the student) is it didn't keep track of what cards had already been played– thus the cheating. But the bigger problem was that without any knowledge of how to accomplish a task on their own, students found the AI to be of little use. For example, the student who used AI to code a website still had no idea at all what any of the code meant, nor did they know what to do with it to make it into a real website. Students who knew nothing about music who tried to write/create songs couldn't get very far. In other words, it was not that difficult for students to discover ways AI fails at a task, which in many ways is far more interesting than discovering what it can accomplish.

I’m also increasingly skeptical of the hype and role of AI in education, mainly because I spent most of the 2010s studying MOOCs. Remember them? They were going to be the delivery method for general education offerings everywhere, and by 2030 or 2040 or so, MOOCs were going to completely replace all but the most prestigious universities all over the world. Well, that obviously didn’t happen. But that didn’t mean the end of MOOCs; in fact, there are more people taking MOOC “courses” now than there were during the height of the MOOC “panic” around 2014. It’s just that nowadays, MOOCs are mostly for training (particularly in STEM fields), certificates, and as “edutainment” along the lines of Master Class.

I think AI is different in all kinds of ways, not the least of which is that AI is likely to be significantly more useful than a chatbot or a grammar checker. I had several first-year students this semester write about AI and their future careers in engineering, logistics, and finance, and they all had interesting evidence about both how AI is being used right now and how it will likely be used in the future. The potential of AI changing the world at least as much as another recent General Purpose Technology, "the internet," is certainly there.

Does that mean AI is going to have as great of an impact on education as the internet did? Probably, and teachers have had to make all kinds of big and small changes to how they teach things because of the internet, which was also true when writing classes first took up computers and word processing software.  But I think the fundamentals of teaching (rather than merely assigning) writing still work.

Back to Blogging Again

And with changes coming to my Substack experiments

Back in August, I announced to my vast audience of all things stevendkrause that I was going to shift my blogging practices to a Substack site. Now I’m shifting back— sort of.

There are two reasons for this.

First, while I have begun to find an audience on Substack, I still get more readers on the old blog– or at least I get a lot of hits, according to the Jetpack stats. I am assuming that the reason for this is that people stumble across 20+ years of content via Google searches and the like. The most popular post I've had on the site for the last couple of years, "AI Can Save Writing by Killing 'The College Essay,'" has had 68 hits since August, after I said I was done here. Most of my Substack posts have had fewer views. Altogether, stevendkrause.com had around 1700 hits since August; that's not a lot, but it is more than I received on Substack over the same period.

Second, and this is probably a more important reason for returning to the old blog, Substack isn’t a blogging platform. Rather, it is a newsletter platform with some interesting social media features (a place for updates ala Facebook or X or Bluesky, chat features, podcast features, etc.).  My friend and colleague Collin Brooke commented on my post announcing my shift to Substack that one of the reasons why he likes Substack emailed newsletters is he has them all going to a particular folder or something so he’s able to follow them “like an old school RSS reader.” That makes some sense from a reader’s perspective– and note to self, now that I’m nearly done with the semester, that’s something I ought to set up for my Substack subscriptions instead of just letting them clog up my inbox.

But I’m also interested in Substack as a way of growing my audience, and as far as I can tell, the most successful Substack newsletters are published regularly– some daily, some weekly, some less than that– and they are about a specific topic. My blogging habits have always been much more random than that both in terms of how often I post and what I post about. 

So here’s my plan– for now:

I’m going to post stuff here more or less whenever I can get to it/when I feel like it. For the last couple of years, I usually post a couple of times a month. Then I’ll repost/republish those posts on Substack as an “all things Krause” newsletter available in subscribers’ email and at stevendkrause.substack.com, probably around once a month. 

Eventually, maybe when I have some time over the break, maybe next summer (but honestly maybe never too), I’d like to get a little more systematic, specific, and newsletter-like on Substack. For example, I am thinking about starting a Substack newsletter about why it is a terrible idea for educators to resist/refuse/ignore AI, and about how “paying attention” to AI is not the same thing as embracing it. I’m also thinking I might create another Substack newsletter to post regularly about food things, which would be about my interests in cooking and I guess I’d say the “food biz.” That might also include more about Zepbound, which is kind of the opposite of being interested in food. 

Like I said, we’ll see. 

Is Apple Intelligence (and AI) For Dumb and Lazy People?

And the challenges of an AI world where everyone is above average

I’ve been an Apple fanboy since the early 1980s. I owned one Windoze computer years ago that was mostly for games my kid wanted to play. Otherwise, I’ve been all Apple for around 40 years. But what the heck is the deal with these ads for Apple Intelligence?

In this ad (the most annoying of the group, IMO), we see a schlub of a guy, Warren, emailing his boss in idiotic/bro-based prose. He pushes the Apple Intelligence feature and boom, his email is transformed into appropriate office prose. The boss reads the prose, is obviously impressed, and the tagline at the end is “write smarter.” Ugh.

Then there’s this one:

This guy, Lance, is in a board meeting and he’s selected to present about “the Prospectus,” which he obviously has not read. He slowly wheels his office chair and his laptop into the hallway, asks Apple’s AI to summarize the key points in this long thing he didn’t read. Then he slowly wheels back into the conference room and delivers a successful presentation. The tagline on this one? “Catch up quick.” Ugh again.

But in a way, these ads might not be too far from wrong. These probably are the kind of “less than average” office workers who could benefit the most from AI— well, up to a point, in theory.

Among many other things, my advanced writing students and I read Ethan Mollick’s Co-Intelligence, and in several different places in that book, he argues that in experiments when knowledge workers (consultants, people completing a writing task, programmers) use AI to complete tasks, they are much more productive. Further, while AI does not make already excellent workers that much better, it does help less than excellent workers improve. There’s S. Noy and W. Zhang’s Science paper “Experimental evidence on the productivity effects of generative artificial intelligence;” here’s a quote from the editor’s summary:

Will generative artificial intelligence (AI) tools such as ChatGPT disrupt the labor market by making educated professionals obsolete, or will these tools complement their skills and enhance productivity? Noy and Zhang examined this issue in an experiment that recruited college-educated professionals to complete incentivized writing tasks. Participants assigned to use ChatGPT were more productive, efficient, and enjoyed the tasks more. Participants with weaker skills benefited the most from ChatGPT, which carries policy implications for efforts to reduce productivity inequality through AI.

Then there’s S. Peng et al and their paper “The Impact of AI on Developer Productivity: Evidence from GitHub Copilot.” This was an experiment with a programming AI on Github, and the programmers who used AI completed tasks 55.8% faster. And Mollick talks a fair amount about a project he was a co-writer on, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” which found that consultants in an experiment were more productive when allowed to use AI— except when faced with a “jagged technology frontier” problem, which in the study was a technical problem beyond the AI’s abilities. However, one of the problems Mollick and his colleagues observed is that a lot of the subjects in their study often copied and pasted content from the AI with minimal editing, and the AI-using subjects had a much harder time with that jagged frontier problem. I’ll come back to this in a couple more paragraphs.

Now, Mollick is looking at AI as a business professor, so he sees this as a good thing because it improves the quality of the workforce, and maybe it’ll enable employers to hire fewer people to complete the same tasks. More productivity with less labor equals more money, capitalism for the win. But my English major students and I all see ourselves (accurately or not) as well-above-average writers, and we all take pride in that. We like the fact we’re better at writing than most other people. Many of my students are aspiring novelists, poets, English teachers, or some other career where they make money from their abilities to write and read, and they all know that publishing writing that other people read is not something that everyone can do. So the last thing any of us who are good at something want is a technology that diminishes the value of that expertise.

This is part of what is behind various declarations of late for refusing or resisting AI, of course. Part of what is motivating someone like Ted Chiang to write about how AI can't make art is that making art is what he is good at. The last thing he wants is a world where any schmuck (like those dudes in the Apple AI ads) can click a button and be as good as he is at making art. I completely understand this reason for fearing and resisting AI, and I too hope that AI doesn't someday in the future become humanity's default storyteller.

Fortunately for writers like Chiang and me and my students, the AI hype does not square with reality. I haven’t played around with Apple AI yet, but the reviews I’ve seen are underwhelming. I stumbled across a YouTube review by Marques Brownlee about the new AI that is quite thorough. I don’t know much about Brownlee, but he has over 19 million subscribers so he probably knows what he is talking about. If you’re curious, he talks about the writing feature in the first few minutes of this video, but the short version is he says that as a professional writer, he finds it useless.

The other issue I think my students and I are noticing is that the jagged frontier Mollick and his colleagues talk about— that is, the line/divide between tasks the AI can accomplish reasonably well and what it can't— is actually quite large. In describing the study he and his colleagues did, which included a jagged frontier problem specifically designed to be too difficult to do with AI, I think Mollick implies that this frontier is small. But Mollick and his colleagues— and the same is true with these other studies he quotes on this— are not studying AI in real settings. These are controlled experiments, and the researchers are trying to do all they can to eliminate other variables.

But in the more real world with lots of variables, there are jagged frontiers everywhere. The last assignment I gave in the advanced writing class asked students to attempt to "compose" or "make" something with the help of AI (a poem, a play, a song, a movie, a website, etc. etc.) that they could not do on their own. The reflection essays are not due until the last week of class, but we have had some "show and tell" exchanges about these projects. Some students were reasonably successful with making or doing something thanks to AI— and as a slight tangent: some students are better than others at prompting the AI and making it work for them. It's not just a matter of clicking a button. But they all ran into that frontier, and for a lot of students, that was essentially how their experiment ended. For example, one student was successful at getting AI to generate the code for a website, but this student didn't know what to do with the code the AI made to turn it into an actual website. A couple of students tried to use AI to write music, but since they didn't know much about music, their results were limited. One student tried to get AI to teach them how to play the card game Euchre, but the AI kept on doing things like playing cards in the student's hand.

This brings me back to those Apple ads: I wish they both went on just another minute or so. Right after Warren and Lance confidently look directly at the camera with a smug look that says to viewers “Do you see what I just got away with there?”, they would have to follow through with what they supposedly accomplished, and I have a feeling that would go poorly. Right after Warren’s boss talks with him about that email and right after Lance starts his summary, I am pretty sure they’re gonna get busted. Sort of like what has happened when I have correctly suspected that a student used too much AI and that student couldn’t answer basic questions about what it is they (supposedly) wrote.

IT’S A WITCH!

Reflecting on Melanie Dusseau’s “Burn It Down: A License for AI Resistance”

I don’t completely disagree with Melanie Dusseau’s advice in her recent Inside Higher Ed column “Burn It Down: A License for AI Resistance,” but there’s something about her over-the-top enthusiasm for “burning it down” that reminds me of this famous scene from Monty Python and the Holy Grail:

Dusseau, who is a creative writing professor at the University of Findlay, writes “Until writing studies adopted generative artificial intelligence as sound pedagogy, I always felt at home among my fellow word nerds in rhet comp and literary studies.” A bit later, she continues:

If you are tired of the drumbeat of inevitability that insists English faculty adopt AI into our teaching practices, I am here to tell you that you are allowed to object. Using an understanding of human writing as a means to allow for-profit technology companies to dismantle the imaginative practice of human writing is abhorrent and unethical. Writing faculty have both the agency and the academic freedom to examine generative AI’s dishonest training origins and conclude: There is no path to ethically teach AI skills. Not only are we allowed to say no, we ought to think deeply about the why of that no.

Then she catalogs the many many mmmmmaaaaaannnnnnyyyyyy problems of AI in prose I found engaging and intentionally funny in its alarmed tone. Dusseau writes:

Resistance is not anti-progress, and pedagogies that challenge the status quo are often the most experiential, progressive and diverse in a world of increasingly rote, Standard English, oat milk sameness. “Burn it down” is a call to action as much as it is a plea to have some fun. The robot revolution came so quickly on the heels of the pandemic that I think a lot of us forgot that teaching can be a profoundly joyful act.

AI resistance/refusal is catching on. The day after I read this article, I came across (via Facebook) a similar albeit much more academic call for resistance, “Refusing GenAI in Writing Studies: A Quickstart Guide” by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. While it also calls for the field to “refuse” AI, it’s more of an academic manifesto with a lot of citations; it’s much more nuanced and complicated, and also still a work in progress. For example, sections that are “coming soon” on their WordPress site include “What Is GenAI Refusal?” and “Practicing Refusal.” Perhaps I’ll write more specifically about this when it is closer to finished, but this post isn’t about that.

Anyway, why does “burning it down” make me think of that Monty Python scene? The peasants bring one of the knights (ChatGPT just told me it was “Sir Bedevere the Wise”— let’s hope that’s right!) a witch (or AI) to be burned at the stake. They’re screaming and enraged, wanting to burn her immediately. The knight asks why they believe she’s a witch, and the evidence the peasants offer up is flimsy. The wise knight walks them through the logic of how to test whether the woman truly is a witch: put her on the scales and see if she weighs the same as a duck, because if she does, she floats like wood, and thus she too is made of wood, and thus she will burn for being a witch. (Stick with me here— the punchline at the end has a twist.)

Like the mob, Dusseau has had enough of all these witches/AIs. She wants it gone, and for it to have never existed in the first place. But since that’s not possible, Dusseau is calling for like-minded writing teachers to refuse to engage. “To the silent, hopeless AI skeptics and Star Trek fans: resistance is not futile. We simply do not have to participate. Let Melville’s Bartleby provide the brat slogan of our license to resist: ‘I would prefer not to.’”

Now, maybe I’m just not hearing the “drumbeat of inevitability” for embracing AI to teach writing because I’m one of these people teaching a lot with/about AI this semester. But I have no idea what she’s talking about. If anything, it seems like most faculty around here have either ignored AI or banned it. Most of my students this semester have told me that AI has not come up as a topic in their other classes at all.

Before one burns it all down, it probably is a good idea to figure out what “it” is. Maybe Dusseau has already done that. Or maybe she is like a lot of my fellow academic AI resisters who don’t know much about AI and think that it is only for brute-force cheating. Maybe she knows better and is making an informed decision about resisting AI; it’s hard for me to tell.

I think her arguments for why we should refuse AI boil down to two. First, AI requires giant data centers and it takes A LOT of electricity and water to run those sites. That is completely true, and that doesn’t even get into the labor exploitation that went into training LLMs and monitoring content, the monopolistic and unregulated giant corporations that control all this, etc. All true, but look: these data centers also power EVERYTHING we do online and they have been an environmental problem for decades. So it’s not that she’s wrong, but I suspect that Dusseau isn’t thinking about refusing Facebook or Google searches anytime soon.

The second argument is that it ruins writing. Like almost every other person I’ve read making this argument, Dusseau references Ted Chiang’s New Yorker article “Why A.I. Isn’t Going to Make Art” in passing. What she doesn’t mention is that Chiang’s definition of art is really fiction writing, and that he sets the bar extremely high as to what counts as “art.” I prefer Matteo Wong’s response in The Atlantic, “Ted Chiang Is Wrong About AI Art,” but I’ll leave that debate for another time.

I think what Dusseau means by “writing” is writing that is personal, expressive, and “creative”: poetry and fiction and the like. Of course, AI is not the right tool for that. It’s not for writing a heartfelt fan letter from a child to an Olympic athlete, and Google found that out with the backlash to their “Dear Sydney” ad campaign this summer. (If you don’t know what I’m talking about, check out the great post Annette Vee wrote about this called “Why ‘just right’ is wrong: What the Gemini ad ‘Dear Sydney’ says about writing that people choose to do.”) Everyone I follow/read on AI agrees with this.

But most writing tasks are not personal, expressive, or creative, and that is particularly true for many of the writing tasks we all have to do sometimes, often reluctantly, for school or for work: routine reports, memos, forms, the kind of things we call “paperwork.” A lot of students are required to write when they would “prefer not to,” which is why students sometimes use AI to cheat on writing assignments. So yes, like Dusseau, I don’t want AI writing my journal entries, personal emails, or anything else that’s writing I choose to do, and I don’t want students to cheat. But there’s a role for AI in some of these not-chosen writing tasks that is perhaps useful and not cheating.

The other problem is that Dusseau’s own resistance is not going to stop any of her students or her colleagues from using AI. I don’t know whether AI-based writing tools are inevitably going to be a part of writing pedagogy, but I do know that AI is going to continue to be a tool that people use. I have students in all of my classes (though more of them in the class of English majors) who are AI refusers, and I think that’s really important to note here: not all students are on board with this AI stuff either. But for my students who seem to know how to use AI effectively, as something akin to a brainstorming/proofreading/tutoring tool, it seems to work pretty well. And that’s the kind of AI use that is impossible for a teacher to detect.

So to me, the counsel of the knight is best. Before we burn this AI witch, why don’t we see what we’re up against? Why don’t we research this a bit more? Why don’t we not burn it down but instead (to very generally reference Cynthia Selfe’s Technology and Literacy in the 21st Century) pay attention to it and stay on alert?

But here’s the thing: in that Monty Python scene, it turns out she is a witch.

The punchline in that scene goes by so quickly that it took me a few viewings to catch it, but the woman does weigh the same as the duck, and thus is made out of wood, and thus is a witch. The peasants were right! SHE’S A WITCH!

Because like I said at the beginning of this, I don’t completely disagree with Dusseau. I mean, I still don’t think “burn it down” is a good strategy— we gotta pay attention. But I’m also not saying that she’s wrong about her reasons for resisting AI.

My semester isn’t quite over, and I have to say I am not sure of the benefits of the up-front “here is how to use AI responsibly” approach I’ve taken this semester, particularly in freshman comp. But I do know an impassioned and spirited declaration to students about why they too should burn it all down is not going to work. If writing teachers don’t want their students to use AI in their courses, they cannot merely wish AI away. They need to learn enough to understand the basics of it, they need to explain to students why it’s a bad idea to use it (or they need to figure out when using AI might be okay), and they’re going to have to change their writing assignments to make them more AI-proof.

AI Cheating as a Teachable Moment

A Simple Example

Back to my “regular programming” with a post/update/stack/whatever these things are called that is more on brand….

My students and I have reached the part of the semester where they are mostly working on finishing the assignments, and where I’m mostly working on reading/commenting/evaluating those assignments. So busy busy busy. Anyway, as kind of an occasional break from that work, I wrote this post in bits and pieces over the last week or two about how a particular example of AI “cheating” became a “teachable moment.”

I think there’s AI CHEATING and there’s AI “cheating,” much in the same way that there is PLAGIARISM and then there’s “plagiarism.” By PLAGIARISM, I mean the version where a student hands in a piece of writing they did not compose at all. The most obvious example is when a student pays someone else to do it, perhaps from an online paper mill. I know this happens, but I don’t think I’ve ever seen it— unless it was so good I didn’t notice.

More typically, students do this cheating themselves by copying, pasting, and slightly tweaking chunks of text from websites to piece together something kind of like the paper. This is usually easy to spot, for two reasons. First, the same Google searches students use to find stuff to cheat with also work for me to find the websites and articles they used to cheat. Second, and perhaps more importantly, students only plagiarize like this when they know they’re failing and they’re desperate, so the cheating rarely comes as a surprise.

The much more common kind of “plagiarism” I see is basically accidental. A lot of students— especially first year students— do not understand what needs to be cited and what does not. Citation is both confusing and a pain in the ass, so students sometimes do not realize they needed a citation at all, or they just skip it and figure no one will notice. Fortunately, it’s easy to spot when students drop in a quote from an article without citation because of the shift in the writing: the text suddenly goes from a college freshman grappling with their prose to a polished and professional writer, often with specialized word choices and jargon. And as often as not, students do cite some of the article they’re accidentally plagiarizing, so it’s pretty easy to check.

This is a “teachable moment”: that is, one of those things that happens in a class or an assignment that becomes an opportunity to reinforce something that has already been taught. This is where I remind the student about what we already talked about: how unintentional plagiarism is still plagiarism, how this is exactly why it’s important to cite your sources correctly, and so forth. This tends to click.

Similarly, there’s AI CHEATING and then there’s AI “cheating,” and I have seen examples of both in my first year writing classes this semester. The big example of extreme AI CHEATING I’ve seen so far this semester is not that interesting because it was so textbook: a desperate, failing student clumsily and obviously used AI, I called the student out about it, the student confessed, and I gave the student the choice to fail or withdraw rather than going through the rigamarole of getting that student expelled (oh yes, that is something I could have done). Slight tangent: if catching AI cheaters is as easy and as obvious as it seems to be, what’s the problem? Conversely, if students are using AI effectively as a tool to help their process (brainstorming, study guides, summarizing complicated texts, proofreading, etc.) and if that use of AI isn’t detectable by the teacher, well, what’s the problem with that?

The AI “cheating” example from this semester was a more interesting and teachable moment. Here’s what happened:

The first assignment in my freshman comp classes is a 2-3 page essay where students explain their initial working thesis and how they came up with it. It’s a low-stakes, getting-started kind of assignment that I grade “complete/incomplete.” As I explain and remind students repeatedly, this is not an essay where they are trying to convince the reader to believe their thesis. Rather, this is an essay about the process of coming up with the working thesis in the first place. What I want students to write about is why they’re interested in their topic, what sorts of brainstorming activities they tried in order to come up with their topic, what sorts of conversations they had about this project with me and with classmates, and so forth.

This semester, the topic of research in my first year writing classes is “your career goals and AI.” I’ve spent a lot of class time explaining why I think AI is not that useful for cheating, because it just can’t do these assignments very well, but also how AI might be useful as part of the process. For example, a lot of these students really struggle with coming up with a good and researchable topic idea/thesis, and even though most of AI’s ideas for a thesis about career goals and AI aren’t great, it does help them get beyond staring at a blank page.

I’ve given a version of this assignment for a long time, and in previous, pre-AI semesters, two or three students (out of 25) would mess it up. It was usually because they didn’t understand the assignment, or because they weren’t paying attention to/didn’t do any of the prewriting exercises we discussed in class, so they tried to fake it by writing what ended up being a really short research paper without any research. I gave these students a do-over, and that usually was enough to get them back on track.

This semester, closer to half of the students in my two sections messed this up. I’m sure some of these students just didn’t get the assignment or didn’t do the prewriting activities, but what I think happened more often is that a lot of students got a little lazy and hypnotized by the smooth, mansplaining prose of AI. So instead of remembering what the assignment was about, they just took what the AI was feeding them about their working thesis ideas and tweaked that a bit.

The teachable moment? I met with the students who messed this up, reminded them what the assignment was actually supposed to be, and I pointed out that this was exactly the kind of thing that AI cannot do: it can’t help you write about what you think. At least not yet.

This was a couple weeks ago, and for most of my students, I think it clicked. I still have a number of students who are struggling and unlikely to pass for all kinds of reasons, but that’s typical for freshman comp. Some students (particularly the ones on the way to failing) are still trying to use AI for cheating, but for the most part, I think students have learned the lesson.

I ask students to include an “AI Use Statement” where they describe how they used AI, or say explicitly that they didn’t use any AI. This is a brand-new thing for both them as students and me as a teacher, so they sometimes forget, or they don’t explain their AI use as clearly as I’d like. And I am sure some students are fibbing a little about how much AI they used. But for the most part, what students are telling me is that they aren’t using AI to write at all, or they’re using Grammarly for proofreading (which I think counts as AI), asking an AI for ideas about a particular paragraph, or using it to get started or for some other kind of brainstorming suggestion.

Which makes this all a teachable moment for me as well: I think the lesson I’ve learned (or re-learned) from this is that the best way to prevent/discourage students from using AI to cheat is to get out in front of the issue. I’m not saying that all writing teachers ought to allow their students to use AI; in fact, as we’re approaching the end of the semester, I’m not sure if it is a good idea to encourage and sanction the use of AI in classes like first year writing. But I am sure that it is a very good idea for writing (and other kinds of) teachers to be up-front about AI. I think when teachers do spend some time talking about what does or doesn’t work with AI, students are less likely to use it to cheat in that class— if they use it at all.

Money, Strong Men, and Blue Dots

A break from AI & academia to talk politics

When I got up this morning and before I started writing in my journal, I looked back to the entries I wrote just before and after the 2016 election. FWIW, I write in a journal— I will not verb that into “journaling”— every morning and I have been doing so very consistently for about the last 15 years. Eight years ago, my journal entries in October/November 2016 were more brief than they’ve been lately, and I didn’t write much about the election between Hillary and Trump before election day. I think that’s because like everyone else, I thought Hillary had it in the bag. The shock of the day on November 9, 2016 (the day after the election) comes off the page, and I kept writing about the surrealness of the results for quite some time after that.

This morning, after Trump has won decisively, I am surprised but I’m not shocked. Harris seemed to be finishing strong and Trump seemed to be imploding, but I still knew Trump might win. Though it’s disturbing that the vote was this decisive.

There are lots of reasons why Trump won— immigration, Biden should have never run for a second term, the US is still not ready for a woman president, etc. I think it’s mostly about money, though. As I heard Geoff Bennett say early last night on the PBS Newshour (I’m paraphrasing here), perhaps it is a luxury for people to be concerned about the ideal of democracy when they can’t afford groceries. After all, a surprisingly large percentage of Americans would struggle to pay a $500 emergency expense. As I was driving around this morning, I heard (yes, on public radio, because I am that kind of educated liberal elite) someone pointing out that in times of high inflation, incumbents lose— Ford, Carter, and now Biden/Harris. And let’s not forget that Clinton beat Bush I because “it’s the economy, stupid.”

So people poorer than me, who couldn’t pay off a surprise $500 car repair bill (let alone something like a devastating medical bill), are so mad and desperate that they’re willing to pick someone we know will be an agent of chaos, both because that’s how he governed when he was president four years ago and because that’s what he told us he would do during the campaign. They’ve been taken in by the cult that is Trump. But rich people— I mean very very rich people, but also upper-middle class people like me who have plenty of money and safety nets to pay surprise bills, who have good jobs, who have retirement plans that have swollen thanks to a robust stock market, the kind of people who don’t pay attention to how much a loaf of bread costs— also voted for Trump because, duh, the economy, stupid. Money money money.

Let us also not forget the appeal of the strong man.

Back in spring 2019, my wife and son and I went on a guided tour/vacation to China. One of the many MANY striking things about that trip for me was seeing an authoritarian regime up close and personal on a day-to-day basis. There were cameras and checkpoints and heavily armed soldiers everywhere, especially in Beijing. My face was scanned by security guards dozens of times. Access to the western version of the Internet was blocked, and we had to use VPN software to get around it, with mixed results. Most of the programming available on television (at least in the hotels where we stayed) was state broadcasts, with a bit of clearly censored news from CNN and BBC. It wasn’t as bad as I imagine it was in Soviet-era Eastern Europe or as it is now in North Korea, but it was pretty bad.

But here’s the thing: as far as I could tell, most Chinese people were okay with this arrangement. As long as the great leader and the state enabled the poorest citizens to have food and shelter and the elites to shop for western goods (I saw stores for every luxury brand you can think of, along with almost every American fast food chain), everyone was happy— or at least satisfied. Political polls in authoritarian regimes are always sketchy, but the governments in places like China and Russia are popular.

What’s clear from this election is that a lot of Americans seem eager to give this fascism thing a try. Because while I strongly disagree with the results, it was a fair and square vote where more people opted for the wannabe strong man. This obviously makes me worry about the mindset of the majority of voters in this country right now.

I think we all know that the next four years are going to be a chaotic shitshow filled with scandals and protests and insane policy proposals and all the rest. Trump might be able to do more damage this time, sure, but we also know a lot more about how to resist and fight back. I don’t know if this will be the beginning of the end of democracy as we know it, but I know it ain’t going to be good, and the minority of us who didn’t vote for this are going to have to find ways to resist and fight.

But to speak selfishly here for a moment, at least I live in a very blue dot of a community. Just over 70% of voters in my county (Washtenaw) voted for Harris, so it’s a very blue community in a left-leaning region (67% of Wayne county/Detroit and just over half of Oakland county voters picked Harris) and in a still-purple state. We didn’t move here for politics— we moved here to work at EMU— but we’re liable to stay for politics.

And now it’s time to mourn a bit and then to join the resistance once again.