Teaching this Fall (TBA): Writing, Rhetoric, and AI

The two big things on my mind right now are finishing this semester (I am well into the major grading portion of the term in all three of my classes) and preparing for the CCCCs road trip that will begin next week. I’m sure I’ll write more on the CCCCs/road trip after I’m back.

But this morning, I thought I’d write a post about a course I’m hoping to teach this fall, “Writing, Rhetoric, and AI.” I’ve set up that page on my site with a brief description of the course– at least as I’m imagining it now. “Topics in” courses like this always begin with just a sketch of a plan, but given the twists and turns and speed of developments in AI, I’ve learned not to commit to a plan too early.

For example: the first time I tried to teach anything about AI was in a 300-level digital writing course I taught in fall 2022. I came up with an AI assignment based in part on an online presentation by Christine Photinos and Julie Wihelm for the 2023 Computers and Writing Conference, and also on Paul Fyfe’s article “How to Cheat on Your Final Paper: Assigning AI for Student Writing.” My plan at the beginning of that semester was to have students use the same AI tool these writers were talking about, OpenAI’s GPT-2. By the time we were starting to work on the AI writing assignment for that class, ChatGPT was released. So plans changed, English teachers started freaking out, etc.

Anyway, the first thing that needs to happen is the class needs to “make”– that is, get enough students to justify it running at all. But right now, I’m cautiously optimistic that it is going to happen. The course will be on Canvas and behind a firewall, but my plan for now is to eventually post assignments and readings lists and the like here. Once I figure out what we’re going to do.

Now is a Good Time to be at a “Third Tier” University

The New York Times ran an editorial a couple of weekends ago called “The Authoritarian Endgame on Higher Education,” whose first sentence was “When a political leader wants to move a democracy toward a more authoritarian form of government, he often sets out to undermine independent sources of information and accountability.” The editorial goes on to describe the hundreds of millions of dollars of cuts in grants, and while the cuts are especially large and newsworthy at Johns Hopkins ($800 million) and Columbia ($400 million), they’re happening in lots of smaller amounts at lots of research universities. Full disclosure: my son is a post-doc at Yale, and while his lab has not been severely impacted by these cuts (yet), they remain a looming problem for him and his colleagues.

The NYT’s editorial board is correct: Trump is following the playbook of other modern authoritarian leaders (Putin, Orban in Hungary, Modi in India, Erdogan in Turkey, etc.) and is trying to weaken universities. Trump and shadow president Musk are cutting off funding from the National Institutes of Health (and other similar federal agencies) to research universities not because of waste, fraud, or a desire to end DEI initiatives, and they’re destroying the rest of the federal government not because they want to save money. They’re doing it to consolidate power. They are trying to revamp the U.S. into an authoritarian system run by big tech and billionaires. I wish MSM would remind people more often that this is what is going on right now.

Then last week, Princeton President Christopher Eisgruber wrote a piece published in The Atlantic in which he insisted that now was the time for universities like Columbia to stand up to the Trump administration in the name of academic freedom. He quotes Joan Scott, a leader of the American Association of University Professors, who said “Even during the McCarthy period in the United States, this was not done.” The day after The Atlantic ran Eisgruber’s column, Columbia more or less caved in and appeared ready to give Trump what he wanted.

And of course, Trump signed an executive order to close down the Department of Education– which is not something that Trump can do without Congress, but never mind the details of the law.

This is all very bad for all kinds of reasons that go well beyond the impact on these institutions. The money at stake is grant funding from agencies like the National Institutes of Health to support research, typically the kind of basic research that the private sector doesn’t do– but of course, research that the private sector profits from greatly. Just about every medical breakthrough you can think of over the last 75 years has been a result of this partnership between the feds and research universities, but to use one example close to my own heart (and the rest of my body) right now: take Zepbound. One of the origins of these current weight loss drugs was basic research the NIH and other federal agencies funded back in the 80s and 90s on the venom of Gila monsters, the kind of research MSM and politicians frequently mock– “why are we spending so much money to research lizards?” Because that’s where discoveries are made that eventually lead us to all sorts of surprising benefits.

But there is one detail about the way this story is being reported that bothers me. MSM puts all universities into the same bucket when the reality is much more complicated than that. The universities most impacted by Trump’s actions are very different kinds of institutions than the ones where I’ve spent my career.

In my book about MOOCs (More Than A Moment), I wrote a bit about the disparity between different tiers of universities, and how MOOCs (potentially) made the distance between higher ed’s haves and have-nots even greater. I frequently referenced the book A Perfect Mess: The Unlikely Ascendancy of American Higher Education by David F. Labaree. If you too are interested in the history of higher education (and who isn’t?), I’d highly recommend it. Among other things, Labaree describes the unofficial but well-understood hierarchy of different institutions. At the bottom fourth tier of this pyramid are community colleges, and I would also add proprietary schools and largely online universities. Roughly speaking, there are about 1,000 schools in this category. Labaree says that the third tier consists of universities that mostly began as “normal schools” in the 19th century, though I would add into that tier lots of small/private/often religious/not elite colleges, along with most other regional institutions. There are probably close to 1,500 institutions in this category, and I think it’s fair to say most four-year colleges and universities in the US are in this group. EMU, which began as the Michigan State Normal School, is smack-dab in the middle of this tier.

The second tier and top tier are probably easiest for most non-academic types to understand because these are the only kinds of places that MSM routinely reports on as being “higher education.” Roughly speaking, these two tiers comprise about the top 150 or so national universities on the US News and World Report Rankings of Universities, with the top fifty or so in those rankings being the tippy-top first tier. By the way, EMU is “tied” as the 377th school on the list.

Now, those universities at the tippy-top that receive a lot of NIH and other federal grants– Columbia, Johns Hopkins, Michigan, Yale, etc.– have a serious problem because those grants are a major revenue stream. But for the rest of us in higher ed, especially on the third tier? Well, I was in a meeting just the other day where one of my colleagues asked an administrator when EMU could expect to see a cut in federal funding. This administrator, who seemed a little surprised at the question, pointed out that about 25% of our funding comes from state appropriations, and the rest of it comes from tuition. The amount of direct federal funding we receive is negligible.

And herein lies the Trump administration’s challenge in taking over education in this country, thankfully. Unlike most other countries in the world where schooling is more centralized, public education in the United States is quite decentralized and is mostly controlled by states and localities. As this piece from Inside Higher Ed reminds us, the main role of the federal government in higher education (besides collecting data about higher education nationwide, working with accreditors, and overseeing students’ civil rights) is to run the student loan and Pell Grant programs. The Trump administration has repeatedly said they want these programs to continue even if they are successful at eliminating the Department of Education. Not that I completely believe that– Trump/Musk might want to cut Pell Grants, and they are trying to roll back Biden’s moves on loan forgiveness. But given how many students (and their parents) depend on these programs, including MAGA voters, I don’t see these programs going away.

In other words, now is a good time to be at a third-tier university.

Now, that New York Times editorial does have one paragraph where they acknowledge this difference between the haves and have-nots:

We understand why many Americans don’t trust higher education and feel they have little stake in it. Elite universities can come off as privileged playgrounds for young people seeking advantages only for themselves. Less elite schools, including community colleges, often have high dropout rates, leaving their students with the onerous combination of debt and no degree. Throughout higher education, faculty members can seem out of touch, with political views that skew far to the left.

I don’t know how much Americans do or don’t “trust” higher education, but the main reason why EMU and similar universities have a much higher dropout rate is that we admit students more selective universities don’t. I don’t remember the details, but I heard a story years ago about an administrator in charge of admissions at EMU. When he was asked why our graduation rate is around 50% while the University of Michigan’s rate is more like 93%, he responded, “Why isn’t U of M’s graduation rate 100%? They only admit students they know will graduate.” In contrast, EMU (and most other universities in the third tier) takes a lot of chances and admits almost everyone who applies.

I’m biased of course, but I think a more accurate way to frame the role of third-tier/regional universities is as institutions of opportunity. We give folks a chance at a college degree who otherwise would have few options. We aren’t a school that helps upper-middle-class kids stay that way. We’re a school that helps working class/working poor students improve their lives, to be one of the first (if not the first) people in their families to graduate from college. Sure, a lot of the students we admit don’t make it for all kinds of different reasons. But I think the benefits we provide to the ones who succeed in graduating outweigh the problems of admitting students who are just not prepared to go to college. Though I’ll admit it’s a close call.

Anyway, I don’t know what those of us working on the lower levels of the pyramid can do to help those at the top, if there’s anything we can do. That’s the frustration of everyone against Trump right now, right? What can we do?

Cancún, Winter Break 2025

A few months ago, we had no plans for Winter (aka Spring) Break. I had suggested to Annette (who is the one who manages the finances in our household, and for good reason) that maybe it’d be nice to at least get out of town for a long weekend to someplace warmer. Wisely, Annette pointed out that we just bought a new house and we are going on a big trip to Europe this summer, so no, we don’t have the money. Okay, fine.

Then we got a check from the IRS for $2500 because (we think) it turns out we were eligible for COVID relief money from the feds we never claimed. Thanks, Biden. “C’mon, found money!” I said and Annette could not disagree.

We considered a couple options, but we landed on Cancún for two reasons. First, we’ve talked for years about checking out an “all-inclusive” resort option. We’ve been on four cruises now, and I for one am undecided about them: there’s stuff I like, there’s stuff I don’t like. But we talked about how an all-inclusive resort might be interesting to try because we imagined it to be like a cruise that didn’t go anywhere. Second, while Annette visited Cancún a couple of times in the late 80s and early 90s, I’ve never been anywhere in Mexico, so what the heck?

Would I do it again? Well, like cruises, there are good things and not good things, so I don’t know.

My Peter Elbow Story

Peter Elbow died earlier this month at the age of 89. The New York Times ran an obituary on February 27 (a gift article) that did a reasonably good job of capturing his importance in the field of composition and rhetoric. I would not agree with the Times that Elbow’s signature innovation, “free writing,” is a “touchy-feely” technique, but other than that, I think they get it about right. I can think of plenty of other key scholars and forces in the field, but I can’t think of anyone more important than Elbow.

Elbow was an active scholar and regular presence at the Conference on College Composition and Communication well into the 2000s. I remember seeing him in the halls going from event to event, and I saw him speak several times, including a huge event where he and Wayne Booth presented and then discussed their talks with each other.

A lot of people in the field had one story or another about meeting Peter Elbow; here’s my story (which I shared on Facebook earlier this month when I first learned of his passing):

When I was a junior in high school in Cedar Falls, Iowa, in 1982-83, I participated in some kind of state-wide or county-wide writing event/contest. This was a long time ago and I don’t remember any of the details about how it worked or what I wrote to participate in it, but I’m pretty sure it was an essay event/contest of some sort– as opposed to a fiction/poetry contest. It was held on the campus of the University of Northern Iowa, which is in Cedar Falls. So because it was local, a bunch of people from my high school and other local schools and beyond showed up. My recollection is that students participated in a version of a peer review sort of workshop.

This event was also a contest of some sort, and there was a banquet everyone went to where there were “winners.” I definitely remember I was not one of them. The banquet was a buffet, and I remember going through the line next to this old guy (well, he would have been not quite 50 at this point) who was perfectly polite and nice, with a wandering eye, getting something out of a chafing dish. I don’t remember the details, but I think he asked me what I thought of this whole peer review thing we did, and I’m sure I told him it was fun because it was.

So then it turns out that this guy was there to give some kind of speech to all of the kids and all of the teachers and other adults that were at this thing. Well, really this was a speech for the teachers and adults, and the kids were just there. I don’t remember how many were there, but I’m guessing maybe 100-200 people. I don’t remember anything Elbow talked about and I didn’t think a lot about it afterwards. But then a few years later, when I was first introduced to Elbow’s work in the comp/rhet theory class I took in my MFA program, I somehow figured out that I had met that guy years before and didn’t realize it at the time.

I can’t say I’ve read a ton of his writing, but what I have read I have found both smart and inspirational. It’s hard for me to think of anyone else who has had as much of an influence on shaping the field and the kind of work I do. May his memory be a blessing to his friends and family.

I’m Still Not Using AI Detection Software; However….

Back in mid-February, Anna Mills wrote a Substack post called “Why I’m using AI detection after all, alongside many other strategies.” Mills, who teaches at Cañada College in Silicon Valley, has written a lot about teaching and AI, and she was a member of the MLA-CCCC Joint Task Force on Writing and AI. That group recommended that teachers use AI detection tools with extreme caution, if at all.

What changed her mind? Well, it sounds like she had had enough:

I argued against use of AI detection in college classrooms for two years, but my perspective has shifted. I ran into the limits of my current approaches last semester, when a first-year writing student persisted in submitting work that was clearly not his own, presenting document history that showed him typing the work (maybe he typed it and maybe he used an autotyper). He only admitted to the AI use and apologized for wasting my time when he realized that I was not going to give him credit and that if he initiated an appeals process, the college would run his writing through detection software.

I haven’t had this kind of student encounter over AI cheating, but it’s not hard for me to imagine this scenario. It might be the last straw for me too. And, as I think is the case with Mills, I’m getting sick of seeing this kind of dumb AI cheating.

Last November, I wrote here about a “teachable moment” I had when an unusually high number of freshman comp students dumbly cheated with AI. The short version: for the first short assignment (2 or 3 pages), students are supposed to explain why they are interested in the topic they’ve selected for their research, and to explain what prewriting and brainstorming activities they did to come up with their working thesis. It’s not supposed to be about why they think their thesis is right; it’s supposed to be a reflection on the process they used to come up with a thesis that they know will change with research. It’s a “pass/revise” assignment I’ve given for years, and I always have a few students who misunderstand and end up writing something kind of like a research paper with no research. I make them revise. But last fall, a lot more of my students did the assignment wrong because they blindly trusted what ChatGPT told them. I met with these students, reminded them what the assignment actually was, and reminded them that AI cannot write an essay that explains what they think.

I’m teaching another couple of sections of freshman composition this semester, and students just finished that first assignment. I warned them about the mistakes students made with AI last semester, and I repeated more often that the assignment is about their process and is not a research paper. The result? Well, I had fewer students trying to pass off something written by AI, but I still had a few.

My approach to dealing with AI cheating is the same as it has been ever since ChatGPT appeared: I focus on teaching writing as a process, and I require students to use Google Docs so I can use the version history to see how they put together their essays. I still don’t want to use Turnitin, and to be fair, Mills has not completely gone all-in with AI detection. Far from it. She sees Turnitin as an additional tool to use along with solid process writing pedagogy. Mills also shares some interesting resources about research into AI detection software and the difficulty of accurately spotting AI writing. Totally worth checking her post out.

I do disagree with her about how difficult it is to spot AI writing. Sure, it’s hard to figure out if a chunk of writing came from a human or an AI if there’s no context. But in writing classes like freshman composition, I see A LOT of my students’ writing (not just in final drafts), and because these are classes of 25 or so students, I get to know them as writers and people fairly well. So when a struggling student suddenly produces a piece of writing that is perfect grammatically and that sounds like a robot, I get suspicious and I meet with the student. So far, they have all confessed, more or less, and I’ve given them a second chance. In the fall, I had a student who cheated a second time; I failed them on the spot. If I had a student who persisted like the one Mills describes, I’m not quite sure what I would do.

But like I said, I too am starting to get annoyed that students keep using AI like this.

When ChatGPT first became a thing in late 2022 and everyone was all freaked out about everyone cheating, I wrote about/gave a couple of talks about how plagiarism has been a problem in writing classes literally forever. The vast majority of examples of plagiarism I see are still a result of students not knowing how to cite sources (or just being too lazy to do it), and it’s clear that most students don’t want to cheat and they see the point of needing to do the work themselves so they might learn something.

But it is different. Before ChatGPT, I had to deal with a blatant and intentional case of plagiarism once every couple of years. For the last year or so, I’ve had to deal with some examples of blatant AI plagiarism in pretty much every section of first-year writing I teach. It’s frustrating, especially since I like to think that one of the benefits of teaching students how to use AI is to discourage them from cheating with it.

Marc Watkins is right; my flavor of AI skepticism

A “Paying Attention to AI” Substack post…

The other day, I read Marc Watkins’s excellent Substack post “AI Is Unavoidable, Not Inevitable,” and I would strongly encourage you to take a moment to do the same. Watkins begins by noting that he is “seeing a greater siloing among folks who situate themselves in camps adopting or refusing AI.” What follows is not exactly a direct response to these refusing folks, but it’s pretty close and I find myself agreeing with Watkins entirely. As he says, “To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of ‘it’s very complicated.'”

Like I said, you really should read the whole thing. But I will share this long quote that is so on point:

Many of us have wanted to take a path of actively resisting generative AI’s influence on our teaching and our students. The reasons for doing so are legion—environmental, energy, economic, privacy, and loss of skills, but the one that continually pops up is not wanting to participate in something many of us fundamentally find unethical and repulsive. These arguments are valid and make us feel like we have agency—that we can take an active stance on the changing landscape of our world. Such arguments also harken back to the liberal tradition of resisting oppression, protesting what we believe to be unjust, and taking radical action as a response.

But I do not believe we can resist something we don’t fully understand. Reading articles about generative AI or trying ChatGPT a few times isn’t enough to gauge GenAI’s impact on our existing skills. Nor is it enough to rethink student assessments or revise curriculum to try and keep pace with an ever-changing suite of features.

To meaningfully practice resistance of AI or any technology requires engagement. As I’ve written previously, engaging AI doesn’t mean adopting it. Refusing a technology is a radical action and we should consider what that path genuinely looks like when the technology you despise is already intertwined with the technology you use each day in our very digital, very online world.

Exactly. Teachers of all sorts, but especially those of us who are also researchers and scholars, need to engage with AI well enough to know what we are either embracing or refusing. Refusing without that engagement is at best willful ignorance.

AI is difficult to compare to previous technologies (as Watkins says, AI defies analogies), but I do think the emergence of AI now is kind of like the emergence of computers and the internet as tools for writing a couple of decades ago. A pre-internet teacher could still refuse that technology by insisting students take notes by hand, hand in handwritten papers, and take proctored timed exams completed on paper forms. When I started at EMU in 1998, I still had a few very senior colleagues who taught like this, who never touched their ancient office computers, who refused to use email, etc. But try as they might, that pre-internet teacher who required their students to hand in handwritten papers did not make computers and the internet disappear from the world.

It’s not quite the same now with AI as it was with the internet back then because I don’t think we are at the point where we can assume “everyone” routinely uses AI tools all the time. This is why I for one am quite happy that most universities have not rolled out institutional policies on AI use in teaching and scholarship– it’s still too early for that. I’ve been experimenting with incorporating AI into my teaching for all kinds of different reasons, but I understand and respect the choices of my colleagues to not allow their students to use AI. The problem, though, is that refusing AI does not make it disappear from students’ lives outside of the class– or even within that class. After all, if someone uses AI as a tool effectively– not to just crudely cheat, but to help learn the subject or as a tool to help with the writing– there is no way for that AI-forbidding professor to tell.

Again, engaging with AI (or any other technology) does not mean embracing, using, or otherwise “liking” AI (or any other technology). I spent the better part of the 2010s studying and publishing about MOOCs, and among many other things, I learned that there are some things MOOCs can do well and some things they cannot. But I never thought of my blogging and scholarship as endorsing MOOCs, certainly not as a valid replacement for in-person or “traditional” online courses.

I think that’s the point Watkins is trying to make, and for me, that’s what academics do: we’re skeptics, especially of things based on wild and largely unsubstantiated claims. As Watkins writes, “… what better way to sell a product than to convince people it can lead to both your salvation and your utter destruction? The utopia/ dystopia narratives are just two sides of a single fabulist coin we all carry around with us in our pockets about AI.”

This is perhaps a bad transition, but thinking about this reminded me of Benjamin Riley’s Substack post back in December, “Who and What comprise AI Skepticism?” This is one of those “read it if you want to get into the weeds” sorts of posts, but the very short version: Casey Newton, who is a well-known technology journalist, wrote about how he thought there are only two camps of AI Skepticism: AI is real and dangerous, and AI is fake and sucks. Well, A LOT of prominent AI experts and writers disputed Newton’s argument, including Riley. What Riley does in his post is describe/create his own taxonomy of nine different categories of AI Skepticism, including one category he calls the “Sociocultural Commentator Critics– ‘the neo-Luddite wing,'” which would include AI refusers.

Go and check it out to see the whole list, but I would describe my skepticism as being most like the “AI in Education Skeptics” and the “Technical AI Skeptics” categories, along with a touch of the “Skeptics of AI Art and Literature” category. Riley says AI in Education Skeptics are “wary of yet another ed-tech phenomena that over-hypes and under-delivers on its promises.” I think we all felt the same wariness of ed-tech over-hype with MOOCs.

Riley’s Technical AI Skeptics are science-types, but what I identify with is exploring and exposing AI’s limitations. AI failures are at least as interesting to me as AI successes, and they make me question all of these claims about AI passing various tests or whatever. AI can seem to do no wrong in controlled experiments, much in the same way that self-driving cars do just fine on a closed course in clear weather. But just like that car doesn’t do so great driving itself through a construction zone or a snowstorm, AI isn’t nearly as capable outside of the lab.

And I say a touch of the Skeptics in AI Art and Literature because while I don’t have a problem with people using AI to make art or to write things, I do think that “there is something essential to being human, to being alive, that we express through art and writing.” Actually, this is one of my sources of “cautious optimism” about AI: since it isn’t that good at doing the kind of human things we teach directly and indirectly in the humanities, maybe there’s a future in these disciplines after all.

I’ll add two other reasons why I’m skeptical about AI. First, I wonder about the business model. While this is not exactly my area of expertise, I keep reading pieces by people who do know what they’re talking about raising the same questions about where the “return on investment” is going to come from. The emergence of DeepSeek is less about its technical capabilities and more about further disrupting those business plans.

Second, I am skeptical about how disruptive AI is going to be in education. It’s fun and easy to talk with AI chatbots, and they can be helpful for some parts of the writing process, especially when it comes to brainstorming, feedback on a draft, proofreading, and so forth. There might be some promise that today’s AI will enable useful computer-assisted instruction tools that go beyond “drill and kill” applications from the 1980s. And assuming AI continues to develop and mature into a truly general-purpose technology (like electricity, automobiles, the internet, etc.), of course, it will change how everything works, including education. But besides the fact that I don’t think AI is going to ever be good enough to replace the presence of humans in the loop, I don’t think anyone is comfortable with an AI replacing a human teacher (or, for that matter, human physicians, airline pilots, lawyers, etc.).

If there is going to be an ROI opportunity from the trillion dollars these companies have sunk into this stuff, it ain’t going to come from students using AI for school work or from people noodling around with it for fun. The real potential with AI is in research, businesses, and industries that work with enormous data sets and in handling complex but routine tasks: coding, logistics, marketing, finance, research into the discovery of new proteins or novel building materials, and anything involving making predictions based on a large database.

Of course, the fun (and scary and daunting!) part of researching AI and predicting its future is everyone is probably mostly wrong, but some of us might have a chance of being right.

Zepbound, One Year Later (and related thoughts)

My one-year Zepbound anniversary passed a couple of weeks ago without any real notice or celebration on my part. I started the drug on January 7, 2024. I’ve blogged about my experiences on Zepbound a few times in the last year, and so far, so good. Mostly.

The good news is I’ve lost about 40 pounds so far. My goal is to lose another 20 pounds, which, according to the problematic BMI scale, would just barely move me into the category of “overweight” from where I am now, which is “obese.” I know, I know, it probably doesn’t matter a whole lot if I manage to get my BMI from a 31 to a 29, but still, it’s a goal.
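For anyone curious about the arithmetic behind moving from “obese” to “overweight,” here’s a quick sketch of how BMI is computed. The height and weights below are hypothetical numbers I picked for illustration– they are not from this post:

```python
def bmi(pounds: float, inches: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters, squared."""
    kg = pounds * 0.453592       # pounds to kilograms
    meters = inches * 0.0254     # inches to meters
    return kg / meters ** 2


# Hypothetical example: for someone 5'10" (70 inches), losing 20 pounds
# shifts BMI by roughly 3 points, enough to cross the 30.0 "obese" cutoff.
print(round(bmi(215, 70), 1))  # before the 20-pound loss
print(round(bmi(195, 70), 1))  # after the 20-pound loss
```

The standard cutoffs are 25.0 for “overweight” and 30.0 for “obese,” which is why a point or two near the boundary gets so much attention despite the scale’s well-known problems.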

Anyway, I’m feeling pretty good. The last time I had blood work done as part of my yearly check-up was back in June, after I’d lost about 25 pounds. My various numbers had improved (I moved out of the “pre-diabetic” category, for example), so I’m assuming that all of that would be even better now. The main side effects I have from Zepbound are all “tummy issue” related, and I still do have a bit of that, especially for a day or two after I inject myself. But it’s still not a big deal. And the stuff I wrote about before is still true: it’s easier to exercise (though I haven’t been “running” as much lately, now that I think about it), I find myself eating healthier (I mean besides just eating less), I’m enjoying the fact that I have had to once again buy some new clothes that fit better, and so forth.

The bad news is I’ve only lost about 6 pounds since the beginning of October. I think there are two reasons for this. First, I think it’s fair to say my main remaining food weakness is sweet things. My cravings for fatty things like a Big Mac are way down, but I still like candy. So fall and winter were rough with all the leftover Halloween candy (especially since we literally only had 3 very small kids with their parents knocking on our door out here in the new house!), with pies and just excesses at Thanksgiving, cookies and cakes and stuff at Christmas, etc.

Second, I think I’ve reached the limits of the drug’s effectiveness alone. As I wrote about back in May (after I had lost about 20 pounds), the reason why Zepbound was working for me was I just wasn’t as hungry, so I didn’t eat much between meals and when I did sit down for dinner or lunch, I ate less. So it didn’t feel like I was trying at all.

But at this point, if I’m going to lose another 20 pounds, I am going to need to try. For me, “trying” means being more in an “I’m on a diet” mindset in the sense of cutting back even more on calories, eating even better, doing more at the gym, all that kind of stuff. I think the Zepbound helps with that too. Besides quieting the so-called “food noise,” it also helps me to better recognize when I’m eating just to eat versus eating when I’m actually hungry. One way it does this: if I do find myself hungry nowadays, it’s almost certainly because I actually do need to eat something.

But enough about me. What else is in the news about GLP-1 drugs and Zepbound and the like? Here are a few articles that struck me as interesting in recent months.

  • From something called The List comes “Elon Musk’s Holiday X Post Surely Got Under RFK Jr.’s Skin (& Caused Trouble for Trump).” Apparently, Elon has lost a bunch of weight from these drugs too. Among other things, Musk posted on X, “Nothing would do more to improve the health, lifespan and quality of life for Americans than making GLP inhibitors super low cost to the public. Nothing else is even close.”

    RFK Jr. is no fan of these drugs at all, and he’s quoted in this article (from other sources) saying “If we just gave good food, three meals a day, to every man, woman and child in our country, we could solve the obesity and diabetes epidemic overnight.”

    Funny enough, I think both of these fascist meatbags full of shit are correct. As I’ll get to next, these drugs have all kinds of benefits, including a lot of things well beyond weight loss. The two main barriers to making them more available are the injectable format and the high costs. And Kennedy has a point too: good food alone isn’t going to solve these problems “overnight” like he claims, but I get what he means. For me (and I’d bet 99% of GLP-1 users), it’s not an either/or thing– the drugs help me eat better.
  • There were several MSM articles about a study that was published in Nature Medicine called “Mapping the effectiveness and risks of GLP-1 receptor agonists.” That link to Nature Medicine only works for me because I access it through the EMU library, so your mileage may vary. Anyway, the study used the US Department of Veterans Affairs database to study hundreds of thousands of patients who had used these drugs, primarily for diabetes. As Time summed it up, patients taking “GLP-1 medications had a lower risk of a number of health conditions, including Alzheimer’s disease and dementia, addiction, seizures, blood-clotting problems, heart conditions, and infectious diseases, compared to people taking the other types of diabetes treatments. The people taking the GLP-1 drugs also had increased GI-related issues, low blood pressure, and arthritis, as well as certain kidney conditions and pancreatitis—most of which are already known side effects of the medications.”
  • I have a news alert for Zepbound, and I see a lot of articles like this one: “The Best Obesity Drugs Aren’t Even Here Yet,” from Gizmodo. Take that with a smaller piece of cake (if you will), but the success of Ozempic and other drugs like it has fueled a bit of a gold rush in research. Soon there are going to be versions of these drugs that are more effective and, with any luck, versions in pill form that will be a lot cheaper.
  • And last, I guess Oprah got into a bit of trouble the other day. From Page Six, “Oprah Winfrey faces backlash for making bold claim about ‘thin people’ after taking weight-loss drug.” Read the whole thing, but I guess you can see the “bold claim” in this snippet on Instagram:

I mean, I am not in the business of defending Oprah, especially since she originally denied that she was taking these drugs to lose weight. And I’ve never been a skinny person, and of course people end up being skinny (or fat) for all kinds of different reasons. But I have had conversations similar to this with skinny (or at least not overweight) people, and I think what Oprah is saying here is right– at least for about half of the thinner/very in shape people I know well. One very skinny guy I know told me one time he has to remind himself to eat some days, and I assure you that has never been a problem for me.

But I will say there is one other category of skinny/very fit people I have known over the years: the person who got a serious medical wake-up call. I’m talking about having a doctor say if you don’t make some seriously big changes in diet and exercise, you’re gonna die a lot sooner than you should. I think this category is much smaller than the category of “I never feel that hungry,” but I can see why these people might not like Oprah implying they don’t have will power or “work” at it.

A New Substack About My AI Research: “Paying Attention to AI”

As I wrote about earlier in December, I am “Back to Blogging Again” after experimenting with shifting everything to Substack. I switched back to blogging because I still get a lot more traffic on this site than on Substack, and because my blogging habits are too eclectic and random to be what I think of as a Newsletter. I realize this isn’t true for lots of Substackers, but to me, a Newsletter should be about a more specific “topic” than a blog, and it should be published on a more regular schedule.

So that’s my goal with “Paying Attention to AI.” We’ll see how it works out. I still want to post those Substack things here too– because this is a platform I control, unlike any of the other ones owned by tech oligarchs or whatever, and because, while I do like Substack, there is still the “Nazi problem” they are trying to work out. Besides, while Substack could be bought out and turned into a dumpster fire (lookin’ at you, X), no one is going to buy stevendkrause.com, and that’s even if I were selling.

Anyway, here’s the first post on that new Substack space.

Welcome to (working title) Paying Attention to AI

More Notes on Late 20th Century Composition, CAI, Word Processing, the Internet, and AI

My goal for this Substack site/newsletter/etc. is to write (mostly to myself) about what will probably be the last big research/scholarly project of my academic career, but I still don’t have a good title. I’m currently thinking “Paying Attention to AI,” a reference to Cynthia Selfe’s “Technology and Literacy: A Story about the Perils of Not Paying Attention,” which was her chair’s address at the 1997 Conference on College Composition and Communication before it was republished in the organization’s journal in 1999 and also expanded into the book Technology and Literacy in the Twenty-First Century.

But I also thought something mentioning AI, Composition, and “More Notes” would be good. That’s a reference to “A Note on Composition and Artificial Intelligence,” a brief 1983 article by Hugh Burns in the first newsletter issue of what would become the journal Computers and Composition. AI meant something quite different in the late 1970s/early 1980s, of course. Burns was writing then about how research in natural language processing and AI could help improve Computer Assisted Instruction (CAI) programs, which were then seen as one of the primary uses of computer technology in the teaching of writing— along with the new and increasingly popular word processing programs that ran on the newly emerging personal computers.

Maybe I’ll figure out a way to combine the two into one title…

This project is based on a proposal that’s been accepted for the 2025 CCCCs in Baltimore, and also on a proposal I have submitted at EMU for a research leave or a sabbatical for the 2025-26 school year.1 I’m interested in looking back at the (relatively) recent history of the beginnings of the widespread use of “computers” in writing instruction (CAI, personal computers, word processors and spell/grammar checkers, local area networks, and the early days of “the internet”).

Burns’ and Selfe’s articles make nice bookends for this era for me because from the late 1970s until about the mid-1990s, there were hundreds of presentations and articles in major publications in writing studies and English about the role of personal computers and (later) the internet in the teaching of writing. Burns was enthusiastic about the potential of AI research and writing instruction, calling for teachers to use emerging CAI and other tools. It was still largely a theory though, since in 1983, fewer than 8% of households had a personal computer. By the time Selfe was speaking and then writing 13 or so years later, over 36% of households had at least one computer, and the internet and “World Wide Web” were rapidly taking their place as a general purpose technology altering the ways we do nearly everything, including how we teach and practice writing.

These are also good bookends for my own history as a student, a teacher, and a scholar, not to mention as a writer who dabbled a lot with computers for a long time. I first wrote with computers in the early 1980s while in high school. I started college in 1984 with a typewriter, and I got a Macintosh 512KE by about 1986. I was introduced to the idea of teaching writing in a lab of terminals— not PCs— connected to a mainframe Unix computer when I started my MFA program in fiction writing at Virginia Commonwealth University in 1988. (I never taught in that lab, fwiw.) In the mid-90s, while I was in my PhD program at Bowling Green State University, the internet and “the web” came along, first as text (remember Gopher? Lynx?) and then as GUI interfaces like Netscape. By the time Selfe was urging the English teachers attending the CCCCs to, well, pay attention to technology, I had started my first tenure-track job.

A lot of what I read about AI right now (mostly on social media and in the MSM, but also in more scholarly work) has a tinge of the exuberant enthusiasm and/or moral panic about the encroachment of computer technology back then, and that interests me a great deal. But at the same time, this is a different moment in lots of small and large ways. For one thing, while CAI applications never really caught on for teaching writing (at least beyond middle school), AI shows some real promise in making similar tutoring tools actually work. Of course, there were also a lot of other technologies and tools way back when that had their moments but then faded away. Remember MOOs/MUDs? Listservs? Blogs? And more recently, MOOCs?

So we’ll see where this goes.

1 FWIW: in an effort to make it kinda/sorta fit the conference theme, this presentation is awkwardly titled “Echoes of the Past: Considering Current Artificial Intelligence Writing Pedagogies with Insights from the Era of Computer-Aided Instruction.” This will almost certainly be the last time I attend the CCCCs, my field’s annual flagship conference, because, as I am sure I will write about eventually, I think it has become a shit show. And whether or not this project continues much past the April 2025 conference will depend heavily on the research release time from EMU. Fingers crossed on that.

The Year 2024 was A LOT

This past year was A LOT for me and the rest of my family. So so SO much happened, so much of it horrible and still difficult to comprehend, so much of it fantastic and beautiful. I suppose this “the worst of times/the best of times” sentiment is always kinda true, but I can’t think of another year where there was just so so much and in such extremes.

It’s been a lot. It’s been way too much for one year.

January

We were already well underway with one of the big ticket items of this year, which is building/buying/selling houses and moving for the first time in over 25 years.

On January 7, I started taking Zepbound, one of those weight-loss drugs in the same category as the one everyone has heard of, Ozempic (though, as I wrote about during the year, it’s more complicated than that).

Otherwise, it was mostly the start of the winter term with work (it was the semester of all freshman composition for me), weather, watching some cheesy movies here and there.

February

My niece Emily got married in a huge and very Catholic ceremony in Kansas City. This was the first of the nieces/nephews (or cousins or grandchildren, depending on your perspective) to get married, so a big deal for the Krauses. Remarkably, there were no hitches with the weather or anything else.

The idea of moving started to get a lot more real when we were able to do a walk-through of the house right after the inspection of everything that needed to be done before they put up drywall.

Of course, we (mostly me) had been driving by the construction site since November to see the progress, but walking around in what would become (in the order of these pictures) the upstairs/Steve loft area, the stairs descending into the living room/main room, and the kitchen area was pretty cool. The Zepbound adventures continued (I was down about 7 pounds by the end of the month), as did the all-first-year-writing semester.

March

We started getting real about selling the old house and preparing to move to the new one, and because we had lived in our previous house in Normal Park for 25 years, it was stressful. I mean, we had decades worth of stuff to sort through– pack, sell, toss– and there was all the decluttering and the nervousness of would it sell and would we get what we were asking and all that. It’s kind of funny because everyone we talked to about this stuff– including my parents and in-laws– had all moved at least once (and usually twice) in the 25 years we’d stayed put, so none of it had been on our minds at all.

It’s funny to think about too because Annette grew up as an Air Force brat and her father was in for over 20 years, meaning she moved more than a dozen times before she was 15. I didn’t move that much as a kid, but we did move a couple of times, and in college and through my MFA program, I moved almost every year. So we used to know how to move.

School continued, my adventures with Zepbound continued and I complained about Oprah, I kept messing around with AI, kept teaching, etc., and I turned 58, too.

April

April was the beginning of the “A LOT,” the far too much of the year. We had two open houses on the first Sunday of the month, and then on April 8, Annette and I cleared out to make room for potential buyers to come take a second look while we went to the eclipse. We met our friends Steve and Michelle and their daughter down in Whitehouse, Ohio (just outside of Toledo), which seemed like the easiest place to get to for the totality while avoiding bumper-to-bumper traffic into the “totality zone” in northern Ohio.

As I wrote on Instagram, being there for the totality was intense. I probably won’t be able to see another total eclipse in my lifetime; then again, a cruise in August 2027 in the Mediterranean is not impossible.

We had a second open house, which was nerve-wracking. Remember, we had not had anything to do with selling and buying a house in forever, and everyone told us we’d get an offer immediately, so when that didn’t happen, we started contemplating scenarios about how we could swing paying for the new house without money from the sale of the old house and all of that. Well, after another open house we got an offer, and everything worked out– eventually.

And the end of April was when Bill died, suddenly and just a few days after a group of us got together for dinner. That’s at the top of my list of the horrible and difficult-to-comprehend parts of the year. It still doesn’t feel real to me, and I think about Bill almost every day.

May

MSU had a quite large memorial for Bill in early May that we were able to attend– Will flew back too. There had to be at least 500 people there, and it was as celebratory about a remarkable life as it could be. I wrote about some of this in early May here, though that post is as much about my own thoughts on mortality as anything else. Like I said, this year has been a lot, and this was the horrible part.

And in mid-May, we closed on both houses, pretty much on the same day. We went to a title office in Ann Arbor and met the guy who bought our house for the first time, and without going into a lot of details, I feel pretty confident that he and his partner (who was there via FaceTime) are a great fit, ready for the adventures and challenges of fixing up the place and making it their own. That was the selling part. The buying part, for the new house, we were able to do electronically– weirdly and quite literally while we were running errands after the closing on the sale, we received a number of emails to electronically sign some forms and boom, we bought the new house too.

It was and still is kind of bittersweet, leaving the old place and the old neighborhood. It was time to move on, and the longer we are in the new place, the fewer regrets I have. Still, when you live someplace for 25 years, that place becomes more than just housing, and that is especially true when it is in such a great neighborhood. I still drive through the old neighborhood and past the old house about once a week on my way to or from EMU.

Five months after starting Zepbound, I finally got to the full dose of the meds and I was down about 20 pounds.

June

A lot of the last part of May and the first part of June was a complete daze of moving. We decided that the way we’d move is to start taking stuff over a carload at a time (and I did most of the heavy lifting, mostly because Annette was teaching a summer class) and then hiring movers for the big stuff later. I remember talking with my father about this approach to moving, and his joke was it’s sort of like getting hit in the nuts fairly gently every day for a month, or getting hit once really hard. When we move again (no idea when that will be), I think the smarter move would be to do it all at once, but I don’t think there’s any escaping what Annette and I had erased from our memories after staying put so long: moving sucks.

Also in June: we celebrated our 30th wedding anniversary. Well, sort of. Before we started getting serious about buying a new house, the original plan was to go on a big European adventure that sort of retraced the trip we took for our honeymoon, but we decided to give each other a house instead. The 31st wedding anniversary trip to Europe is coming this spring.

As part of the house closing deal, we were able to be in the old house through the first weekend in June, and we had one last Normal Park hurrah by selling lots and lots of stuff in the annual neighborhood big yard sale event. I went back one final time on June 10 to mow the lawn, double-check that everything was cleaned up, and take one last terror selfie.

July

The new house– the cost of it of course, but also just settling into it and all– meant we didn’t travel anyplace this summer, for the first time in I don’t know how many years. I missed going up north, and we might not be able to do that again this coming year either. And we watched the shitshow that was the presidential election tick by. But there was golf, there was more AI stuff, hanging out with friends, going to art fairs in Plymouth and Ann Arbor, and seeing movies. Annette went to visit her side of the family in late July, leaving me to fly solo for a few days, and her parents came back with her to stay in the new place for a while, our first house guests.

August

The in-laws visited, we went for a lovely little overnight stay in Detroit, played some golf, started getting ready for teaching, and I wrote a fair amount about AI here and in a Substack space I switched to in August. (The switching back happened later.) Started feeling optimistic about Kamala’s chances…. Oh, and my son defended his dissertation and is now Dr. William Steven Wannamaker Krause (but still Will to me).

September

By September 5, when I wrote this post about both weight loss and Johann Hari’s book about Ozempic called Magic Pill, I was down about 35 pounds from Zepbound. The semester was underway with a lot of AI things in all three classes. There was a touch of Covid– Annette tested positive; I don’t think I ever did, but I felt not great. My parents visited at the end of September, and of course they too liked the new house.

October

The month started with a joint 60th birthday party for Annette and our friend Steve Benninghoff– they both turned 60 a few months apart. It was the first big party we had here at the new house. During EMU’s new tradition of a “Fall break,” we went to New York City. We met up with Will and his girlfriend and went to the Natural History Museum (pretty cool), and went with them to see the very funny and silly Oh, Mary! Annette and I also went to see the excellent play Stereophonic and met up with old friends Troy and Lisa, and also Annette at an old-school Italian restaurant that apparently Frank Sinatra used to like a lot. Rachel and Colin came by for dinner when they were in town too. And of course school/work, too.

November

We started by going to see Steve Martin and Martin Short at the Fox Theater in Detroit— great and fun show. Then, of course, there was the fucking election, another bit of horrible for the year. More Substack writing about AI and just being busy with work– the travels and events of October really put me behind with school, and I felt like I spent the last 6 or so weeks of the semester just barely caught up on it all. Will and his girlfriend came out here before Thanksgiving and she flew back home to be with her family. Meanwhile we made our annual trip to Iowa for Thanksgiving/Christmas. A good time that featured some taco pizza the day after the turkey, and happily, very very little discussion of politics.

December

The semester ended more quickly than usual, just a week after Thanksgiving rather than two. I was pretty pleased with the way the semester turned out overall; I definitely learned a lot more about what to do (and not do) with AI in teaching, and I hope my students got something out of it all too.

I ended up switching back to blogging but not quite giving up on Substack, as I talked about in this post. One of my goals for winter 2025 is to start a more focused Substack newsletter on my next (and likely last) academic research project on the history of AI, Computer Aided Instruction, and early uses of word processors in writing pedagogy from the late 70s until the early 90s. Stay tuned for that.

Oh, and the niece who was the first of the cousins to get married? Also the first to have a baby, in early December– thus the first great-grandchild in the family.

There was much baking (in November too), and some decorating and some foggy pictures of the woods. Will and his girlfriend returned (I think Will has been back here more in the last couple of months than he has been in quite a while) and we took a trip to the Detroit Institute of Arts before they left for California to see her family. Will came back here, we made the annual trip to Naples, Florida to see the in-laws, and now here we are.

Like I said, it’s been a lot, and a whole lot of it is bad. I worry about Trump. I miss Bill terribly. He touched a lot of people in his life and so I know I’m not alone on that one.

But I’m also oddly hopeful for what’s to come next. The more we are in the new house, the more it is home. The Zepbound adventure continues (I’m down about 40 pounds from last January), I’m hopeful for Will as he starts a new gig as a post-doc researcher, I’m looking forward to the new term, and I’m looking forward to all that is coming in the new year.

Six Things I Learned After a Semester of Lots of AI

Two years ago (plus about a week!), I wrote about how “AI Can Save Writing by Killing ‘The College Essay,'” meaning that if AI can be used to respond to bad writing assignments, maybe teachers will focus more on teaching writing as a process the way that scholars in writing studies have been talking about for over 50 years. That means an emphasis on “showing your work” through a series of scaffolded assignments, peer review activities, opportunities for revision, and so forth.

This past semester, I decided to really lean into AI in my classes. I taught two sections of first-year writing where the general research topic for everyone was “your career goals and AI,” and where I allowed (even encouraged) the use of AI under specific circumstances. I also taught an advanced class for majors called “Digital Writing” where the last two assignments were all about trying to use AI to “create” or “compose” “texts” (the scare quotes are intentional there). I’ve been blogging/substacking about this quite a bit since summer and there are more details I’m not getting to here because it’s likely to be part of a scholarly project in the near future.

But since the fall semester is done and I have popped the (metaphorical) celebratory bottle of bubbly, I thought I’d write a little bit about some of the big-picture lessons about teaching writing with (and against) AI I learned this semester.

Teachers can “refuse” or “resist” or “deny” AI all they want, but they should not ignore it.

As far as I can tell from talking with my students, most of my colleagues did not address AI in their classes at all. A few students reported that they did discuss and use AI in some of their other classes. I had several students in first-year writing who were interior design majors and all taking a course where the instructor introduced them to AI design tools– sounded like an interesting class. I had a couple of students tell me an instructor “forbid” the use of AI but with no explanation of what that meant. Most students told me the teacher never brought up the topic of AI at all.

Look, you can love AI and think it is going to completely transform learning and education, you can hate AI all you want and wish it had never been invented and do all you can to break that AI machine with your Great Enoch sledgehammers. But ignoring it or wishing it away is ridiculous.

For my first-year writing students, most of whom readily admitted they used AI a lot in high school to do things that were probably cheating, I spent some time explaining how they could and could not use AI. I did so in part to teach about how I think AI can be a useful tool as part of the process of writing, but I also did this to establish my credibility. I think a lot of students end up cheating with AI because they think that the teacher is clueless about it– and I think a lot of times, students are right.

You’re gonna need some specific rules and guidelines about AI– especially if you want to “refuse” or “resist” it.

I have always included on my syllabi an explicit policy about plagiarism, and this year I added language that makes it clear that copying and pasting large chunks of text from AI is cheating. I did allow and encourage first-year writing students to use AI as part of their process, and I required my advanced writing students to use AI as part of their “experiments” in that class. But I also asked students to include an “AI Use Statement” with their final drafts, one that explained what AI systems they used (and that included Grammarly), what prompts they used, how they used the AI feedback in their essay, and so forth. Because this was completely new to them (and me too), these AI Use Statements were sometimes a lot less complete and accurate than I would have preferred.

I also insisted that students write with Google Docs for each writing assignment and for all steps in the process, from the very first hint of a first draft until they hand it in to me. Students need to share the document with me with editing access so I can see its history. I take a look at the “version history” of the Google Doc, and if I suddenly see pages of clear prose magically appear in the essay, we have a discussion. That seemed to work well.

Still, some students are going to cheat with AI– often without realizing that they’re cheating.

Even with the series of scaffolded assignments and using Google Docs and all of my warnings, I did catch a few students cheating with AI in both intentional and not as intentional ways. Two of these examples were similar to old-school plagiarism. One was from a student from another country who had some cultural and language disconnections about the expectations of American higher education (to put it mildly); I think first-year writing was too advanced and this student should have been advised into an ESL class. Another was a student who was late on a short assignment and handed in an obviously AI-generated text (thanx, Google Docs!). I gave this person a stern warning and another chance, and they definitely didn’t do that again.

As I wrote about in this post about a month ago, I also had a bunch of students who followed the AI more closely than the actual directions for the first assignment, the Topic Proposal. This is a short essay where students write about how they came up with their topic and initial thesis for their research for the semester. Instead, a lot of students asked AI what it “thought” of their topic and thesis, and then they more or less summarized the AI responses, which were inevitably about why the thesis was correct. Imagine a mini research paper but without any research.

The problem was that wasn’t the assignment.  Rather, the assignment asked students to describe how they came up with their thesis idea: why they were interested in the topic in the first place, what kinds of other topics they considered, what sorts of brainstorming techniques they used, what their peers told them, and so forth. In other words, students tried to use the AI to tell them what they thought, and that just didn’t work. It ended up being a good teachable moment.

A lot of my students do not like AI and don’t use it that much. 

This was especially true in my more advanced writing class– where, as far as I can tell, no one used AI to blatantly cheat. For two of the three major projects of the semester, I required students to experiment with AI and then to write essays where they reflected/debriefed on their experiments while making connections to the assigned readings. Most of these students, all of whom were some flavor of an English major or writing minor, did not use AI for the reflection essays. They either felt that AI was just “wrong” in so many different ways (unethical, gross, unfair, bad for the environment, etc.), or they didn’t think the AI advice on their writing (other than some Grammarly) was all that useful for them.

This was not surprising; after all, students who major or minor in something English-related usually take pride in their writing and they don’t want to turn that over to AI. In the freshman composition classes, I had a few students who never used AI either–judging from what they told me in their AI Use statements. But a lot of students’ approaches to AI evolved as the semester went on, and by the time they were working on the larger research-driven essay where all the parts from the previous assignments come together, they said things like they asked ChatGPT for advice on “x” part of the essay, but it wasn’t useful advice so they ignored it.

But some students used AI in smart and completely undetectable ways.

This was especially true in the first year writing class. Some of the stronger writers articulated in some detail in their AI Use Statements how they used ChatGPT (and other platforms) to brainstorm, to suggest outlines for assignments, to go beyond Grammarly proofreading, to get more critical feedback on their drafts, and so forth. I did not consider this cheating at all because they weren’t getting AI to do the work for them; rather, they were getting some ideas and feedback on their work.

And here’s the thing that’s important: when a student (or anyone else) uses AI effectively and for what it’s really for, there is absolutely no way for the teacher (or any other reader) to possibly know.

The more time I have spent studying and teaching about AI, the more skeptical I have become about it. 

I think my students feel the same way, and this was especially true of the students in my advanced class who were directly studying and experimenting with many different AI platforms and tasks. The last assignment for the course asked students to use AI to do or make something that they could not have possibly done by themselves. For example, one student used AI to teach themselves to play chess and was fairly successful with that– at least up to a point. Another student tried to get ChatGPT to teach them how to play the card game Euchre, though less successfully, because the AI kept "cheating." Another student asked the AI to code a website, and the AI was pretty good at that. Several students tried to use AI tools to compose music; similar to me, I guess, they listen to lots of music and wish they could play an instrument and/or compose songs.

What was interesting to me– and, I think, to most of my students– was how quickly they ran into both the AI's limitations and their own. Sometimes students wanted the AI to do something it simply could not do; for example, the problem with playing Euchre with the AI (according to the student) was that it didn't keep track of what cards had already been played– thus the cheating. But the bigger problem was that without any knowledge of how to accomplish the task on their own, students found the AI of little use. For example, the student who used AI to code a website still had no idea what any of the code meant, nor did they know what to do with it to make it into a real website. Students who knew nothing about music couldn't get very far trying to write/create songs. In other words, it was not that difficult for students to discover ways AI fails at a task, which in many ways is far more interesting than discovering what it can accomplish.

I’m also increasingly skeptical of the hype about AI's role in education, mainly because I spent most of the 2010s studying MOOCs. Remember them? They were going to be the delivery method for general education offerings everywhere, and by 2030 or 2040 or so, MOOCs were going to completely replace all but the most prestigious universities all over the world. Well, that obviously didn't happen. But that didn't mean the end of MOOCs; in fact, there are more people taking MOOC "courses" now than there were during the height of the MOOC "panic" around 2014. It's just that nowadays, MOOCs are mostly for training (particularly in STEM fields), certificates, and "edutainment" along the lines of MasterClass.

I think AI is different in all kinds of ways, not the least of which is that AI is likely to be significantly more useful than a chatbot or a grammar checker. I had several first-year students this semester write about AI and their future careers in engineering, logistics, and finance, and they all had interesting evidence about both how AI is being used right now and how it will likely be used in the future. The potential for AI to change the world at least as much as another recent General Purpose Technology, "the internet," is certainly there.

Does that mean AI is going to have as great an impact on education as the internet did? Probably, and teachers have had to make all kinds of big and small changes to how they teach things because of the internet, which was also true when writing classes first took up computers and word processing software. But I think the fundamentals of teaching (rather than merely assigning) writing still work.