Advice from IHE for Teaching With and Around AI

It’s been three years since ChatGPT was released and AI exploded all over the place, setting off a whole new moral panic in academia that is still going strong. And some of these panicking professors are on the verge of losing their minds! For example, consider Ronald Purser’s Current Affairs essay “AI is Destroying the University and Learning Itself,” which is easily summarized by its title. To be fair, Purser is not completely wrong. I agree that Cal State’s “partnership” with OpenAI is problematic at best. As a professor at a regional university myself, I find the labor issues at SFSU, and its poor decisions about how to spend limited funds, very familiar. I completely agree that a lot of our working-class students see through the con; more on that below. But Purser also devotes a whole lot of time to trotting out the standard angry complaints about AI from academics: that it is a cheating machine, that it’s turning students into zombies, that the only solution is a return to paper and oral exams, blah-blah-blah.

A lot of people I know on Facebook shared and endorsed this story, and I get that it strikes a nerve; really, a lot of nerves in higher education right now, not all of which are about AI. But it’s also a bit too “my hair is on fire” panicky for my tastes, especially when it comes to his basic thesis, which is, well, “AI is Destroying the University and Learning Itself.” That is simply not true.

In contrast, I was pleased to read the Inside Higher Ed article “You Can’t AI-Proof the Classroom, Experts Say. Get Creative Instead,” by Emma Whitford. Whitford interviewed several professors and instructors from a variety of institutions who believe that yes, learning and higher education still exist in a world of AI. It’s just that faculty have to change the way they teach, particularly by rethinking assignments and activities that were bad teaching long before AI exposed them as such. That’s more or less what I’ve been saying about AI ever since I wrote what has been the most popular post on this site for a couple of years, “AI Can Save Writing by Killing ‘The College Essay.'” Of course, it’s always reassuring to read others saying similar things.

I think the first step for any teacher who hates all things AI and who believes AI is the end of the university and learning and anything meaningful in the world (blah-blah-blah) is to take a deep breath and play around with a few different chatbots for a day or two: try out different prompts (enter your assignments and see what happens!) and just generally goof around so you can experience firsthand what AI actually is. This is a tough sell because a lot (most?) of my colleagues and students who passionately hate AI have also taken what they see as the principled stance of refusing to ever use any AI chatbot for anything.

Well, besides the fact that I never think willful ignorance is a good idea, AI is already baked into everything we do with computers, directly and indirectly. Take the “blue book” strategy designed to prevent students from using ChatGPT, an approach to teaching I know some of my EMU colleagues have tried. As Luke Hobson notes at the beginning of Whitford’s article, we already have AI-infused wearables like smartwatches, rings, and Ray-Ban Meta Glasses. “What is to stop someone from sitting in the back of a classroom and whispering into their glasses to say, ‘Hey, I need help with solving this problem’?”

AI is now embedded in browsers; it’s built into Canvas and other learning management systems, into word processors, email applications, and search engines, and into just about everything else, to the point where refusal is not an option. At least it’s not an option if you want to stay connected to things like the internet, social media, streaming entertainment, and online shopping; that is, if you want to stay connected to contemporary life in the Western world.

I do not think this means giving up and ignoring AI cheating, and I certainly don’t think it means that the rest of higher ed ought to follow in the footsteps of Cal State and cut deals with OpenAI or whoever. Also, learning about AI by monkeying around with it or reading about it is not the same as “liking” AI. Rather, learning the basics of AI matters because it helps you understand what it is and recognize that AI is not a fad: it is not going away, and it is going to shape the future for years to come, for better and for worse. We can’t “refuse” AI, but we can learn about it and change how we teach to minimize cheating and promote learning.

Once they reach the “acceptance” phase, the next step for the AI-hating teacher is to ask: what can I do about it? I think the ideas for dealing with AI in the classroom in Whitford’s article (along with a few other ideas I’ve seen elsewhere) all lean into two kinds of learning activities that AI can’t do very well:

  • demonstrate presence in the classroom and the physical world; and
  • emphasize the process of learning rather than the products students produce.

And if you are familiar with the theory and practice of teaching writing as a process (rather than assigning writing as a product or assessment), you already know what I’m talking about.

Hobson’s LinkedIn post titled “5 AI-Proof Assessment Ideas” is a good example of this, though some of these ideas are easier to implement than others. Oh, also worth noting: Hobson mostly teaches online, and his ideas reflect that “presence” does not have to be physical or synchronous. Anyway, one idea I like is self-recorded/almost TikTok-style “journal entries.” As Whitford puts it in her IHE article, recorded journals “require students to regularly record and upload five-minute videos of themselves talking about what they’ve learned in class, how it connects to their past experiences and how they might use it in the future.” I suspect students would enjoy this activity, actually.

Oral exams seem less practical to me, though asking students to conduct interviews for various purposes is another interesting idea. I’ve done this a few times in the past, and the main problem I’ve had with these kinds of assignments is that there are always scheduling and other logistical snafus. Still, it’s something I could see working well.

In his LinkedIn post, Hobson also mentions “community-based learning,” which for me would also include the fairly common practice in technical/professional communication classes of having students work with a “client” on projects for that client. But community-based learning can also mean almost anything that requires students to interact with the world around them, either as a group or individually.

I think this also includes valuing (and grading) participation, along with lots of activities where students have to interact with each other. In f2f classes, I do a lot of group discussions/activities, peer review activities, all that kind of stuff, plus I keep track of attendance. In online classes, most of the participation involves discussion boards about readings and activities and the like. Basically, students are required to post their initial thoughts/reactions to something (usually a reading assignment) before seeing anyone else’s post, and then they need to read and respond to other students’ posts. I base the grade for these discussions not so much on their quality as on whether students complete them. So, to get an “A” on a discussion, students have to post once initially on time, and then follow that up over the next couple of days by responding to at least two other students’ posts.
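If it helps to see that grading rule spelled out, here is a minimal sketch of the logic in Python. To be clear, this is just an illustration: the data structure and the tiers below an “A” are hypothetical stand-ins, not something exported from Canvas or any actual LMS.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DiscussionWork:
    """One student's activity in one discussion (hypothetical structure)."""
    initial_post_at: datetime | None  # when the initial post went up, if at all
    reply_count: int                  # responses to other students' posts

def discussion_grade(work: DiscussionWork, deadline: datetime) -> str:
    """Completion-based grading: an on-time initial post plus at least two
    replies earns an "A." Quality isn't scored; participation is."""
    on_time = work.initial_post_at is not None and work.initial_post_at <= deadline
    if on_time and work.reply_count >= 2:
        return "A"
    # Anything below an "A" is a judgment call; these tiers are made up.
    if on_time or work.reply_count > 0:
        return "partial credit"
    return "no credit"
```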

Note that this is a conversation, not an assignment like “write 250-500 words about ‘x,'” where “x” is some kind of reading or whatever. That’s the sort of “one-shot deal” writing assignment that AI can do really, really well. Rather, it’s an interaction with other students participating in the same discussion in the same space, and AI can’t do that very well.

Hobson’s last suggestion is to “critique” AI, which has been a big part of my own classes lately because I teach a lot about AI. But there are all kinds of ways to do this in small ways too; for example, have students and AI both complete a short writing task and then have the class compare the results.

I tend to critique AI by demonstrating writing tasks I think it can do fairly well, along with things it can’t do well or can’t do, period. For example, I think AI is good at proofreading, and, with a detailed prompt, it is okay-to-pretty-good at giving feedback akin to peer review. Interestingly, when I ask my students to compare AI feedback and human feedback, they generally prefer the human feedback. AI provides more feedback than human peers do, but that feedback can be misleading, in part because AI has no presence in the class (or in the online discussion) where we talked about the assignment.
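For anyone who wants to experiment with this, here is a rough sketch of what “a detailed prompt” might look like, using OpenAI’s Python client as one example. The model name and the rubric language are placeholders I made up for illustration, not the exact prompt I use in class.

```python
# Rough sketch: asking a chatbot for peer-review-style feedback on a draft.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name and rubric below are placeholders, not a recipe.
from openai import OpenAI

client = OpenAI()

RUBRIC_PROMPT = """You are a peer reviewer in a first-year writing course.
Read the draft and respond the way a supportive classmate would:
1. Restate the draft's main point in one sentence.
2. Point to two places where the argument or evidence is strongest.
3. Point to two places that confused you, quoting the exact sentence.
4. Suggest one concrete revision priority. Do not rewrite the draft."""

def peer_review_feedback(draft_text: str) -> str:
    """Send the rubric plus the student draft to the model; return feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": RUBRIC_PROMPT},
            {"role": "user", "content": draft_text},
        ],
    )
    return response.choices[0].message.content
```

The detail is the point: a bare “give me feedback” prompt tends to produce generic praise, while a structured rubric gets closer to what peer review is supposed to do.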

But my favorite approach to critiquing AI is finding stuff it cannot do, which often serves as a not-so-subtle warning to students. For example, in my experience, when I upload a PDF of an academic article and ask AI to summarize it, it does a pretty good job. Even more useful, AI is good at explaining complex passages from articles to non-experts. But if you ask AI to give you some good sentences from the article to quote in a paper, it will frequently make those quotes up. Seriously, give it a try.
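In fact, this is easy to demonstrate in class: check the “quotes” a chatbot offers against the actual text of the article. Here is a minimal sketch in Python, assuming you have already extracted the article’s text into a string; the normalization is deliberately crude, just enough to keep PDF line breaks and smart quotes from causing false mismatches.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, straighten smart quotes, and collapse whitespace so that
    PDF line breaks and typography don't cause false mismatches."""
    text = text.lower()
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip()

def quote_appears_in(article_text: str, quote: str) -> bool:
    """True only if the claimed quote is actually a substring of the article.
    Fabricated quotes, however plausible they sound, will fail this check."""
    return normalize(quote) in normalize(article_text)

# Usage: for each sentence the chatbot claims is quotable, verify it:
# for q in suggested_quotes:
#     print("found" if quote_appears_in(article_text, q) else "NOT IN ARTICLE", "-", q)
```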

The last strategy for dealing with AI that a few folks talk about in Whitford’s article might be best described as having honest and earnest discussions with students about how the whole point of college is learning something. This is kind of what I blogged about back in July 2025, what I think of as “the AI talk.” I think the only thing worse than limiting any discussion about AI to something like “don’t use it because it’s bad” is not saying anything at all, which (according to my students) still seems to be what happens in most classes.

Whitford talked to Carlo Rotella, a professor at Boston College who doesn’t ban students from using AI because he realizes it can be impossible to detect. “I explain to my students why it’s a waste of their time and mine. I explain that they’re paying $5 a minute for classes at Boston College, and to spend that time practicing to be replaceable by AI is a complete waste of their money and time, and my time.” Later in the article, Rotella says, “The entire point of this class is the labor, so a labor-saving device would be beside the point. It’s like joining the track team and doing your laps on an electric scooter. You went around the track. Congratulations.”

Of course, one of the reasons this works for Rotella is that he bans technology in his classes (no devices, and students bring hard copies of the readings), and a lot of his tests and quizzes are based only on class discussions. That’s a bridge too far for me. I also assume that Rotella is able to get away with this because (like me) he’s teaching comparably small classes.

That said, having honest and frank discussions about the whole academic enterprise (that we’re here not just to get through the class and collect the credits but to actually learn something) does help. Like Purser at San Francisco State, I teach a lot of working-class, first-gen, and older/returning students. When I bring up the Rotella argument, that trying to cheat with AI is really self-sabotage that defeats the whole purpose of college, these students (well, at least the better ones) completely get it.
