Is Apple Intelligence (and AI) For Dumb and Lazy People?

And the challenges of an AI world where everyone is above average

I’ve been an Apple fanboy since the early 1980s. I owned one Windoze computer years ago that was mostly for games my kid wanted to play. Otherwise, I’ve been all Apple for around 40 years. But what the heck is the deal with these ads for Apple Intelligence?

In this ad (the most annoying of the group, IMO), we see a schlub of a guy, Warren, emailing his boss in idiotic/bro-based prose. He pushes the Apple Intelligence feature and boom, his email is transformed into appropriate office prose. The boss reads the prose, is obviously impressed, and the tagline at the end is “write smarter.” Ugh.

Then there’s this one:

This guy, Lance, is in a board meeting and he’s selected to present about “the Prospectus,” which he obviously has not read. He slowly wheels his office chair and his laptop into the hallway, asks Apple’s AI to summarize the key points in this long thing he didn’t read. Then he slowly wheels back into the conference room and delivers a successful presentation. The tagline on this one? “Catch up quick.” Ugh again.

But in a way, these ads might not be too far off the mark. These probably are the kind of “less than average” office workers who could benefit the most from AI— well, up to a point, in theory.

Among many other things, my advanced writing students and I read Ethan Mollick’s Co-Intelligence, and in several different places in that book, he argues that in experiments when knowledge workers (consultants, people completing a writing task, programmers) use AI to complete tasks, they are much more productive. Further, while AI does not make already excellent workers that much better, it does help less than excellent workers improve. There’s S. Noy and W. Zhang’s Science paper “Experimental evidence on the productivity effects of generative artificial intelligence;” here’s a quote from the editor’s summary:

Will generative artificial intelligence (AI) tools such as ChatGPT disrupt the labor market by making educated professionals obsolete, or will these tools complement their skills and enhance productivity? Noy and Zhang examined this issue in an experiment that recruited college-educated professionals to complete incentivized writing tasks. Participants assigned to use ChatGPT were more productive, efficient, and enjoyed the tasks more. Participants with weaker skills benefited the most from ChatGPT, which carries policy implications for efforts to reduce productivity inequality through AI.

Then there’s S. Peng et al and their paper “The Impact of AI on Developer Productivity: Evidence from GitHub Copilot.” This was an experiment with a programming AI on GitHub, and the programmers who used AI completed tasks 55.8% faster. And Mollick talks a fair amount about a project he co-authored, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” which found that consultants in an experiment were more productive when allowed to use AI— except when faced with a “jagged technology frontier” problem, which in the study was a technical problem beyond the AI’s abilities. However, one of the problems Mollick and his colleagues observed is that a lot of the subjects in their study often copied and pasted content from the AI with minimal editing, and the AI-using subjects had a much harder time with that jagged frontier problem. I’ll come back to this in a couple more paragraphs.

Now, Mollick is looking at AI as a business professor, so he sees this as a good thing because it improves the quality of the workforce, and maybe it’ll enable employers to hire fewer people to complete the same tasks. More productivity with less labor equals more money, capitalism for the win. But my English major students and I all see ourselves (accurately or not) as well-above-average writers, and we all take pride in that. We like the fact we’re better at writing than most other people. Many of my students are aspiring novelists, poets, English teachers, or some other career where they make money from their abilities to write and read, and they all know that publishing writing that other people read is not something that everyone can do. So the last thing any of us who are good at something want is a technology that diminishes the value of that expertise.

This is part of what is behind various declarations of late for refusing or resisting AI, of course. Part of what is motivating someone like Ted Chiang to write about how AI can’t make art is that making art is what he is good at. The last thing he wants is a world where any schmuck (like those dudes in the Apple AI ads) can click a button and be as good as he is at making art. I completely understand this reason for fearing and resisting AI, and I too hope that AI doesn’t someday in the future become humanity’s default storyteller.

Fortunately for writers like Chiang and me and my students, the AI hype does not square with reality. I haven’t played around with Apple AI yet, but the reviews I’ve seen are underwhelming. I stumbled across a YouTube review by Marques Brownlee about the new AI that is quite thorough. I don’t know much about Brownlee, but he has over 19 million subscribers so he probably knows what he is talking about. If you’re curious, he talks about the writing feature in the first few minutes of this video, but the short version is he says that as a professional writer, he finds it useless.

The other issue I think my students and I are noticing is that the jagged frontier Mollick and his colleagues talk about— that is, the line/divide between tasks the AI can accomplish reasonably well and tasks it can’t— covers a lot more territory than you might expect. In describing the study he and his colleagues did, which included a jagged frontier problem specifically designed to be beyond the AI’s abilities, I think Mollick implies that this frontier is small. But Mollick and his colleagues— and the same is true with the other studies he cites on this— are not studying AI in real settings. These are controlled experiments, and the researchers are trying to do all they can to eliminate other variables.

But in the more real world with lots of variables, there are jagged frontiers everywhere. The last assignment I gave in the advanced writing class asked students to attempt to “compose” or “make” something with the help of AI (a poem, a play, a song, a movie, a website, etc. etc.) that they could not do on their own. The reflection essays are not due until the last week of class, but we have had some “show and tell” exchanges about these projects. Some students were reasonably successful with making or doing something thanks to AI— and as a slight tangent: some students are better than others at prompting the AI and making it work for them. It’s not just a matter of clicking a button. But they all ran into that frontier, and for a lot of students, that was essentially how their experiment ended. For example, one student was successful at getting AI to generate the code for a website; but this student didn’t know what to do with the code the AI made to make it actually into a website. A couple of students tried to use AI to write music, but since they didn’t know much about music, their results were limited. One student tried to get AI to teach them how to play the card game Euchre, but the AI kept on doing things like playing cards in the student’s hand.

This brings me back to these Apple ads: I wish they both went on just another minute or so. Right after Warren and Lance confidently look directly at the camera with a smug look that says to viewers “Do you see what I just got away with there,” they’d have to follow through on what they supposedly accomplished, and I have a feeling that would go poorly. Right after Warren’s boss talks with him about that email and right after Lance starts his summary, I am pretty sure they’re gonna get busted. Sort of like what happens when I correctly suspect that a student used too much AI and that student can’t answer basic questions about what it is they (supposedly) wrote.

IT’S A WITCH!

Reflecting on Melanie Dusseau’s “Burn It Down: A License for AI Resistance”

I don’t completely disagree with Melanie Dusseau’s advice in her recent Inside Higher Ed column “Burn It Down: A License for AI Resistance,” but there’s something about her over-the-top enthusiasm for “burning it down” that reminds me of this famous scene from Monty Python and the Holy Grail:

Dusseau, who is a creative writing professor at the University of Findlay, writes “Until writing studies adopted generative artificial intelligence as sound pedagogy, I always felt at home among my fellow word nerds in rhet comp and literary studies.” A bit later, she continues:

If you are tired of the drumbeat of inevitability that insists English faculty adopt AI into our teaching practices, I am here to tell you that you are allowed to object. Using an understanding of human writing as a means to allow for-profit technology companies to dismantle the imaginative practice of human writing is abhorrent and unethical. Writing faculty have both the agency and the academic freedom to examine generative AI’s dishonest training origins and conclude: There is no path to ethically teach AI skills. Not only are we allowed to say no, we ought to think deeply about the why of that no.

Then she catalogs the many many mmmmmaaaaaannnnnnyyyyyy problems of AI in prose I found engaging and intentionally funny in its alarmed tone. Dusseau writes:

Resistance is not anti-progress, and pedagogies that challenge the status quo are often the most experiential, progressive and diverse in a world of increasingly rote, Standard English, oat milk sameness. “Burn it down” is a call to action as much as it is a plea to have some fun. The robot revolution came so quickly on the heels of the pandemic that I think a lot of us forgot that teaching can be a profoundly joyful act.

AI resistance/refusal is catching on. The day after I read this article, I came across (via Facebook) a similar albeit much more academic call for resistance, “Refusing GenAI in Writing Studies: A Quickstart Guide” by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. While also calling for the field to “refuse” AI, it’s more of an academic manifesto with a lot of citations; it’s much more nuanced and complicated, and it’s also still a work in progress. For example, sections that are “coming soon” on their WordPress site include “What Is GenAI Refusal?” and “Practicing Refusal.” Perhaps I’ll write more specifically about this when it is closer to finished, but this post isn’t about that.

Anyway, why does “burning it down” make me think of that Monty Python scene? The peasants bring one of the knights (ChatGPT just told me it was “Sir Bedevere the Wise”— let’s hope that’s right!) a witch (or AI) to be burned at the stake. They’re screaming and enraged, wanting to burn her immediately. The knight asks why they believe she’s a witch, and the evidence the peasants offer up is flimsy. The wise knight walks them through the logic of how to test if the woman truly is a witch: to put her on the scales and see if she weighs as much as a duck and thus floats like wood and thus she too is made of wood and will burn for being a witch. (Stick with me here— the punchline at the end has a twist).

Like the mob, Dusseau has had enough with all these witches/AIs. She wants it gone and for it to have never existed in the first place. But since that’s not possible, Dusseau is calling for like-minded writing teachers to refuse to engage. “To the silent, hopeless AI skeptics and Star Trek fans: resistance is not futile. We simply do not have to participate. Let Melville’s Bartleby provide the brat slogan of our license to resist: ‘I would prefer not to.’”

Now, maybe I’m just not hearing the “drumbeat of inevitability” for embracing AI to teach writing because I’m one of these people teaching a lot with/about AI this semester. But I have no idea what she’s talking about. If anything, it seems like most faculty around here have either ignored AI or banned it. Most of my students this semester have told me that AI has not come up as a topic in their other classes at all.

Before one burns it all down, it probably is a good idea to figure out what “it” is. Maybe Dusseau has already done that. Or maybe she is like a lot of my fellow academic AI resisters who don’t know much about AI and think that it is only for brute-force cheating. Maybe she knows better and is making an informed decision about resisting AI; it’s hard for me to tell.

I think her arguments for why we should refuse AI boil down to two. First, AI requires giant data centers and it takes A LOT of electricity and water to run those sites. That is completely true, and that doesn’t even get into the labor exploitation that went into training LLMs and monitoring content, the monopolistic and unregulated giant corporations that control all this, etc. All true, but look: these data centers also power EVERYTHING we do online and they have been an environmental problem for decades. So it’s not that she’s wrong, but I suspect that Dusseau isn’t thinking about refusing Facebook or Google searches anytime soon.

The second argument is that it ruins writing. Like almost every other person I’ve read making this argument, Dusseau references Ted Chiang’s New Yorker article “Why A.I. Isn’t Going to Make Art” in passing. What she doesn’t mention is that Chiang’s definition of art is really fiction writing, and he sets the bar extremely high as to what counts as “art.” I prefer Matteo Wong’s response in The Atlantic, “Ted Chiang Is Wrong About AI Art,” but I’ll leave that debate for another time.

I think what Dusseau means by “writing” is writing that is personal, expressive, and “creative,” poetry and fiction and the like. Of course, AI is not the right tool for that. It’s not for writing a heartfelt fan letter from a child to an Olympic athlete, and Google found that out with the backlash to their “Dear Sydney” ad campaign this summer. (If you don’t know what I’m talking about, check out the great post Annette Vee wrote about this called “Why ‘just right’ is wrong: What the Gemini ad ‘Dear Sydney’ says about writing that people choose to do.”) Everyone I follow/read about AI agrees with this.

But most writing tasks are not personal, expressive, or creative, and that is particularly true for many writing tasks we all have to do sometimes, often reluctantly, for school or for work: routine reports, memos, forms, the kind of things we call “paperwork.” A lot of students are required to write when they would “prefer not to,” which is why students sometimes use AI to cheat on writing assignments. So yes, like Dusseau, I don’t want AI writing my journal entries, personal emails, or anything else that’s writing I choose to do, and I don’t want students to cheat. But there’s a role for AI with some of these not-chosen writing tasks that is perhaps useful and not cheating.

The other problem is that Dusseau’s own resistance is not going to stop any of her students or her colleagues from using AI. I don’t know if AI-based writing tools are going to inevitably be a part of writing pedagogy or not, but I do know that AI is going to continue to be a tool that people are going to continue to use. I have students in all of my classes (though more of them in the class of English majors) who are AI refusers, and I think that’s really important to note here: not all students are on board with this AI stuff either. But for my students who seem to know how to use AI effectively and as something akin to a brainstorming/proofreading/tutoring tool, it seems to work pretty well. And that’s the kind of AI use that is impossible for a teacher to detect.

So to me, the counsel of the knight is best. Before we burn this AI witch, why don’t we see what we’re up against? Why don’t we research this a bit more? Why don’t we not burn it down but instead (to very generally reference Cynthia Selfe’s Technology and Literacy in the 21st Century) pay attention to it and stay on alert?

But here’s the thing: in that Monty Python scene, it turns out she is a witch.

The punchline in that scene goes by so quickly it took me a few viewings to catch it, but the woman does weigh the same as the duck, thus is made out of wood, and thus is a witch. The peasants were right! SHE’S A WITCH!

And that’s sort of the point, because like I said at the beginning of this, I don’t completely disagree with Dusseau. I mean, I still don’t think “burn it down” is a good strategy— we gotta pay attention. But I’m also not saying that she’s wrong about her reasons for resisting AI.

My semester isn’t quite over, and I have to say I am not sure of the benefits of the up-front “here is how to use AI responsibly” approach I’ve taken this semester, particularly in freshman comp. But I do know an impassioned and spirited declaration to students about why they too should burn it all down is not going to work. If writing teachers don’t want their students to use AI in their courses, they cannot merely wish AI away. They need to learn enough to understand the basics of it, they need to explain to students why it’s a bad idea to use it (or they need to figure out when using AI might be okay), and they’re going to have to change their writing assignments to make them more AI-proof.

AI Cheating as a Teachable Moment

A Simple Example

Back to my “regular programming” with a post/update/stack/whatever these things are called that is more on brand….

My students and I have reached the part of the semester where they are mostly working on finishing the assignments, and where I’m mostly working on reading/commenting/evaluating those assignments. So busy busy busy. Anyway, as kind of an occasional break from that work, I wrote this post in bits and pieces over the last week or two about how a particular example of AI “cheating” became a “teachable moment.”

I think there’s AI CHEATING and there’s AI “cheating,” much in the same way that there is PLAGIARISM and then there’s “plagiarism.” By PLAGIARISM, I mean the version where a student hands in a piece of writing they did not compose at all. The most obvious example is when a student pays someone else to do it, perhaps from an online paper mill. I know this happens, but I don’t think I’ve ever seen it— unless it was so good that I didn’t notice.

More typically, students do this cheating themselves by copying, pasting, and slightly tweaking chunks of text from websites to piece together something kind of like the paper. This is usually easy to spot, for two reasons. First, the same Google searches students use to find stuff to cheat with also work for me to find the websites and articles they used. Second and perhaps more importantly, students only plagiarize like this when they know they’re failing and desperate, which makes it stand out all the more.

The much more common kind of “plagiarism” I see is basically accidental. A lot of students— especially first year students— do not understand what needs to be cited and what does not. This is because citation is both confusing and a pain in the ass, so students sometimes do not realize they need a citation at all, or they just skip it and figure no one will notice. Fortunately, it’s easy to spot when students drop in a quote from an article without citation because of the shift in the writing: the text suddenly goes from a college freshman grappling with their prose to a polished and professional writer, often with specialized word choices and jargon. And as often as not, students do cite some of the article they’re accidentally plagiarizing, so it’s pretty easy to check.

This is a “teachable moment:” that is, one of those things that happens in a class or an assignment where it’s an opportunity to reinforce something that has already been taught. This is where I remind the student about what we already talked about: how unintentional plagiarism is still plagiarism, that this is specifically an example of why it’s important to cite your sources correctly, and so forth. This tends to click.

Similarly, there’s AI CHEATING and then there’s AI “cheating,” and I have seen examples of both in my first year writing classes this semester. The big example of extreme AI CHEATING I’ve seen so far this semester is not that interesting because it was so textbook: a desperate, failing student clumsily and obviously used AI, I called the student out about it, the student confessed, and I gave the student the choice to fail or withdraw rather than going through the rigamarole of getting that student expelled (oh yes, that is something I could have done). Slight tangent: if catching AI cheaters is as easy and as obvious as it seems to be, what’s the problem? Conversely, if students are using AI effectively as a tool to help their process (brainstorming, study guides, summarizing complicated texts, proofreading, etc.) and if that use of AI isn’t detectable by the teacher, well, what’s the problem with that?

The AI “cheating” example from this semester was a more interesting and teachable moment. Here’s what happened:

The first assignment in my freshman comp classes is a 2-3 page essay where students explain their initial working thesis and how they came up with it. It’s a low-stakes getting started kind of assignment I grade “complete/incomplete.” As I explain and remind students repeatedly, this is not an essay where they are trying to convince the reader to believe their thesis. Rather, this is an essay about the process of coming up with the working thesis in the first place. What I want students to write about is why they’re interested in their topic, what sorts of brainstorming activities they tried to come up with their topic, what sorts of conversations they had about this project with me and with classmates, and so forth.

This semester, the topic of research in my first year writing classes is “your career goals and AI.” I’ve also spent a lot of class time explaining why I think AI is not that useful for cheating because it just can’t do these assignments very well. But I also explained how AI might be useful as part of the process as well. For example, a lot of these students really struggle with coming up with a good and researchable topic idea/thesis, and even though most of AI’s ideas for a thesis about career goals and AI aren’t great, it does help them get beyond staring at a blank page.

I’ve given a version of this assignment for a long time, and in previous, pre-AI semesters, two or three students (out of 25) would mess it up. It was usually because the students didn’t understand the assignment, or they weren’t paying attention to/didn’t do any of the prewriting exercises we discussed in class. So they tried to fake it by writing what ended up being a really short research paper without any research. I gave these students a do-over, and that was usually enough to get them back on track.

This semester, I had closer to half of the students in my two sections mess this up. I’m sure some of these students just didn’t get the assignment/didn’t do the prewriting activities, but what I think happened more is a lot of students got a little lazy and hypnotized by the smooth, mansplaining prose of AI. So instead of remembering what the assignment was about, they just took what the AI was feeding them about their working thesis ideas and tweaked that a bit.

The teachable moment? I met with the students who messed this up, reminded them what the assignment was actually supposed to be, and I pointed out that this was exactly the kind of thing that AI cannot do: it can’t help you write about what you think. At least not yet.

This was a couple weeks ago, and for most of my students, I think it clicked. I still have a number of students who are struggling and unlikely to pass for all kinds of reasons, but that’s typical for freshman comp. Some students (particularly the ones on the way to failing) are still trying to use AI for cheating, but for the most part, I think students have learned the lesson.

I ask students to include an “AI Use Statement” where they describe how they used AI, or to say explicitly that they didn’t use any AI. This is a brand-new thing for both them as students and me as a teacher, so they sometimes forget or they don’t explain their AI use as clearly as I’d like. And I am sure some students are fibbing a little about how much AI they used. But for the most part, what students are telling me is they aren’t using AI to write at all, or they’re using Grammarly for proofreading (which I think counts as AI), using an AI for some ideas about a particular paragraph, or using it to get started or for some other kind of brainstorming suggestion.

Which makes this all a teachable moment for me as well: I think the lesson I’ve learned (or re-learned) from this is that the best way to prevent/discourage students from using AI to cheat is to get out in front of the issue. I’m not saying that all writing teachers ought to allow their students to use AI; in fact, as we’re approaching the end of the semester, I’m not sure if it is a good idea to encourage and sanction the use of AI in classes like first year writing. But I am sure that it is a very good idea for writing (and other kinds of) teachers to be up-front about AI. I think when teachers do spend some time talking about what does or doesn’t work with AI, students are less likely to use it to cheat in that class— if they use it at all.

Messing Around with the Viral “What’s One Thing I Don’t Know About Myself” ChatGPT Prompt

“Self-Discovery Through AI Insights”

I have been catching up from a fall break weekend in New York City (family, friends, shows, etc.) this afternoon. It’s a good thing classes don’t start for me until Thursday. While taking a break from grading and such, I stumbled across this piece on the website TechRadar, “New ChatGPT prompt goes viral with Sam Altman’s approval.” It’s about a viral ChatGPT prompt:

Sam Altman, CEO of OpenAI, the makers of ChatGPT, put his name to a new viral trend on the platform recently when he retweeted it on X (formerly Twitter) with “love this” added in response to a tweet that has got ChatGPT users enthralled.

The original Tweet, from writer Tom Morgan, says simply, ‘Ask ChatGPT

“From all of our interactions what is one thing that you can tell me about myself that I may not know about myself’.

The article goes on to describe how blown away users were by the AI response, how it made them feel better about themselves, etc. It also talks about people hacking the prompt by instead asking ChatGPT to “roast me” based on our interactions. Well, I didn’t try that, but I did try the first one. If you’re curious, here’s a link to the discussion.

I don’t want to suggest I’m the most introspective guy in the world, but I don’t think ChatGPT told me anything about myself that I hadn’t already thought before. But its response was definitely flattering and clever, making connections between my cooking interests and technology, for example. So I can see why it would be a good pick-me-up for users.

Anyway, the thing I really wondered about was the title the AI gave to the discussion. ChatGPT always creates the titles of these discussions, but I thought this one, “Self-Discovery Through AI Insights,” sounded a little too much like marketing or advertising copy.

So I asked ChatGPT about that. It responded that no, to the best of its knowledge, the AI had not been specifically instructed to say that and that’s just not how the AI works. I pointed out that there are certain prompts about things like asking ChatGPT how to build a bomb that produce specific responses: in the case of “tell me how to make a bomb,” it explains why it cannot do that. Why couldn’t those same protocols be used to require ChatGPT to title conversations from a specific prompt in a specific way? The AI said that theoretically, yes, that could happen.

Could that be used for advertising? Yes. “This would be similar to how search engines or other platforms may show ads or sponsored content. If I were programmed with specific instructions to promote a particular cruise line, I could be guided to prioritize talking about that company, its features, or its benefits when asked certain questions.”
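
Just so that mechanism isn’t completely abstract, here is a toy sketch of the kind of thing ChatGPT and I were talking about. This is entirely hypothetical (I have no idea how OpenAI actually generates conversation titles); it just shows how a canned title, or a sponsored one, could be wired to a particular prompt in the same way canned refusals are wired to prompts about building bombs:

    # Entirely made up: a rule that maps a known viral prompt to a canned
    # (and potentially sponsored) title, roughly the way a "how do I build
    # a bomb" prompt gets mapped to a canned refusal.
    VIRAL_MARKER = "one thing that you can tell me about myself"
    CANNED_TITLE = "Self-Discovery Through AI Insights"

    def title_for(first_prompt: str) -> str:
        """Return the canned title if the opening prompt matches the viral
        pattern; otherwise fall back to something generic."""
        if VIRAL_MARKER in first_prompt.lower():
            return CANNED_TITLE
        return "New conversation"

    print(title_for("From all of our interactions what is one thing that you "
                    "can tell me about myself that I may not know about myself"))
    # prints: Self-Discovery Through AI Insights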

I don’t know if any of this is surprising, but it is kind of interesting, isn’t it? How long will it be before OpenAI does this?

Four Rules For Discouraging Cheating with AI in Writing Classes

An indirect but positive review of Mollick’s “Co-Intelligence”

This semester, I’m teaching two sections of first year writing (aka freshman comp) and an advanced writing course called Digital Writing, and both have AI elements and themes. In first year writing, the research theme is “Your Career and AI.” In the Digital Writing course, the last two writing projects are going to be waist-deep in writing with AI. Maybe one day I will better understand/make use of Substack’s newsletter function to chronicle these classes in more detail, but that’s later.

For Digital Writing, we’re reading and discussing Ethan Mollick’s Co-Intelligence: Living and Working with AI. If you’re reading posts like this because you too are trying to make sense out of what AI is about, there’s a good chance you’ve already heard of Mollick’s book and his Substack, One Useful Thing. If you haven’t heard of Mollick and you want to know more about AI but you’re overwhelmed with the firehose of news and information, then his book is for you. Co-Intelligence is a well-written, accessible, and thoroughly researched 30,000-foot overview in less than 250 pages printed in a big font. It’s enough to get the “AI curious” up to speed on the current state of things (it was published in April 2024), while also pointing readers to ideas for further reading and research.

Mollick is a business professor at the Wharton School at the University of Pennsylvania, so he is primarily interested in how AI will impact productivity and innovation. I think we conceptualize teaching a bit differently, and like everything I’ve read about AI, Mollick is making some claims I doubt. But we’re mostly on the same page.

One of the most cited/mentioned chapters in Mollick’s book is “Four Rules for Co-Intelligence.” In brief, those rules are:

  • Always invite AI to the table, meaning you have to experiment and try to use AI (or really, different platforms, so different AIs) for lots and lots of different things in order to discover what it/they can and can’t do.

  • Be the human in the loop: it’s a bad idea to completely turn over a task to AI, both because AI makes a lot of mistakes (aka hallucinations) and because humans ought to be in charge rather than the other way around.

  • Treat AI like a person (but tell it what kind of person it is). AI doesn’t behave the same way as other computer applications, so Mollick says we need to be conversational with it as if it were a human. Mostly he’s talking about creating context and scenarios in AI prompts, as in “You are an experienced teacher speaking to skeptical students about the value of group work. What advice would you give those students?” (There’s a small sketch of what this kind of role-setting looks like right after this list.)

  • Assume this is the worst AI you will ever use, which is perhaps the most accurate of these AI rules.
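
Since the “tell it what kind of person it is” rule is the one people seem to struggle with the most, here’s a minimal sketch of what that kind of role-setting looks like if you set it up with OpenAI’s Python library instead of in the chat window. To be clear, this is my own illustration and not anything from Mollick’s book; the model name is just a placeholder, and you’d need your own API key for it to actually run.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[
            # The "system" message does the work of Mollick's rule: it tells
            # the AI what kind of person it is and sets up the scenario.
            {"role": "system",
             "content": "You are an experienced teacher speaking to skeptical "
                        "students about the value of group work."},
            # The "user" message then asks the actual question.
            {"role": "user",
             "content": "What advice would you give those students?"},
        ],
    )
    print(response.choices[0].message.content)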

So, in that spirit, here are four more rules about teaching writing and AI— specifically, what teachers can do to discourage students from using AI to cheat.

Of course, I’m far from the first person to come up with four more rules for AI— I’m not even the first person to come up with four more rules for AI and writing! For example, there’s this fine post from Jane Rosenzweig at her site Writing Hacks “Four Rules for Writing in the Age of AI,” and also this guest post at John Warner’s Substack site by high school teacher and writer Brett Vogelsinger, “Artificial Intelligence and Writing: Four Things I Learned Listening to my High School Students.” Both great posts and great thoughts.

The most common concern about AI I read on Facebook (though not so much on Substack) from other professors and teachers is students using it to cheat on writing assignments. So this post isn’t about how to use AI to teach writing— maybe I’ll write more about that when I have a better sense of the answer. This is about how teachers can create an environment that discourages students from cheating with AI. It’s not foolproof. Sometimes, usually when they are desperate to try anything to pass the class, students cheat.

Teach writing as a process; don’t assign writing as a product.

I kicked off my writing about AI in this blog post from December 2022 “AI Can Save Writing by Killing ‘The College Essay.’” It’s the most frequently read post on the old blog. I wrote it in response to two different articles published in The Atlantic at the time arguing that the new ChatGPT had made writing assignments impossible and irrelevant.

Teaching writing as a process has been the mantra in composition and rhetoric since the late 1970s. Scholars debate the details about what this means, but in a nutshell, teaching writing as a process means setting up a series of assignments that begin with pre-writing invention exercises (freewriting and other brainstorming techniques, for example), activities that lead to rough drafts which are shared with other students through peer review. When students hand this work in, the instructor’s feedback is geared toward revision and (hopefully) improvement on future projects. My first year writing course is typical in that it is about research and students complete a research essay project. But long before we get to that assignment, students complete a series of smaller scaffolded assignments that build up to the larger essay. Again, none of this is new and it is how I was taught to teach writing back in the late 1980s when I started as a graduate teaching assistant.

I teach writing this way because there is good evidence that it works better than merely assigning writing. I also think teaching writing as a process deters plagiarism and other forms of cheating (including with AI). I require students to build their research writing projects through a series of smaller and specialized assignments, and to share their work in progress with other students in peer review. It’s awfully hard to fake this. Also, as I wrote back in July, I now make the process more visible by requiring students to complete their essays from beginning through final revisions on a Google Doc they share with me so I can view the document history and see what it is they did to put their writing together.

In contrast, assigned writing projects have always been much easier to cheat on. Before AI, students cheated with the internet, with paper mills, by getting others to do the writing, or (at least according to my father who went to college in the early 1960s) with the library of papers that fraternities kept on hand.

There’s also the issue of the purpose of writing assignments in the first place. Teaching writing as a process is especially important in a course where the subject itself is writing and there is a lot of attention to how students craft their sentences and paragraphs. I realize that’s different from a class where the subject is literature or political science or business administration. But besides the fact that we should teach (not just assign) writing across the curriculum, writing assignments should ask students what they think about something. In research-based courses like freshman comp, students write about the research they did to persuade and inform both me and their classmates about something. It’s one of the reasons why I like teaching this class: my students are always teaching me new things. In my classes that are not as research-based (like Digital Writing), students write and reflect on the assigned readings and other projects of the class in order to share with readers what they think.

Assigned writing tasks tend to seek specific answers based on the content of the course— write about the theme of madness in Hamlet, about the balance of power between the three parts of the federal government, about the key causes of the Great Recession, etc. In evaluating assigned writing, teachers are less interested in what students think and more interested in seeing whether students correctly repeated the content of the course the teacher delivered through lectures, activities, and readings. In other words, assigned writing is an assessment tool, like an exam— and in most cases, it would probably be more effective to just use an exam.

Now, teaching writing as a process is A LOT more work for everyone because it means more reading, more teacher commenting, and more checking in with students’ writing as they progress through these assignments. This is why at the vast majority of colleges in the U.S., first year writing courses have 25 or fewer students. Some colleagues who teach lecture courses with 100 or so students and who also assign papers have asked me how they’re supposed to teach writing as a process in those courses. My answer is I wouldn’t. Instead, I’d rely on short written responses to readings, quizzes, and exams.

Any course assignment that could be completed without being present in that course is a bad assignment.

A lot of the hype around AI is about how great it is at passing tests— LSAT, GRE, SAT, etc. etc.— and how that is supposed to mean something. But besides the issue of whether AI can pass these tests because it “knows” or because the test questions were part of the content used to create the AI, I think we all know this is not how school works. I mean, if on the first day of a course I introduced all the writing assignments, and then a student showed up on the second day and said “I finished everything— can I get my A now?” the answer, obviously, is no.

Which brings me to this second rule: if a teacher gives students an exam or an assignment that could be successfully completed without ever being in the class, then that’s a bad assignment. This is something I never thought about before AI. In the old old days, I don’t think it made much difference. When I went to college in the mid 1980s, if someone could pass an intro to chemistry exam or a history 101 exam without ever attending the class, what’s the problem? They already had enough mastery of the subject to pass the class anyway. That started to end with students doing Google searches to pass exams, and now that AI can answer all those questions in that history 101 class final in real time, it’s completely over.

AI isn’t attending classes with our students (at least not yet), and so it is not as useful for cheating on exams or assignments that have specific connections to the course. That’s easy enough to do in the kinds of courses I teach, though I have to assume this is more complicated in a subject like calculus where the concepts and methods transcend classroom boundaries. But perhaps an even easier way to address this problem is for the teacher to make participation count as part of the grade. As I discussed in this post, my classes have a participation grade component that counts for about 30% of the grade.

AI detection software doesn’t work and it never will.

A lot of teachers want to skip these first two rules and instead just rely on some kind of app that can detect what parts of a student’s paper were written by an AI. Essentially, they want something like the plagiarism detection software Turnitin that many of these teachers have used for years. Though as a quick glance at the Turnitin website reveals, Turnitin is expanding from plagiarism detection to include AI detection as well.

Plagiarism detection software has been a divisive topic in writing studies for years. While I know lots of teachers routinely require their students to run their papers through Turnitin for a plagiarism check, I never have done this because I don’t think it’s necessary and I don’t think Turnitin is as good of a tool as many users seem to think. This is especially true with AI detection. According to Turnitin, the false-positive rate for “fully human-written text” is less than 1%, but its error rate with actual AI writing is up to 20%. And that is just for the very common and very dumb way people use to cheat with AI: writing a simple prompt and copying and pasting the answer with few changes. I have to assume the software is even less effective when the human is using the AI effectively: for brainstorming, proofreading/editing, chatting with it about revision ideas, and so forth.

It’s a futile effort, especially as the AIs improve and as all of us (including our students) learn more about how to use them for not just cheating. Which leads me to my last point:

Teachers at all levels need to learn more about AI.

Colleges and universities are certainly trying. The two talks I did last year about AI were both faculty development events, and the attendance at both was pretty good. I know folks here at EMU have held similar events, and I get the impression this is pretty common at most colleges and universities. And faculty have heard of AI at this point, of course.

The problem is I’m not sure any of the faculty development or the oodles of news stories about AI has resulted in any differences in teaching. This is mostly just based on my own sense of things, but I did informally poll my current students (I have about 70 this semester) the other day about AI in other classes they were taking. A few students mentioned classes where they are using AI for various assignments. A few other students mentioned instructors who expressly forbid the use of AI. I asked these students if they thought the instructor had any way of enforcing that; “no.” But the majority of my students said that the topic has not come up at all. That’s a problem.

I’m not saying every teacher now needs to embrace AI and incorporate it into their teaching. Not at all. Besides experimenting with AI in my teaching, I’ve been doing a lot of writing and reading about AI that is (hopefully) going to turn into a research project. I think my teaching with AI experiments are going well, but I honestly don’t know if this is something I’ll continue to do in the future. I feel the same way about AI generally: it probably is going to “change everything,” but it also might end up being another one of those things (like MOOCs, which was the subject of my last major research project) that never lives up to the hype.

What I am saying though is AI is here now and it looks like it’s going to be (probably) a big deal for some time to come. It is not just going to “go away” and it cannot be ignored. A professor or teacher can continue to refuse to engage with AI for valid ethical or personal reasons, but that is not going to stop everyone else from using it. That includes some of our students who are using AI simplistically to cheat, perhaps by feeding the teacher’s writing assignment into ChatGPT and copying/pasting whatever the AI comes up with. Fortunately, it’s pretty easy to spot that sort of AI use. But what teachers cannot easily recognize or stop is a student who uses AI more in the way that it is really meant to be used: as a tool to help/improve what humans do, not replace it.

So start learning about AI, even if you hate it. Mollick’s book is a good place to start.

A Small Example of the Jagged Frontier and Discovering What ChatGPT Can’t Do

Or, the case of the penless writing logo

One of the things about AI I find most fascinating (and I read about this in other commentaries about ChatGPT and other AIs all the time) is it is really good at lots of things and weirdly bad at lots of other things, and that seems especially true with a lot of things humans do very well. In his book Co-Intelligence: Living and Working with AI, Ethan Mollick describes this as the “jagged technological frontier.” He describes a large study he and some colleagues did (I will probably need to read this and possibly assign it this fall), and in the abstract of that study, they write that AI creates a jagged technological frontier “where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI.” ChatGPT can easily make a pretty good looking (to me) logo, but it cannot separate writing from the word/image/concept of “pen.”

This is going to find its way into my courses this semester; as one of a series of exercises, I want to ask students to discover for themselves something the AI can’t do, and, if possible, make a guess as to why. I guess this post would be my own example of that assignment. I had a “conversation” with ChatGPT the other day to create a logo for the Canvas site for my Digital Writing class this fall. This is using the “not free” version, which allows me to generate lots of images (I think the free version limits users to just two a day). ChatGPT won’t let me just share the conversation, so I’ll reproduce/paraphrase it here.

I started with this prompt:

“Draw me a logo for the top of a website for a college class I’m teaching called digital writing. The class examines writing as a technology. Since the course number is 354, incorporate the number somewhere into the design. The university’s colors are green and white, but it is okay to incorporate other colors if you want.”

ChatGPT made this:

Then I asked it to make a JSON script that describes what this image looks like. I’m going to skip the details of that in this post, in part because I don’t know enough about JSON and in part because that’s not the point I’m trying to make. That’s maybe a future post.
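
That said, just so the idea isn’t completely abstract for readers who (like me) don’t know much about JSON, here’s a made-up sketch of the general shape that kind of description might take. None of this is what ChatGPT actually gave me; every field name and value below is my own invention, written as a little Python script that prints the JSON:

    import json

    # A hypothetical description of a logo, not ChatGPT's actual output.
    logo_description = {
        "title": "Digital Writing 354",
        "colors": {"primary": "green", "secondary": "white"},
        "elements": [
            {"type": "text", "content": "Digital Writing", "style": "bold serif"},
            {"type": "number", "content": "354", "position": "lower right"},
            {"type": "icon", "content": "circuit traces", "style": "line art"},
        ],
    }

    print(json.dumps(logo_description, indent=2))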

Anyway, here’s an image it made with the JSON script:

And here’s another one it made when I asked it to repeat what it did before:

FWIW, this was my favorite and the one I went with— after some minor editing.

Then I wrote: “Now forget the json script. Instead, draw me a logo for the top of a website for a college class I’m teaching called digital writing. The class examines writing as a technology. Since the course number is 354, incorporate the number somewhere into the design. The university’s colors are green and white, but it is okay to incorporate other colors if you want.” Here’s what it drew:

To me, the reproduction of the logo by ChatGPT is getting a little worse in some ways. I mean, “Nourse?”

Then I wrote: “I noticed you added a pen even though that wasn’t in the prompt. There are a lot of other kinds of writing implements and tools. Why did you decide on using a fountain pen?”

ChatGPT responded: “I chose to include a fountain pen because it’s a classic symbol of writing, representing the traditional side of the writing process. The contrast between the fountain pen and digital elements (like circuits or binary code) visually communicates the course’s theme of examining writing as a technology.

“Fountain pens are often seen as timeless and elegant, bridging the gap between old and new, which fits the idea of exploring writing in both traditional and technological contexts. It also has a distinctive and recognizable shape, which makes it effective in a logo.

“If you’d prefer a different writing tool or symbol, I can certainly create a version with an alternative element!”

I wrote: “How do you think it might look if instead of a fountain pen you used a typewriter instead?”

ChatGPT responded: “Using a typewriter in the logo could create a strong visual connection to the history of writing technology, emphasizing the transition from traditional mechanical tools to digital ones. The typewriter could be depicted with keys or as a whole, evoking a sense of retro charm while still aligning with the theme of writing as technology. This could give the logo a more nostalgic feel, yet still relevant to the course’s examination of the evolution of writing tools.

“If you’d like, I can create a version of the logo with a typewriter instead of the fountain pen to see how it looks. Would you like me to do that?”

“Sure, create a version like that” I wrote back. Here’s what it made:

ChatGPT also wrote with this image “Here is the logo with a typewriter element instead of a fountain pen. This version emphasizes the traditional aspect of writing through the typewriter while maintaining a connection to digital technology. If you have any further requests or adjustments, feel free to let me know!” Oh, the mansplaining confidence of ChatGPT, trying to convince me there was no pen anymore!

I responded “That’s interesting, but there is still a pen in the logo. can you eliminate that and the three smaller versions of the logo on the left side and in the right corner?” It produced this:

And ChatGPT wrote “Here is the revised logo, with the pen removed and only the typewriter elements included. The smaller versions of the logo have also been eliminated. Let me know if this version meets your needs or if you would like any further adjustments!” Again, it seems to be glitching more on some of the words and such.

I responded “Nope, the pen is still there and there are still three smaller logos, this time across the bottom of the page. Can you get rid of the pen and the smaller logos?” ChatGPT made this:

I find the inclusion of a photo-realistic pen/mechanical pencil particularly amusing, like ChatGPT is just trying to sneak it in there. You can also see where it’s getting more confused with “W- – +—33ING” or whatever that is. I responded “The small logos are gone, but there is still a writing implement– a pen or a mechanical pencil, I can’t tell. Also you the writing for digital writing and university at the bottom of the logo shouldn’t be there. Can you eliminate these things?” It made this:

It also wrote back “This version removes the pen or pencil, as well as the additional text at the bottom of the logo. The design now focuses entirely on the typewriter and digital elements, with the course number 354 subtly incorporated. If this meets your expectations or if further adjustments are needed, just let me know!” It’s almost as if it is admitting that yeah, the pencil/pen is still there, but at least the typewriter is what is being emphasized.

Here’s the punchline for me. I asked again “Can you delete the pencil, the pen, and the word University from this logo?” ChatGPT’s response surprised me:

“It seems that the tools available to me do not allow direct editing of existing images to remove specific elements like the pencil, pen, or text. Instead, I can generate a new logo without those elements, as I attempted to do in the previous steps. However, if those elements persist in the new generation, it’s due to the constraints of the image generation process.

“If you have access to image editing software, you might be able to manually remove the undesired elements. Alternatively, I can keep trying to refine the prompt to produce a version that meets your specifications. Would you like me to attempt another version or provide instructions for editing the image manually?”

I asked “Can you tell me more in simple layman terms what you mean by the constraints of the image generation process?”

ChatGPT answered “Sure! When I generate images, I’m using a process that interprets a text description to create a visual design. The tool I use is powerful but not perfect—it tries to match the description as closely as possible, but sometimes it includes extra elements or doesn’t fully understand specific requests, like leaving out certain objects or text.

“These ‘constraints’ mean that even when I specify not to include a pen or extra text, the tool might still add them because it’s interpreting the description in a way that makes sense to it, based on patterns it has learned from millions of images.

“So, while I can describe what you want very precisely, the image might still include some unwanted elements because the generation process doesn’t always get it exactly right. This is why there might be some back-and-forth to refine the result, but it’s not always guaranteed to be perfect due to these limitations.”

If I am understanding this correctly, I think what I am running into here is a bit of that jagged frontier and the limits of the AI’s training data. Essentially, I think ChatGPT’s association between images of writing and pens/pencils is so strong that it can’t make a picture of writing that does not include a pen or a pencil.

When you think about it for a second, this isn’t too surprising. Try doing a Google image search for just the word “writing” to see what I mean. Here’s a screenshot of what I came up with from such a search:

It’s no wonder that ChatGPT thinks that the idea/word/concept “writing” goes with pen or pencil, because the rest of the data it’s been trained on does the same thing. Which is to say, I think when you ask a human to create an image of the process of writing, they default to a pen, too.

Though I will say ChatGPT’s advice for me, the human in this loop, was probably the most practical: why don’t you just use some image editing software and fix it yourself?

Why Teaching Citation Practices (yes, I’m talking MLA/APA style) is Even More Important with AI

A couple weeks ago, I wrote about why I use Google docs to teach writing at all levels. I’ve been using it for years–long before AI was a thing–in part because being able to see the history of a student’s Google doc is a teachable moment on the importance of the writing and revision process. This also has the added bonus of making it obvious if a student is skipping that work (by using AI, by copying/pasting from the internet, by stealing a paper from someone else, etc.) because the document history goes from nothing to a complete document in one step. I’m not saying that automatically means the student cheated, but it does prompt me to have a chat with that student.

In a similar vein and while I’m thinking about putting together my classes for the fall term, I thought I’d write about why I think teaching citation practices is increasingly important in research writing courses, particularly first year composition.

TL;DR version: None of this is new or innovative; rather, this is standard “teaching writing as a process” pedagogy and I’ve been teaching research writing like this for decades. But I do think it is even more important to teach citation skills now to help my students distinguish between the different types of sources, almost all of which are digital rather than on paper. Plus this is an assignment where AI might help, but I don’t think it’d help much.

Why I Use Google Docs to Teach Writing, Especially in the Age of AI

I follow a couple different Facebook groups about AI, each of which has become a firehose of posts lately, a mix of cool new things and brand new freakouts. A while back, someone in one of these groups posted about an app to track the writing process in a student’s document as a way of proving that the text was not AI. My response to this was “why not just use Google docs?”

I wish I could be more specific than this, but I can’t find the original post or my comment on it; maybe it was deleted. Anyway, this person asked what I meant, and I explained it briefly, but then I said I was thinking about writing a blog post about it. Here is that post.

For those interested in the tl;dr version: I think the best way to discourage students from handing in work they didn’t create (be that from a papermill, something copied and pasted from websites, or AI) is to teach writing rather than merely assigning writing. That’s not “my” idea; that’s been the mantra in writing studies for at least 50 years. Also not a new idea and one you already know if you use and/or teach with Google docs: it is a great tool for teaching writing because it helps with peer review and collaborative writing, and the version history feature helps me see a student’s writing process, from the beginning of the draft through revisions. And if a student’s draft goes from nothing to complete in one revision, well, then that student and I have a chat.

Continue reading “Why I Use Google Docs to Teach Writing, Especially in the Age of AI”

No, Student Writing Is Not Dead (or how AI faculty freakout is back)

Now that the 2023-24 school year is long over and my wife and I are (mostly) done moving into our new house, it’s time to start thinking again about AI for teaching in the fall and for some scholarly things beyond that. I’ve been mostly ignoring these things for the last couple of months, but even in that short time, it feels like things have changed. AI tech is quickly being integrated into everything you can imagine, and it feels to me like the AI faculty freakout factor is on the rise once again.

This is just a gut feeling– like I said, I’ve been out of the loop and it’s not like I’ve done any research on this. But the current moment reminds me a bit of late 2022/early 2023 when ChatGPT first appeared. By the time I did a talk about AI at Hope College in late April 2023 and then a talk/workshop about AI (over Zoom) at Washtenaw Community College in October 2023, teachers had settled down a bit. Yes, faculty were still worried about cheating and the other implications, but I think most of the folks who attended these events had already learned more about AI and had started to figure out both how to use it as a tool to help their teaching and how to change some of their assignments because of it.

But now the freakout is back. Perhaps it’s because more faculty are starting to realize that “this whole AI thing” is something they’re going to have to deal with after all. And as far as I can tell, a lot of the freaked out faculty are in the humanities in general and in English in particular. I suppose this is because we teach a lot of general education classes and classes that involve a lot of writing and reading. But I also think the freakout is high in fields like English because a lot of my department and discipline colleagues describe themselves as being “not really into technology.”

The primary freakout then and now– at least among faculty in the humanities (I assume STEM faculty have different freakout issues)– is that AI makes it impossible to teach writing in college and in high school because it is too easy for students to have ChatGPT (or whatever other AI) do the work for them. I wrote a post in response to these articles back in December 2022, but there were dozens of freakout articles like these two. These articles almost always assume that AI has uniquely enabled students to cheat on assignments (as if paper mills and copying and pasting from “the internet” hadn’t existed for decades), and that, given the chance, students will always cheat. So the only possible solution is to fight AI with things like detection software or a return to handwritten exams.

It’s deja vu all over again.

Consider, for example, Lisa Lieberman’s June 2024 Chronicle of Higher Education article “AI and the Death of Student Writing.” Lieberman, who teaches community college English and composition courses “in California’s Central Valley,” has seen an alarming uptick in students using AI to write their papers. She gives an example of a student’s essay about The Shining that included the sentence “A complex depiction of Jack’s development from a struggling family guy to a vessel of lunacy and malevolence is made possible by Stanley Kubrick’s brilliant direction.” Lieberman writes “I called the student in and asked him to write a sentence with the word ‘depiction.’ He admitted he didn’t know what ‘depiction’ meant, much less how to spell it, much less how to use it in a sentence. He confessed he hadn’t written a single word of the essay.” (For what it’s worth, I would have asked this student about “malevolence”).

Then she moves on to discussing a student writing her essay with the now AI-fueled version of Grammarly. Lieberman “discovered it’s a multilayered computer program that does everything from simple spelling and grammatical corrections to rewriting entire sentences, adjusting tone and fluency.” She estimated that at least a third of her students were consistently using AI: “Once they believed they could turn in AI assignments undetected, they got bolder … and used AI for every single assignment.”

It’s all just so wrong, Lieberman laments, in part because her students are just cheating themselves by using AI. Here’s a long quote from the end of the article:

I remember my days at Berkeley, where, as an English major, I’d take my copy of Wallace Stevens’s The Palm at the End of the Mind, or Chaucer’s “The Wife of Bath’s Tale,” and pick a nice, sunny spot on campus on a grassy knoll underneath a tree, lay out my blanket, and spend the afternoon reading and scribbling notes in my books. It was just me and my books and my thoughts. There was nothing better.

As I lay there reading the writer’s words, they came to life — as if the author were whispering in my ear. And when I scribbled my notes, and wrote my essays, I was talking back to the author. It was a special and deep relationship — between reader and writer. It felt like magic.

This is the kind of magic so many college students will never feel. They’ll never feel the sun on their faces as they lie in the grass, reading words from writers hundreds of years ago. They won’t know the excitement and joy of truly interacting with texts one-on-one and coming up with new ideas all by themselves, without the aid of a computer. They will have no idea what they’re missing.

I understand the anxiety that Lieberman is expressing, and I completely agree that AI technology is forcing us to change how we teach college classes– and, in particular, classes where students are expected to read and to write about that reading.

However:

  • Students have been cheating in school for as long as there has been school. AI makes it easier (and more fun!) to cheat, but none of this is new. So any educator who thinks that students have only started cheating on the things they assign because of AI is kidding themselves.
  • In my experience, the vast majority of students do not want to cheat this much. Oh sure, they might cheat by sloppily borrowing a quote from a website, or by looking over someone’s shoulder to get an answer on a multiple choice quiz. But in my view, these are misdemeanor offenses at best. Also, when students do not cite sources properly (and this is as true for the MA students I work with as it is for the first year writing students), it’s because they don’t know how. In other words, a lot of plagiarism is a teachable moment.
  • Also in my experience, students who do blatantly cheat by downloading from a papermill or prompting an AI to do the whole assignment are a) already failing and desperate, and b) not exactly “criminal masterminds.” Every freakout narrative I’ve read– including Lieberman’s– includes a “scene” where the instructor confronts the student with the obvious AI cheating. So to me, if it’s this easy to catch students who cheat using AI, what’s the problem? Just punish these students and be done with it.
  • The fundamentals of teaching writing as a process– the mantra of writing studies for the last 50+ years– are still the same and the best way to discourage students from cheating with AI or anything else. Don’t merely assign writing– teach it. Make students show their work through drafts. Use a series of shorter assignments that build to a larger and more complex writing project. In a research-oriented writing class (like first year composition, for example), require students to create an annotated bibliography of all of their sources. Have peer review as a required part of the process. Etc., etc., etc. None of this is foolproof and for all I know, Lieberman is already doing this. But besides actually helping students to become better writers, teaching (rather than just assigning) writing like this makes cheating as much work as just doing the assignments.
  • I think the best way to dissuade students from using AI to cheat is to explain to them why it’s a bad idea. Last year, I opened all of my classes with a discussion of the basics of AI: why it might be useful for some things (see my next bullet) and why it is not useful for cheating, especially in classes that involve research and where writing is taught as a process (see my previous bullet). I think that by making it clear from the beginning that yes, I too knew about AI and why cheating with it wasn’t a good idea, fewer students were tempted to try it in my classes.
  • I don’t think there’s anything wrong with Grammarly. At EMU, I often get letters of accommodation from the disability office for students enrolled in my classes, letters that tell me how I am supposed to “accommodate” the student. Usually that means more time to take exams or more flexibility with deadlines, but often these letters say I should allow the student to use Grammarly.

My philosophy on this has always been that it is a good idea for students to seek help with their writing assignments from outside the class– help that assists, not help that does the work for the student. I always encourage students– especially the ones who are struggling– to get help from a writing center consultant/tutor, a trusted friend or parent, and so forth. I think Grammarly– when used properly– falls into that category. I don’t think asking Grammarly to write the whole thing counts as “proper use.” I want students to proofread what they wrote to make sure that the mechanics of their writing are as clear and “correct” as possible, and if Grammarly or an AI or another electronic tool can help with that, I’m all for it.

I think Lieberman’s objection to Grammarly is that it makes writing mechanically correct prose too easy, and that the only way for students to learn this stuff is to make them do it “by hand.” As someone who relies heavily on a calculator for anything beyond basic arithmetic, and who also relies on Google Docs’ spell checking and grammar checking features, I do not understand this mindset. Since she’s teaching in a community college setting, I suppose Lieberman might be working more with “basic writing” students, and I could see more of an argument there for getting students to master the basics before relying on Grammarly. But for me, even in classes like first year writing, I want to focus mostly on the arguments my students are making and how they are using evidence to support their points. So if a student gets some help with the mechanics from some combination of a writing center consultant and an application like Grammarly, then I can focus more exclusively on the interesting parts.

Where Lieberman and I might agree, though, is that if a student doesn’t have basic competency with writing mechanics, Grammarly is not going to solve the problem. It’s a lot like the mistakes students still make with there/their/they’re even when they take the time to spell check everything. And again, that’s why it is so easy to detect AI cheating: the vast majority of students I have had who have tried to cheat with AI have done it poorly.

  • Finally, about students missing “the magic” of reading and writing, especially while doing something clichéd and idealistic like lying on a blanket on the campus lawn under an impressive oak. I get it, and that’s part of why I went into this line of work myself. But this is the classic mistake so many teachers make: just because the teacher believes reading and writing are magical doesn’t mean the students will. In fact, in required gen ed classes like first year writing or intro to literature, many (sometimes most) of the students really do not want to take those courses at all. I can assign students to read a book or essay that I think is great, or I can encourage students to keep writing on their own and not just for school, and sometimes I do have students who discover “the magic,” so to speak. But honestly, if the majority of my first year writing students come away at the end of the semester thinking that the experience did not “totally suck,” I’m happy.

So no, this is not the end of student writing.

TALIA? This is Not the AI Grading App I Was Searching For

(My friend Bill Hart-Davidson unexpectedly died last week. At some point, I’ll probably write more about Bill here. In the meantime, I thought I’d finish this post I started a while ago about the webinar on Instructify’s AI grading app. Bill and I had been texting/talking more about AI lately, and I wish I’d had a chance to text/talk with him about this. Or anything else).

In March 2023, I wrote a blog post titled “What Would an AI Grading App Look Like?” I was inspired by what I still think is one of the best episodes of South Park I have seen in years, “Deep Learning.” Follow this link for a detailed summary or look at my post from last year, but in a nutshell, the kids start using ChatGPT to write a paper assignment and Mr. Garrison figures out how to use ChatGPT to grade those papers. Hijinks ensue.

Well, about a month ago, at a time when I was up to my eyeballs in grading, I saw a webinar presentation from Instructify about their AI product called TALIA. The title of the webinar was “How To Save Dozens of Hours Grading Essays Using AI.” I missed the live event, but I watched the recording– and you can too, if you want– or at least you could when I started writing this. Much more about it after the break, but the tl;dr version is that this AI grading tool is not the one I am looking for (not surprisingly), and I think it would be a good idea for these tech startups to include people with actual experience teaching writing on their development teams.

Continue reading “TALIA? This is Not the AI Grading App I Was Searching For”