An indirect but positive review of Mollick’s “Co-Intelligence”
This semester, I’m teaching two sections of first year writing (aka freshman comp) and an advanced writing course called Digital Writing, and both have AI elements and themes. In first year writing, the research theme is “Your Career and AI.” In the Digital Writing course, the last two writing projects are going to be waist-deep in writing with AI. Maybe one day I will better understand/make use of Substack’s newsletter function to chronicle these classes in more detail, but that’s later.
For Digital Writing, we’re reading and discussing Ethan Mollick’s Co-Intelligence: Living and Working with AI. If you’re reading posts like this because you too are trying to make sense of what AI is about, there’s a good chance you’ve already heard of Mollick’s book and his Substack, One Useful Thing. If you haven’t heard of Mollick and you want to know more about AI but you’re overwhelmed by the firehose of news and information, then his book is for you. Co-Intelligence is a well-written, accessible, and thoroughly researched 30,000-foot overview in less than 250 pages printed in a big font. It’s enough to get the “AI curious” up to speed on the current state of things (it was published in April 2024), while also pointing readers to ideas for further reading and research.
Mollick is a business professor at the Wharton School at the University of Pennsylvania, so he is primarily interested in how AI will impact productivity and innovation. I think we conceptualize teaching a bit differently, and like everything I’ve read about AI, Mollick is making some claims I doubt. But we’re mostly on the same page.
One of the most cited/mentioned chapters in Mollick’s book is “Four Rules for Co-Intelligence.” In brief, those rules are:
- Always invite AI to the table, meaning you have to experiment and try to use AI (or really, different platforms, so AIs) for lots and lots of different things in order to discover what it/they can and can’t do.
- Be the human in the loop: it’s a bad idea to completely turn over a task to AI, both because AI makes a lot of mistakes (aka hallucinations) and because humans ought to be in charge rather than the other way around.
- Treat AI like a person (but tell it what kind of person it is). AI doesn’t behave the same way as other computer applications, so Mollick says we need to be conversational with it as if it were a human. Mostly he’s talking about creating context and scenarios in AI prompts, as in “You are an experienced teacher speaking to skeptical students about the value of group work. What advice would you give those students?”
- Assume this is the worst AI you will ever use, which is perhaps the most accurate of these AI rules.
So, in that spirit, here are four more rules about teaching writing and AI— specifically, what teachers can do to discourage students from using AI to cheat.
Of course, I’m far from the first person to come up with four more rules for AI— I’m not even the first person to come up with four more rules for AI and writing! For example, there’s this fine post from Jane Rosenzweig at her site Writing Hacks “Four Rules for Writing in the Age of AI,” and also this guest post at John Warner’s Substack site by high school teacher and writer Brett Vogelsinger, “Artificial Intelligence and Writing: Four Things I Learned Listening to my High School Students.” Both great posts and great thoughts.
The most common concern about AI I read on Facebook (though not so much on Substack) from other professors and teachers is students using it to cheat on writing assignments. So this post isn’t about how to use AI to teach writing— maybe I’ll write more about that when I have a better sense of the answer. This is about how teachers can create an environment that discourages students from cheating with AI. It’s not foolproof. Sometimes students cheat anyway, usually when they are desperate enough to try anything to pass the class.
Teach writing as a process; don’t assign writing as a product.
I kicked off my writing about AI in this blog post from December 2022 “AI Can Save Writing by Killing ‘The College Essay.’” It’s the most frequently read post on the old blog. I wrote it in response to two different articles published in The Atlantic at the time arguing that the new ChatGPT had made writing assignments impossible and irrelevant.
Teaching writing as a process has been the mantra in composition and rhetoric since the late 1970s. Scholars debate the details about what this means, but in a nutshell, teaching writing as a process means setting up a series of assignments that begin with pre-writing invention exercises (freewriting and other brainstorming techniques, for example), activities that lead to rough drafts which are shared with other students through peer review. When students hand this work in, the instructor’s feedback is geared toward revision and (hopefully) improvement on future projects. My first year writing course is typical in that it is about research and students complete a research essay project. But long before we get to that assignment, students complete a series of smaller scaffolded assignments that build up to the larger essay. Again, none of this is new and it is how I was taught to teach writing back in the late 1980s when I started as a graduate teaching assistant.
I teach writing this way because there is good evidence that it works better than merely assigning writing. I also think teaching writing as a process deters plagiarism and other forms of cheating (including with AI). I require students to build their research writing projects through a series of smaller and specialized assignments, and to share their work in progress with other students in peer review. It’s awfully hard to fake this. Also, as I wrote back in July, I now make the process more visible by requiring students to complete their essays from beginning through final revisions on a Google Doc they share with me so I can view the document history and see what it is they did to put their writing together.
In contrast, assigned writing projects have always been much easier to cheat on. Before AI, students cheated with the internet, with paper mills, by getting others to do the writing, or (at least according to my father, who went to college in the early 1960s) with the library of papers that fraternities kept on hand.
There’s also the issue of the purpose of writing assignments in the first place. Teaching writing as a process is especially important in a course where the subject itself is writing and there is a lot of attention to how students craft their sentences and paragraphs. I realize that’s different from a class where the subject is literature or political science or business administration. But besides the fact that we should teach (not just assign) writing across the curriculum, writing assignments should ask students what they think about something. In research-based courses like freshman comp, students write about the research they did to persuade and inform both me and their classmates about something. It’s one of the reasons why I like teaching this class: my students are always teaching me new things. In my classes that are not as research-based (like Digital Writing), students write and reflect on the assigned readings and other projects of the class in order to share with readers what they think.
Assigned writing tasks tend to seek specific answers based on the content of the course— write about the theme of madness in Hamlet, about the balance of power between the three branches of the federal government, about the key causes of the Great Recession, etc. In evaluating assigned writing, teachers are less interested in what students think and more interested in seeing if students correctly repeated the content of the course the teacher delivered through lectures, activities, and readings. In other words, assigned writing is an assessment tool, like an exam— and in most cases, it probably would be more effective to use an exam.
Now, teaching writing as a process is A LOT more work for everyone because it means more reading, more teacher commenting, and more checking in with students’ writing as they progress through these assignments. This is why, at the vast majority of colleges in the U.S., first year writing courses have 25 or fewer students. Colleagues who teach lecture courses of 100 or so students and who also assign papers have asked me how they’re supposed to teach writing as a process in those courses. My answer is that I wouldn’t. Instead, I’d rely on short written responses to readings, quizzes, and exams.
Any course assignment that could be completed without being present in that course is a bad assignment.
A lot of the hype around AI is about how great it is at passing tests— LSAT, GRE, SAT, etc. etc.— and how that is supposed to mean something. But besides the issue of whether AI can pass these tests because it “knows” or because the test questions were part of the content used to create the AI, I think we all know this is not how school works. I mean, if on the first day of a course I introduced all the writing assignments, and then a student showed up on the second day and said “I finished everything— can I get my A now?” the answer, obviously, is no.
Which brings me to this second rule: if a teacher gives students an exam or an assignment that could be successfully completed without ever being in the class, then that’s a bad assignment. This is something I never thought about before AI. In the old old days, I don’t think it made much difference. When I went to college in the mid 1980s, if someone could pass an intro to chemistry exam or a history 101 exam without ever attending the class, what’s the problem? They already had enough mastery of the subject to pass the class anyway. That started to end with students doing Google searches to pass exams, and now that AI can answer all those questions in that history 101 class final in real time, it’s completely over.
AI isn’t attending classes with our students (at least not yet), and so it is not as useful for cheating on exams or assignments that have specific connections to the course. That’s easy enough to do in the kinds of courses I teach, though I have to assume this is more complicated in a subject like calculus where the concepts and methods transcend classroom boundaries. But perhaps an even easier way to address this problem is for the teacher to make participation count as part of the grade. As I discussed in this post, my classes have a participation component that counts for about 30% of the grade.
AI detection software doesn’t work and it never will.
A lot of teachers want to skip these first two rules and instead just rely on some kind of app that can detect what parts of a student’s paper were written by an AI. Essentially, they want something like the plagiarism detection software Turnitin, which many of these teachers have used for years. Though as a quick glance at the Turnitin website reveals, the company is now offering AI detection alongside plagiarism detection.
Plagiarism detection software has been a divisive topic in writing studies for years. While I know lots of teachers routinely require their students to run their papers through Turnitin for a plagiarism check, I never have done this because I don’t think it’s necessary and I don’t think Turnitin is as good a tool as many users seem to think. This is especially true with AI detection. According to Turnitin, the false-positive rate for “fully human-written text” is less than 1%, but the software may miss up to 20% of AI writing. And that is just for the very common and very dumb way people cheat with AI: writing a simple prompt and copying and pasting the answer with few changes. I have to assume detection becomes even less effective when the human is using the AI effectively: for brainstorming, proofreading/editing, chatting with it about revision ideas, and so forth.
It’s a futile effort, especially as the AIs improve and as all of us (including our students) learn more about how to use them for more than just cheating. Which leads me to my last point:
Teachers at all levels need to learn more about AI.
Colleges and universities are certainly trying. The two talks I gave last year about AI were both faculty development events, and the attendance at both was pretty good. I know folks here at EMU have held similar events, and I get the impression this is pretty common at most colleges and universities. And faculty have heard of AI at this point, of course.
The problem is I’m not sure any of the faculty development or the oodles of news stories about AI have resulted in any differences in teaching. This is mostly just based on my own sense of things, but I did informally poll my current students (I have about 70 this semester) the other day about AI in other classes they were taking. A few students mentioned classes where they are using AI for various assignments. A few other students mentioned instructors who expressly forbid the use of AI. I asked these students if they thought the instructor had any way of enforcing that; the answer was “no.” But the majority of my students said that the topic has not come up at all. That’s a problem.
I’m not saying every teacher now needs to embrace AI and incorporate it into their teaching. Not at all. Besides experimenting with AI in my teaching, I’ve been doing a lot of writing and reading about AI that is (hopefully) going to turn into a research project. I think my teaching with AI experiments are going well, but I honestly don’t know if this is something I’ll continue to do in the future. I feel the same way about AI generally: it probably is going to “change everything,” but it also might end up being another one of those things (like MOOCs, which was the subject of my last major research project) that never lives up to the hype.
What I am saying though is AI is here now and it looks like it’s going to be (probably) a big deal for some time to come. It is not just going to “go away” and it cannot be ignored. A professor or teacher can continue to refuse to engage with AI for valid ethical or personal reasons, but that is not going to stop everyone else from using it. That includes some of our students who are using AI simplistically to cheat, perhaps by feeding the teacher’s writing assignment into ChatGPT and copying/pasting whatever the AI comes up with. Fortunately, it’s pretty easy to spot that sort of AI use. But what teachers cannot easily recognize or stop is a student who uses AI more in the way that it is really meant to be used: as a tool to help/improve what humans do, not replace it.
So start learning about AI, even if you hate it. Mollick’s book is a good place to start.