Grading Participation Helps Students

And the first step to participating is attendance

In my new (mis)adventures on Substack, I stumbled across “Grading attendance hurts students” from Jayme Dyer in the Threads feed. Dyer teaches biology and based on my very brief browsing of her site (stack? sub? newsletter? what the hell is this called again?), I am pretty sure we’d agree about most things.

But no, not this. In my experience, attendance (participation, actually) needs to be a part of a student’s grade.

Dyer’s argument against grading attendance is based on compassion for students and their complicated lives. I get that, and I hear a lot of similar things from many of my fellow writing teachers as well. I teach at an opportunity-granting institution where my students are probably similar to hers (she teaches at a community college). We do have “traditional” students who are 18-21 and living on or near campus, the kind of student more typical at a place like U of Michigan (which is about 7 miles away from where I work, EMU). But we also have students who commute, some from quite a distance away, and that creates a variety of attendance problems. We also have a lot of students who have significant work and family obligations— and that isn’t just the older returning students, either.

Dyer mentions a “secret project” she’s working on that includes reviewing syllabi from dozens of other gen ed biology classes, and she highlights a couple of draconian policies where missing two or three classes could drop a student’s grade by a full letter. That seems crazy and unreasonable to me too.

That said, I don’t think it takes a lot of research for me to claim that students who miss too much class tend to fail. Sure, teachers need to have some compassion and understanding, and they need to remember students have lives where stuff happens sometimes. But to me, a reasonable attendance policy is just like all kinds of rules and laws for things people should do anyway, even if it is arguably a “personal choice.” Take seat belt laws, for example. I’m old enough to remember riding in a car and learning to drive myself before seat belt laws, and I rarely bothered to buckle up. The law requiring it (and the possible ticket, of course) gave me and many other drivers the nudge we needed.

At Eastern, legend has it that the Board of Regents once passed a policy that declared no student could fail a class based on attendance alone. I’ve never found evidence that this policy exists (though I haven’t looked very hard), but whatever. I don’t grade students on attendance; I grade students’ participation, and the first thing a student needs to do to successfully participate is to show up.

Now, Dyer and I are working in different disciplines. I teach writing, and all of the classes I teach have 25 or fewer students. It’s obviously easier to take attendance with 25 students than in a lecture hall with 250, and it’s a lot easier for students in a small class to understand why they need to show up. I have no idea how many students Dyer is working with in her courses, but since she teaches biology, I assume she has more than I do.

I also think we have different assumptions about what class meetings are for. Dyer writes:

Think about it this way – if a student misses a class, makes up what they missed and performs well on the assessment, should their grade really be lower than a student who attended class and performed equally as well on the assessment?

She seems to assume that the point of a class meeting is for an instructor to deliver content to students, and that the measurement of a student’s success in the course is an exam. And I get that— as far as I can tell, this has been the STEM assumption about pedagogy and assessment forever.

In the courses I teach (and I think this is true in most courses in the humanities), we value the stuff students do in these class meetings. The new-ish innovation of the “flipped classroom” is how most people I know have been teaching writing forever. My courses involve a lot of discussion of readings, discussion and brainstorming about the writing assignments, and peer review of those assignments. So “being there” is part of the process, and there’s no way to cram for an exam at the end of the semester to make up for not being there.

The other thing is that now that we have AIs that can easily answer any question that might pop up on a gen ed intro to biology exam, it seems to me that this approach to assessing students’ success is going to have to change, and change very soon. One of the many things AI has made me rethink about teaching and learning is that if someone can successfully complete an assignment without attending the course, then that’s not a very good assignment. But that’s a slightly different conversation for a different time.

Anyway, here’s what I do:

Participation in my classes is 30% of the overall grade, and it includes activities like reading responses, small group work, and peer reviews. I don’t have a good way of keeping track of the details of these things in f2f classes, so to figure out a grade for participation, I have students email me to say what grade they think they have earned, I respond, and then I base the grade on that. I think this is a surprisingly accurate and effective way of doing this, but that too might be a different post.

Students can’t participate if they aren’t there, so I tell my students they shouldn’t miss any class at all. However, the reality is there are of course legitimate reasons why students have to miss. So my policy is that students can miss up to four class meetings— or the equivalent of two weeks in a 15-week semester— for any reason whatsoever. Students can always tell me why they need to miss class, but that’s up to them, and I do not ask for any sort of “note” from someone.

Students who miss five classes fail— or at least they usually fail. Since the age of Covid, I have lightened up on this a bit and I’ve made a handful of exceptions for a few students. I also recently started giving students with perfect attendance a very small bonus, often enough to make a half-letter grade difference.

I’ve had a version of a policy like this for my entire teaching career, and I am comfortable asserting that students who miss more than two weeks of a 15-week semester essentially fail themselves anyway. These students aren’t just absent a lot; they also don’t turn stuff in. So just like seat belt laws incentivized wearing a seat belt (and undoubtedly saved countless people), an attendance policy incentivizes the positive behavior of showing up. And I guarantee you that I have had students in classes who grumbled about being required to show up who would have otherwise failed themselves.

Why Teaching Citation Practices (yes, I’m talking MLA/APA style) is Even More Important with AI

A couple weeks ago, I wrote about why I use Google docs to teach writing at all levels. I’ve been using it for years–long before AI was a thing–in part because being able to see the history of a student’s Google doc is a teachable moment on the importance of the writing and revision process. This also has the added bonus of making it obvious if a student is skipping that work (by using AI, by copying/pasting from the internet, by stealing a paper from someone else, etc.) because the document history goes from nothing to a complete document in one step. I’m not saying that automatically means the student cheated, but it does prompt me to have a chat with that student.

In a similar vein and while I’m thinking about putting together my classes for the fall term, I thought I’d write about why I think teaching citation practices is increasingly important in research writing courses, particularly first year composition.

TL;DR version: None of this is new or innovative; rather, this is standard “teaching writing as a process” pedagogy and I’ve been teaching research writing like this for decades. But I do think it is even more important to teach citation skills now to help my students distinguish between the different types of sources, almost all of which are digital rather than on paper. Plus this is an assignment where AI might help, but I don’t think it’d help much.


Why I Use Google Docs to Teach Writing, Especially in the Age of AI

I follow a couple different Facebook groups about AI, each of which has become a firehose of posts lately, a mix of cool new things and brand new freakouts. A while back, someone in one of these groups posted about an app to track the writing process in a student’s document as a way of proving that the text was not AI. My response to this was “why not just use Google docs?”

I wish I could be more specific than this, but I can’t find the original post or my comment on it; maybe it was deleted. Anyway, this person asked what I meant, and I explained it briefly, but then I said I was thinking about writing a blog post about it. Here is that post.

For those interested in the tl;dr version: I think the best way to discourage students from handing in work they didn’t create (be that from a paper mill, something copied and pasted from websites, or AI) is to teach writing rather than merely assigning writing. That’s not “my” idea; that’s been the mantra in writing studies for at least 50 years. Here’s another idea that’s not new, and one you already know if you use and/or teach with Google docs: it is a great tool for teaching writing because it helps with peer review and collaborative writing, and the version history feature helps me see a student’s writing process, from the beginning of the draft through revisions. And if a student’s draft goes from nothing to complete in one revision, well, then that student and I have a chat.


TALIA? This is Not the AI Grading App I Was Searching For

(My friend Bill Hart-Davidson unexpectedly died last week. At some point, I’ll write more about Bill here, probably. In the meantime, I thought I’d finish this post I started a while ago about the webinar for Instructify’s AI grading app. Bill and I had been texting/talking more about AI lately, and I wish I would have had a chance to text/talk with him about this. Or anything else.)

In March 2023, I wrote a blog post titled “What Would an AI Grading App Look Like?” I was inspired by what I still think is one of the best episodes of South Park I have seen in years, “Deep Learning.” Follow this link for a detailed summary or look at my post from last year, but in a nutshell, the kids start using ChatGPT to write a paper assignment and Mr. Garrison figures out how to use ChatGPT to grade those papers. Hijinks ensue.

Well, about a month ago and at a time when I was up to my eyeballs in grading, I saw a webinar presentation from Instructify about their AI product called TALIA. The title of the webinar was “How To Save Dozens of Hours Grading Essays Using AI.” I missed the live event, but I watched the recording– and you can too, if you want— or at least you could when I started writing this. Much more about it after the break, but the tl;dr version is that this AI grading tool is not the one I am looking for (not surprisingly), and I think it would be a good idea for these tech startups to include people with actual experience teaching writing on their development teams.


Workin’ 9 to 5 (sort of) and Other Adventures of All FY Writing/All the Time!

As I blogged about earlier this year, I’m doing something this semester that I have never done as a tenure-track professor: I’m teaching a full load (three sections) of first year writing. I’ve had semesters where I’ve taught multiple sections of the same class, but I think the last time I did that was in the early 2000s, when I taught two sections of a 300-level course while also having a course release to do quasi-administrative work. As I explained earlier, my current schedule is a fluke based on the circumstances this semester, and I jumped at the chance to just teach first year writing. In other words, this was my idea: I wanted to have one prep for a change of pace, and I also like to teach first year writing.

(Incidentally, when I was hired at EMU in 1998, my primary teaching assignments were an earlier version of this 300-level course and a graduate course on teaching with computers. Times and curriculums have changed: I haven’t taught that 300-level class in eight years, or that grad course in at least 15 years, maybe more.)

Having only one course to prepare– as opposed to three different classes– has been nice, and it’s especially nice that it’s first year composition, a course I have taught so regularly for so long that at this point I literally teach it in my dreams. I’ve been able to keep all three sections on the same schedule, so with a bit of tweaking and customization for each section, it still is one prep. And not surprisingly, one prep is easier than three.

The downsides? Well, all three sections are f2f (as is the case with all of the first year writing courses at EMU), and all three are on Tuesdays and Thursdays. Now, I haven’t taught three f2f classes since I started teaching online for part of my load, and that was almost 20 years ago. I also haven’t taught this early in a while (my first section is at 9:30 in the morning), and I haven’t taught back-to-back sections with no break between them in a long time either. So on Tuesdays and Thursdays, I am in the office by 9 am and working pretty steadily until I’m done at 5 pm.

Because those days end up being nothing but teaching and preparing for teaching, I have also had to come into the office a lot more on other days during the week. I ran into an especially intense stretch in late January/early February when I had conferences with all 60 (or so) of my students– along with having a bunch of other “life” appointments and family stuff. I was on campus and mostly in my office for just about two weeks back then, and almost all day each of those days.

I realize this isn’t a work schedule most people would complain about– and I’m not complaining, at least not exactly. It’s just a very different rhythm from teaching a mix of f2f and online. The upside of teaching a mix of f2f and online is that it gives me a lot more scheduling flexibility for when I do things. I do most of my online teaching while at home and in pajamas or sweats, plus I can take a break once in a while to do laundry or something else that needs to be done around the house.

But if I’m not disciplined about scheduling when I do the work– planning, grading, and interacting with the class discussion boards– teaching asynchronously online can become an all day/all night thing where I’m constantly working in a not-so-efficient multitasking kind of way. So while teaching f2f means I’m spending a lot more time on campus, it does create at least more separation between life and work. That’s a good thing.

And I do like teaching f2f– not really more than teaching online (I like doing that too), but I like it. I like the live performance of f2f teaching and after having taught a zillion sections of first year writing, I have a refined schtick. I like putting on the show three times a day right in a row.

I’ve also been struck by the differences in these three sections. It’s not news to me that different groups of students taking the same course can have very different personalities, dynamics, and responses to readings and assignments. But teaching the same thing to three different classes (back to back to back) makes this very visible. Without getting into any details, it’s pretty clear that these different sections are not equally capable.

It does get a little boring doing the same thing three times in a row. If I’m scheduled to teach three sections of first year writing like this again, I would probably be okay with it, but I think I’d prefer two preps with an online class in the mix. Check back with me at the end of the semester to see if I feel the same way.


Starting 2024 With All First Year Writing/All the Time!

This coming winter term (what every other university calls spring term), I’m going to be doing something I have never done in my career as a tenure-track professor. I’m going to be teaching first year composition and only first year composition.  It’ll be quite a change.

When I came to EMU in 1998, my office was right next to a very senior colleague, Bob Kraft. Bob, who retired from EMU in 2004 and who passed away in December 2022, had come back to the department to teach after having been in administrative positions for quite a while. We chatted often about teaching, EMU politics, and other regular faculty chit-chat. He was a good guy; he used to call me “Steve-O!”

Bob taught the same three courses every semester: three sections of a 300-level course called Professional Writing. It was a class he was involved in developing back in the early 1980s and I believe he assigned a course pack that had the complete course in it– and I mean everything: all the readings, in-class worksheets, the assignments, rubrics, you name it. Back in those days and before a university shift to “Writing Intensive” courses within majors, this was a class that was a “restricted elective” in lots of different majors, and we offered plenty of sections of it and similar classes. (In retrospect, the shift away from courses like this one to a “writing in the disciplines” approach/philosophy was perhaps a mistake both because of the way these classes have subsequently been taught in different disciplines and because it dramatically reduced the credit hour production in the English department– but all this is a different topic).

Anyway, Bob essentially did exactly the same thing three times a semester every semester, the same discussions, the same assignments, and the same kinds of papers to grade. Nothing– or almost nothing– changed. I’m pretty sure the only prep Bob had to do was change the dates on the course schedule.

I thought “Jesus, that’d be so boring! I’d go crazy with that schedule.” I mean, he obviously liked the arrangement and I have every reason to believe it was a good class and all, but the idea of teaching the same class the same way every semester for years just gave me hives. Of course, I was quite literally in the opposite place in my career: rather than trying to make the transition into retirement, I was an almost freshly-minted PhD who was more than eager to develop and teach new classes and do new things.

For my first 20 years at EMU (give or take), my workload was a mix of advanced undergraduate writing classes, a graduate course almost every semester, and various quasi-administrative duties. I occasionally had semesters where I taught two sections of the same course, but most semesters, I taught three different courses– or two different ones plus quasi-admin stuff. I rarely taught first year composition during the regular school year (though I taught it in the summer for extra money while our son Will was still at home) because I was needed to teach the advanced undergrad and MA-level writing classes we had. And this was all a good thing: I got to teach a lot of different courses, I got a chance to do things like help direct the first year writing program and coordinate our major and grad program, and I had the opportunity to work closely with a lot of MA students who have gone on to successful careers of their own.

But around six or seven years ago, the department (the entire university, actually) started to change and I started to change as well. Our enrollments have fallen across the board, but especially for upper-level undergraduate and MA level courses, which means instead of a grad course every semester, I tend to teach one a school year, along with fewer advanced undergrad writing classes, and now I teach first year writing every semester. One of the things I’ve come to appreciate about this arrangement is the students I work with in first year composition are different from the students I work with on their MA projects– but they’re really not that different, in the big picture of things.

And of course, as I move closer to thinking about retirement myself, Bob’s teaching arrangement seems like a better and better idea. So, scheduling circumstances being what they are, when it became clear I’d have a chance to just teach three sections of first year comp this coming winter, I took it.

We’ll see what happens. I’m looking forward to greatly reducing my prep time because this is the only course I’m teaching this semester (just three sections of it), and also because first year writing is something I’ve taught and thought about A LOT. I’m also looking forward to experimenting with requiring students to use ChatGPT and other AI tools to at least brainstorm and copy-edit– maybe more. What I’m not looking forward to is kind of just repeating the same thing three times in a row each day I teach. Along these lines, I am not looking forward to teaching three classes all on the same days (Tuesdays and Thursdays) and all face to face. I haven’t done that in a long time (possibly never) because I’ve either taught two and been on reassigned time, or I have taught at least a third of my load online. And I’m also worried about keeping all three of these classes in sync. If one group falls behind for some reason, it’ll mess up my plans (this is perhaps inevitable).

What I’m not as worried about is all the essays I’ll have to read and grade. I’m well aware that the biggest part of the work for anyone teaching first year writing is all the reading, commenting on, and grading of student work, and I’ve figured out a lot over the years about how to do it. Of course, I might be kidding myself with this one….

So, What About AI Now? (A talk and an update)

A couple of weeks ago, I gave a talk/led a discussion called “So, What About AI Now?” That’s a link to my slides. The talk/discussion was for a faculty development program at Washtenaw Community College, a program organized by my friend, colleague, and former student, Hava Levitt-Phillips.

I covered some of the territory I’ve been writing about here for a while now, and I thought both the talk and discussion went well. I think most of the people at this thing (it was over Zoom, so it was a little hard to read the room) had seen enough stories like this one on 60 Minutes the other night: Artificial Intelligence is going to be at least as transformative a technology as “the internet,” and there is not a zero percent chance that it could end civilization as we know it. All of which is to say we probably need to put the dangers of a few college kids using AI (badly) to cheat on poorly designed assignments into perspective.

I also talked about how we really need to question some of the more dubious claims in the MSM about the powers of AI, such as the article in the Chronicle of Higher Education this past summer, “GPT-4 Can Already Pass Freshman Year at Harvard.” I blogged about that nonsense a couple months ago here, but the gist of what I wrote there is that all of these claims of AI being able to pass all these tests and freshman year at Harvard (etc.) are wrong. Besides the fact that the way a lot of these tests are run makes the claims bogus (and that is definitely the case with this CHE piece), students in our classes still need to show up– and I mean that for both f2f and online courses.

And as we talked about at this session, if a teacher gives students some kind of assignment (an essay, an exam, whatever) that can be successfully completed without ever attending class, then that’s a bad assignment.

So the sense that I got from this group– folks teaching right now the kinds of classes where (according to a lot of the nonsense that’s been in the MSM for months) cheating with ChatGPT et al was going to make it impossible to assign writing anymore, not in college and not in high school— is that it hasn’t been that big of a deal. Sure, a few folks talked about students who tried to cheat with AI and who were easily caught, but for the most part it hadn’t been much of a problem. The faculty in this group seemed more interested in trying to figure out ways to make use of AI in their teaching than they were worried about cheating.

I’m not trying to suggest there’s no reason to worry about what AI means for the future of… well, everything, including education. Any of us who are “knowledge workers”– that is, teachers, professors, lawyers, scientists, doctors, accountants, etc. etc.– need to pay attention to AI because there’s no question this shit is going to change the way we do our jobs. But my sense from this group (and just the general vibe I get on campus and in social media) is that the freak-out about AI is over, which is good.

One last thing though: just the other day (long after this talk), I saw what I believe to be my first case of a student trying to cheat with ChatGPT– sort of. I don’t want to go into too many details since this is a student in one of my classes right now. But basically, this student (who is struggling quite a bit) turned in a piece of writing that was first and foremost not the assignment I gave, and it also just happened that this person used ChatGPT to generate a lot of the text. So as we met to talk about what the actual assignment was and how this student needed to do it again, etc., I also started asking about what they turned in.

“Did you actually write this?” I asked. “This kind of seems like ChatGPT or something.”

“Well, I did use it for some of it, yes.”

“But you didn’t actually read this book ChatGPT is citing here, did you?”

“Well, no…”

And so forth.  Once again, a good reminder that students who resort to cheating with things like AI are far from criminal masterminds.

A Belated “Beginning of the School Year” Post: Just Teaching

I don’t always write a “beginning of the school year” post, and when I do, it’s usually before school starts, some time in August, and not at the end of the second week of classes. But here we are, at the time of year that always feels to me a lot more like the start of a new year than January does.

This is the start of my 25th year at EMU. This summer, I selected another one of those goofy “thanks for your service” gifts they give out in five-year increments. Five years ago, I picked out a pretty nice casserole dish; this time, I picked out a globe, one which lights up.

The last time I wrote a new school year post like this one was in 2021, and back then, I (briefly) contemplated the faculty buyout offer. “Briefly” because as appealing as it was at the time to leave my job behind, there’s just no way I could afford it, and I’m not interested in starting some kind of different career. But here in 2023, I’m feeling good about getting back to work. Maybe it’s because I had a busy summer with lots of travel, some house guests, and a touch of Covid. After all of that, it’s just nice to have a change of pace and get back to the job. Or maybe it’s because (despite my recent case) we really are “past” Covid in the sense that EMU (like everywhere else) is no longer going through measures like social distancing, check-ins noting you’re negative, vax cards, free testing, etc. etc. This is not to say Covid is “over,” of course, because it’s still important for people to get vaxxed and to test. And while I know the people I see all the time who are continuing to wear masks everywhere think lowering our defenses to Covid is foolish, and it is true that cases right now are ticking up, the reality is Covid has become something more or less like the flu: it can potentially kill you, sure, but it is also one of those things we have to live with.

Normally in these kinds of new school year posts, I mention various plans and resolutions for the upcoming year. I have a few personal and not unusual ones– lose weight, exercise more, read more, and so on– but I don’t have any goals that relate to work. I’m not involved in any demanding committees or other service things, and I’d kind of like to keep it that way. I’m also not in the midst of any scholarly projects, and I can’t remember the last time that was the case. And interestingly (at least for me), I don’t know if I’ll be doing another scholarly project at this point. Oh, I will go to conferences that are in places I want to visit, and I’ll keep blogging about AI and other academic-like things I find interesting. That’s a sort of scholarship, I suppose. I’d like to write more commentaries for outlets like IHE or CHE, maybe also something more MSM. But writing or editing another book or article? Meh.

(Note that this could all change on a dime.)

So that leaves teaching as my only focus as far as “the work” goes. I suppose that isn’t that unusual since even when I’ve got a lot going on in terms of scholarly projects and service obligations, teaching is still the bulk of my job. I’ll have plenty to do this semester because I’ve got three different classes (with three different preps), and one of them is a new class I’m sort of/kind of making up as I go.

Still, it feels a little different. I’ve always said that if being a professor just involved teaching my classes– that is, no real service or scholarly obligations– then that wouldn’t be too hard of a job. I guess I’ll get to test that this term.

No, an AI could not pass “freshman year” in college

I am fond of the phrase/quote/mantra/cliché “Ninety percent of success in life is just showing up,” which is usually attributed to Woody Allen. I don’t know if Woody was “the first” person to make this observation (probably not, and I’d prefer if it was someone else), but in my experience, this is very true.

This is why AIs can’t actually pass a college course or their freshman year or law school or whatever: they can’t show up. And it’s going to stay that way, at least until we’re dealing with advanced AI robots.

This is on my mind because my friend and colleague in the field, Seth Kahn, posted the other day on Facebook about this recent article from The Chronicle of Higher Education by Maya Bodnick, “GPT-4 Can Already Pass Freshman Year at Harvard.” (Bodnick is an undergraduate student at Harvard). It is yet another piece claiming that the AI is smart enough to do just fine on its own at one of the most prestigious universities in the world.

I agreed with all the other comments I saw on Seth’s post. In my comment (which I wrote before I actually read this CHE article), I repeated three points I’ve written about here or on social media before. First, ChatGPT and similar AIs can’t evaluate and cite academic research at even the modest levels I expect in a first year writing class. Second, while OpenAI proudly lists all the “simulated exams” where ChatGPT has excelled (LSAT, SAT, GRE, AP Art History, etc.), you have to click the “show more exams” button on that page to see that none of the versions of their AI has managed better than a “2” on the AP English Language (and also Literature) and Composition exams. It takes a “3” on this exam to get any credit at EMU, and probably a “4” at a lot of other universities.

Third, I think mainstream media and all the rest of us really need to question these claims of AIs passing whatever tests and classes and whatnot much MUCH more carefully than I think most of us have to date. What I was thinking about when I made that last comment was another article published in CHE in early July, “A Study Found That AI Could Ace MIT. Three MIT Students Beg to Differ,” by Tom Bartlett. In this article, Bartlett discusses a study (which I don’t completely understand because it involves too much math and detail) conducted by three MIT students (class of 2024) who researched the claim that an AI could “ace” MIT classes. The students determined this was bullshit. What were the students’ findings (at least the ones I could understand)? In some of the classes where the AI supposedly had a perfect score, the exams include unsolvable problems, so it’s not even possible to get a perfect score. In other examples, the exam questions the AI supposedly answered correctly did not provide enough information for that to be possible either. The students posted their results online, and at least some of the MIT professors who originally made the claims agreed and backtracked.

But then I read this Bodnick article, and holy-moly, this is even more bullshitty than I originally thought. Let me quote at length Bodnick describing her “methodology”:

Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)

Here are the prompts with links to the essays, the names of instructors, and the grades each essay received:

  • Microeconomics and Macroeconomics (Jason Furman and David Laibson): Explain an economic concept creatively. (300-500 words for Micro and 800-1000 for Macro). Grade: A-
  • Latin American Politics (Steven Levitsky): What has caused the many presidential crises in Latin America in recent decades? (5-7 pages) Grade: B-
  • The American Presidency (Roger Porter): Pick a modern president and identify his three greatest successes and three greatest failures. (6-8 pages) Grade: A
  • Conflict Resolution (Daniel Shapiro): Describe a conflict in your life and give recommendations for how to negotiate it. (7-9 pages). Grade: A
  • Intermediate Spanish (Adriana Gutiérrez): Write a letter to activist Rigoberta Menchú. (550-600 words) Grade: B
  • Freshman Seminar on Proust (Virginie Greene): Close read a passage from In Search of Lost Time. (3-4 pages) Grade: Pass

I told these instructors that each essay might have been written by me or the AI in order to minimize response bias, although in fact they were all written by GPT-4, the recently updated version of the chatbot from OpenAI.

In order to generate these essays, I inputted the prompts (which were much more detailed than the summaries above) word for word into GPT-4. I submitted exactly the text GPT-4 produced, except that I asked the AI to expand on a couple of its ideas and sequenced its responses in order to meet the word count (GPT-4 only writes about 750 words at a time). Finally, I told the professors and TAs to grade these essays normally, except to ignore citations, which I didn’t include.

Not only can GPT-4 pass a typical social science and humanities-focused freshman year at Harvard, but it can get pretty good grades. As shown in the list above, GPT-4 got all A’s and B’s and one Pass.

JFC. Okay, let’s just think about this for a second:

  • We’re talking about three “essays” that are less than 1000 words and another three that are slightly longer, and based on this work alone, GPT-4 “passed” a year of college at Harvard. That’s all it takes. Really? Really?! That’s it?
  • I would like to know more about what Bodnick means when she says that the writing prompts were “much more detailed than the summaries above” because those details matter a lot. But as summarized, these are terrible assignments. They aren’t connected with the context of the class or anything else. It would be easy to answer any of these questions with a minimal amount of Google searching and some educated guessing. I might be going out on a limb here, but I don’t think most writing assignments at Harvard or any other college– even badly assigned ones– are as simplistic as these.
  • It wasn’t just ChatGPT: she had to do some significant editing to put together ChatGPT’s short responses into longer essays. I don’t think the AI could have done that on its own. Unless it hired a tutor.
  • Asking instructors to not pay any attention to the lack of citation (and I am going to guess the need for sources to back up claims in the writing) is giving the AI way WAAAAYYY too much credit, especially since ChatGPT (and other AIs) usually “hallucinate”– that is, make shit up– when citing evidence. I’m going to guess that even at Harvard, handing in hallucinations would result in a failing grade. And if the assignment required properly cited sources and the student didn’t do that, then that student would also probably fail.
  • It’s interesting (and Bodnick points this out too) that the texts that received the lowest grades are ones that ask students to “analyze” or to provide their opinions/thoughts, as opposed to assignments asking for an “information dump.” Again, I’m going to guess that, even at Harvard, there is a higher value placed on students demonstrating with their writing that they thought about something.

I could go on, but you get the idea. This article is nonsense. It proves literally nothing.

But I also want to return to where I started, the idea that a lot of what it means to succeed in anything (perhaps especially education) is showing up and doing the work. Because after what seems like the zillionth click-bait headline about how ChatGPT could graduate from college or be a lawyer or whatever because it passed a test (supposedly), it finally dawned on me what has been bothering me the most about these kinds of articles: that’s just not how it works! To be a college graduate or a lawyer or damn near anything else takes more than passing a test; it takes the work of showing up.

Granted, there has been a lot more interest and willingness in the last few decades to consider “life experience” credit as part of degrees, and some of these places are kind of legitimate institutions– Southern New Hampshire and the University of Phoenix immediately come to mind. But “life experience” credit is still considered mostly bullshit and the approach taken by a whole lot of diploma mills, and real online universities (like SNHU and Phoenix) still require students to mostly take actual courses, and that requires doing more than writing a couple papers and/or taking a couple of tests.

And sure, it is possible to become a lawyer in California, Vermont, Virginia and Washington without a law degree, and it is also possible to become a lawyer in New York or Maine with just a couple years of law school or an internship. But even these states still require some kind of experience with a law office, most states do require attorneys to have law degrees, and it’s not exactly easy to pass the bar without the experience you get from earning a law degree. Ask Kim Kardashian. 

Bodnick did not ask any of the faculty who evaluated her AI writing examples if it would be possible for a student to pass that professor’s class based solely on this writing sample because she already knew the answer: of course not.

Part of the grade in the courses I teach is based on attendance, participation in the class discussions and peer review, short responses to readings, and so forth. I think this is pretty standard– at least in the humanities. So if some eager ChatGPT enthusiast came to one of my classes– especially one like first year writing, where I post all of the assignments at the beginning of the semester (mainly because I’ve taught this course at least 100 times at this point)– and said to me “Hey Krause, I finished and handed in all the assignments! Does that mean I get an A and go home now?” Um, NO! THAT IS NOT HOW IT WORKS! And of course anyone familiar with how school works knows this.

Oh, and before anyone says “yeah, but what about in an online class?” Same thing! Most of the folks I know who teach online have a structure where students have to regularly participate and interact with assignments, discussions, and so forth. My attendance and participation policies for online courses are only slightly different from my f2f courses.

So please, CHE and MSM in general: stop. Just stop. ChatGPT can (sort of) pass a lot of tests and classes (with A LOT of prompting from the researchers who really really want ChatGPT to pass), but until that AI robot walks/rolls into a class or sets up its profile on Canvas all on its own, it can’t go to college.

What Counts as Cheating? And What Does AI Smell Like?

Cheating is at the heart of the fear too many academics have about ChatGPT, and I’ve seen a lot of hand-wringing articles from MSM posted on Facebook and Twitter. One of the more provocative screeds on this I’ve seen lately was in the Chronicle of Higher Education, “ChatGPT is a Plagiarism Machine” by Joseph M. Keegin. In a nutshell, I think this guy is unhinged, but he’s also not alone.

Keegin claims he and his fellow graduate student instructors (he’s a PhD candidate in Philosophy at Tulane) are encountering loads of student work that “smelled strongly of AI generation,” and he and some of his peers have resorted to giving in-class handwritten tests and oral exams to stop the AI cheating. “But even then,” Keegin writes, “much of the work produced in class had a vague, airy, Wikipedia-lite quality that raised suspicions that students were memorizing and regurgitating the inaccurate answers generated by ChatGPT.”

(I cannot help but to recall one of the great lines from [the now problematically icky] Woody Allen in Annie Hall: “I was thrown out of college for cheating on a metaphysics exam; I looked into the soul of the boy sitting next to me.” But I digress.)

If Keegin is exaggerating in order to rattle readers and get some attention, then mission accomplished. But if he’s being sincere– that is, if he really believes his students are cheating everywhere on everything all the time and the way they’re cheating is by memorizing and then rewriting ChatGPT responses to Keegin’s in-class writing prompts– then these are the sort of delusions which should be discussed with a well-trained and experienced therapist. I’m not even kidding about that.

Now, I’m not saying that cheating is nothing to worry about at all, and if a student were to turn in whatever ChatGPT provided for a class assignment with no alterations, then a) yes, I think that’s cheating, but b) that’s the kind of cheating that’s easy to catch, and c) Google is a much more useful cheating tool for this kind of thing. Keegin is clearly wrong about ChatGPT being a “Plagiarism Machine” and I’ve written many many many different times about why I am certain of this. But what I am interested in here is what Keegin thinks does and doesn’t count as cheating.

The main argument he’s trying to make in this article is that administrators need to step in to stop this never-ending battle against ChatGPT plagiarism. Universities should “devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.”

Keegin doesn’t define what he means by cheating (though he does give some examples that don’t actually seem like cheating to me), but I think we can figure it out by reading what he means by a “meaningful education.” He writes (I’ve added the emphasis) “A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment.”

So, I think Keegin sees education as an activity where students labor alone at mastering the material delivered by the instructor. Knowledge is not something shared or communal, and it certainly isn’t created through interactions with others. Rather, students receive knowledge, do the work the instructor assigns, do that work alone, and then reproduce that knowledge investment provided by the instructor– with interest. So any work a student might do that involves anyone or anything else– other students, a tutor, a friend, a google search, and yes ChatGPT– is an opportunity for cheating.

More or less, this is what Paulo Freire meant by the ineffective and unjust “banking model of education,” which he wrote about over 50 years ago in Pedagogy of the Oppressed. Freire’s work remains very important in many fields specifically interested in pedagogy (including writing studies), and Pedagogy of the Oppressed is one of the most cited books in the social sciences. And yet, I think a lot of people in higher education– especially in STEM fields, business-oriented and other technical majors, and also in disciplines in the humanities that have not been particularly invested in pedagogy (philosophy, for example)– are okay with this system. These folks think education really is a lot like banking and “investing,” and they don’t see any problem with that metaphor. And if that’s your view of education, then getting help from anyone or anything that is not the teacher is metaphorically like robbing a bank.

But I think it’s odd that Keegin is also upset with “credentialing” in higher education. That’s a common enough complaint, I suppose, especially when we talk about the problems with grading. But if we were to do away with degrees and grades as an indication of successful learning (or at least completion) and if we instead decided students should learn solely for the intrinsic value of learning, then why would it even matter if students cheated or not? That’d be completely their problem. (And btw, if universities did not offer credentials that have financial, social, and cultural value in the larger society, then universities would cease to exist– but that’s a different post).

Perhaps Keegin might say “I don’t have a problem with students seeking help from other people in the writing center or whatever. I have a problem with students seeking help from an AI.” I think that’s probably true with a lot of faculty. Even when professors have qualms about students getting a little too much help from a tutor, they still generally do see the value and usually encourage students to take advantage of support services, especially for students at the gen-ed levels.

But again, why is that different? If a student asks another human for help brainstorming a topic for an assignment, suggesting some ideas for research, creating an outline, suggesting some phrases to use, and/or helping out with proofreading, citation, and formatting, how is that not cheating when this help comes from a human but it is cheating when it comes from ChatGPT? And suppose a student instead turns to the internet and consults things like CliffsNotes, Wikipedia, Course Hero, other summaries and study guides, etc. etc.; is that cheating?

I could go on, but you get the idea. Again, I’m not saying that cheating in general and with ChatGPT in particular is nothing at all to worry about. And also to be fair to Keegin, he even admits “Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right.” But the more of these paranoid and shrill commentaries I read about “THE END” of writing assignments and how we have got to come up with harsh punishments for students so they stop using AI, the more I think these folks are just scared that they’re not going to be able to give students the same bullshitty non-teaching writing assignments that they’ve been doing for years.