Six Things I Learned After a Semester of Lots of AI

Two years ago (plus about a week!), I wrote about how “AI Can Save Writing by Killing ‘The College Essay,'” meaning that if AI can be used to respond to bad writing assignments, maybe teachers will focus more on teaching writing as a process the way that scholars in writing studies have been talking about for over 50 years. That means an emphasis on “showing your work” through a series of scaffolded assignments, peer review activities, opportunities for revision, and so forth.

This past semester, I decided to really lean into AI in my classes. I taught two sections of first-year writing where the general research topic for everyone was “your career goals and AI,” and where I allowed (even encouraged) the use of AI under specific circumstances. I also taught an advanced class for majors called “Digital Writing” where the last two assignments were all about trying to use AI to “create” or “compose” “texts” (the scare quotes are intentional there). I’ve been blogging/substacking about this quite a bit since summer and there are more details I’m not getting to here because it’s likely to be part of a scholarly project in the near future.

But since the fall semester is done and I have popped the (metaphorical) celebratory bottle of bubbly, I thought I’d write a little bit about some of the big-picture lessons about teaching writing with (and against) AI I learned this semester.

Teachers can “refuse” or “resist” or “deny” AI all they want, but they should not ignore it.

As far as I can tell from talking with my students, most of my colleagues did not address AI in their classes at all. A few students reported that they did discuss and use AI in some of their other classes. I had several students in first-year writing who were interior design majors, all of whom were taking a course where the instructor introduced them to AI design tools– it sounded like an interesting class. I had a couple of students tell me an instructor “forbade” the use of AI but offered no explanation of what that meant. Most students told me the teacher never brought up the topic of AI at all.

Look, you can love AI and think it is going to completely transform learning and education, or you can hate AI, wish it had never been invented, and do all you can to break that AI machine with your Great Enoch sledgehammers. But ignoring it or wishing it away is ridiculous.

For my first-year writing students, most of whom readily admitted they used AI a lot in high school to do things that were probably cheating, I spent some time explaining how they could and could not use AI. I did so in part to show how I think AI can be a useful tool as part of the writing process, but I also did it to establish my credibility. I think a lot of students end up cheating with AI because they think that the teacher is clueless about it– and I think a lot of times, students are right.

You’re gonna need some specific rules and guidelines about AI– especially if you want to “refuse” or “resist” it.

I have always included on my syllabi an explicit policy about plagiarism, and this year I added language that makes it clear that copying and pasting large chunks of text from AI is cheating. I did allow and encourage first-year writing students to use AI as part of their process, and I required my advanced writing students to use AI as part of their “experiments” in that class. But I also asked students to include an “AI Use Statement” with their final drafts, one that explained what AI systems they used (including Grammarly), what prompts they used, how they used the AI feedback in their essay, and so forth. Because this was completely new to them (and me too), these AI Use Statements were sometimes a lot less complete and accurate than I would have preferred.

I also insisted that students write with Google Docs for each writing assignment and for all steps in the process, from the very first hint of a first draft until they hand it in to me. Students need to share the document with me so I can edit it. I take a look at the “version history” of the Google Doc, and if I suddenly see pages of clear prose magically appear in the essay, we have a discussion. That seemed to work well.

Still, some students are going to cheat with AI, often without realizing that they’re cheating.

Even with the series of scaffolded assignments, the Google Docs requirement, and all of my warnings, I did catch a few students cheating with AI, in ways both intentional and less intentional. Two of these examples were similar to old-school plagiarism. One was a student from another country who had some cultural and language disconnects with the expectations of American higher education (to put it mildly); I think first-year writing was too advanced, and this student should have been advised into an ESL class. Another was a student who was late on a short assignment and handed in an obviously AI-generated text (thanx, Google Docs!). I gave this person a stern warning and another chance, and they definitely didn’t do that again.

As I wrote about in this post about a month ago, I also had a bunch of students who, for the first assignment, the Topic Proposal, followed the AI more closely than the assignment itself. This is a short essay where students write about how they came up with their topic and initial thesis for their research for the semester. Instead, a lot of students asked AI what it “thought” of their topic and thesis, and then they more or less summarized the AI responses, which were inevitably about why the thesis was correct. Imagine a mini research paper but without any research.

The problem was that this wasn’t the assignment. Rather, the assignment asked students to describe how they came up with their thesis idea: why they were interested in the topic in the first place, what other topics they considered, what sorts of brainstorming techniques they used, what their peers told them, and so forth. In other words, students tried to use the AI to tell them what they thought, and that just didn’t work. It ended up being a good teachable moment.

A lot of my students do not like AI and don’t use it that much. 

This was especially true in my more advanced writing class– where, as far as I can tell, no one used AI to blatantly cheat. For two of the three major projects of the semester, I required students to experiment with AI and then to write essays where they reflected/debriefed on their experiments while making connections to the assigned readings. Most of these students, all of whom were some flavor of an English major or writing minor, did not use AI for the reflection essays. They either felt that AI was just “wrong” in so many different ways (unethical, gross, unfair, bad for the environment, etc.), or they didn’t think the AI advice on their writing (other than some Grammarly) was all that useful for them.

This was not surprising; after all, students who major or minor in something English-related usually take pride in their writing, and they don’t want to turn that over to AI. In the freshman composition classes, I had a few students who never used AI either, judging from what they told me in their AI Use Statements. But a lot of students’ approaches to AI evolved as the semester went on, and by the time they were working on the larger research-driven essay where all the parts from the previous assignments came together, they were saying things like they had asked ChatGPT for advice on one part of the essay, but it wasn’t useful advice, so they ignored it.

But some students used AI in smart and completely undetectable ways.

This was especially true in the first year writing class. Some of the stronger writers articulated in some detail in their AI Use Statements how they used ChatGPT (and other platforms) to brainstorm, to suggest outlines for assignments, to go beyond Grammarly proofreading, to get more critical feedback on their drafts, and so forth. I did not consider this cheating at all because they weren’t getting AI to do the work for them; rather, they were getting some ideas and feedback on their work.

And here’s the thing that’s important: when a student (or anyone else) uses AI effectively and for what it’s really for, there is absolutely no way for the teacher (or any other reader) to possibly know.

The more time I have spent studying and teaching about AI, the more skeptical I have become about it. 

I think my students feel the same way, and this was especially true of the students in my advanced class who were directly studying and experimenting with many different AI platforms and tasks. The last assignment for the course asked students to use AI to do or make something that they could not have possibly done by themselves. For example, one student used AI to teach themselves to play chess and was fairly successful with that– at least up to a point. Another student tried to get ChatGPT to teach them how to play the card game Euchre, though less successfully because the AI kept “cheating.” Another student asked the AI to code a website, and the AI was pretty good at that. Several students tried to use AI tools to compose music; similar to me, I guess, they listen to lots of music and wish they could play an instrument and/or compose songs.

What was interesting to me and, I think, to most of my students was how quickly they typically ran into both the AI’s limitations and their own. Sometimes students wanted the AI to do something it simply could not do; for example, the problem with playing Euchre with the AI (according to the student) was that it didn’t keep track of what cards had already been played– thus the cheating. But the bigger problem was that, without any knowledge of how to accomplish the task on their own, students found the AI of little use. For example, the student who used AI to code a website still had no idea what any of the code meant, nor did they know what to do with it to make it into a real website. Students who knew nothing about music and tried to write or create songs couldn’t get very far. In other words, it was not that difficult for students to discover ways AI fails at a task, which in many ways is far more interesting than discovering what it can accomplish.

I’m also increasingly skeptical of the hype around the role of AI in education, mainly because I spent most of the 2010s studying MOOCs. Remember them? They were going to be the delivery method for general education offerings everywhere, and by 2030 or 2040 or so, MOOCs were going to completely replace all but the most prestigious universities all over the world. Well, that obviously didn’t happen. But that didn’t mean the end of MOOCs; in fact, there are more people taking MOOC “courses” now than there were during the height of the MOOC “panic” around 2014. It’s just that nowadays, MOOCs are mostly used for training (particularly in STEM fields), for certificates, and as “edutainment” along the lines of MasterClass.

I think AI is different in all kinds of ways, not the least of which is that AI is likely to be significantly more useful than a chatbot or a grammar checker. I had several first-year students this semester write about AI and their future careers in engineering, logistics, and finance, and they all found interesting evidence about both how AI is being used right now and how it will likely be used in the future. The potential for AI to change the world at least as much as another recent General Purpose Technology, “the internet,” is certainly there.

Does that mean AI is going to have as great an impact on education as the internet did? Probably, and teachers have had to make all kinds of big and small changes to how they teach because of the internet, which was also true when writing classes first took up computers and word processing software. But I think the fundamentals of teaching (rather than merely assigning) writing still work.

Is Apple Intelligence (and AI) For Dumb and Lazy People?

And the challenges of an AI world where everyone is above average

I’ve been an Apple fanboy since the early 1980s. I owned one Windoze computer years ago that was mostly for games my kid wanted to play. Otherwise, I’ve been all Apple for around 40 years. But what the heck is the deal with these ads for Apple Intelligence?

In this ad (the most annoying of the group, IMO), we see a schlub of a guy, Warren, emailing his boss in idiotic/bro-based prose. He pushes the Apple Intelligence feature and boom, his email is transformed into appropriate office prose. The boss reads the prose, is obviously impressed, and the tagline at the end is “write smarter.” Ugh.

Then there’s this one:

This guy, Lance, is in a board meeting and he’s selected to present about “the Prospectus,” which he obviously has not read. He slowly wheels his office chair and his laptop into the hallway, asks Apple’s AI to summarize the key points in this long thing he didn’t read. Then he slowly wheels back into the conference room and delivers a successful presentation. The tagline on this one? “Catch up quick.” Ugh again.

But in a way, these ads might not be too far off. These probably are the kinds of “less than average” office workers who could benefit the most from AI— well, up to a point, in theory.

Among many other things, my advanced writing students and I read Ethan Mollick’s Co-Intelligence, and in several different places in that book, he argues that in experiments where knowledge workers (consultants, people completing writing tasks, programmers) use AI to complete tasks, they are much more productive. Further, while AI does not make already excellent workers that much better, it does help less-than-excellent workers improve. There’s S. Noy and W. Zhang’s Science paper “Experimental evidence on the productivity effects of generative artificial intelligence”; here’s a quote from the editor’s summary:

Will generative artificial intelligence (AI) tools such as ChatGPT disrupt the labor market by making educated professionals obsolete, or will these tools complement their skills and enhance productivity? Noy and Zhang examined this issue in an experiment that recruited college-educated professionals to complete incentivized writing tasks. Participants assigned to use ChatGPT were more productive, efficient, and enjoyed the tasks more. Participants with weaker skills benefited the most from ChatGPT, which carries policy implications for efforts to reduce productivity inequality through AI.

Then there’s S. Peng et al.’s paper “The Impact of AI on Developer Productivity: Evidence from GitHub Copilot.” This was an experiment with a programming AI on GitHub, and the programmers who used AI completed tasks 55.8% faster. And Mollick talks a fair amount about a project he co-authored, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality,” which found that consultants in an experiment were more productive when allowed to use AI— except when faced with a “jagged technology frontier” problem, which in the study was a technical problem beyond the AI’s abilities. However, one of the problems Mollick and his colleagues observed is that a lot of the subjects in their study copied and pasted content from the AI with minimal editing, and the AI-using subjects had a much harder time with that jagged frontier problem. I’ll come back to this in a couple more paragraphs.

Now, Mollick is looking at AI as a business professor, so he sees this as a good thing because it improves the quality of the workforce, and maybe it’ll enable employers to hire fewer people to complete the same tasks. More productivity with less labor equals more money; capitalism for the win. But my English major students and I all see ourselves (accurately or not) as well-above-average writers, and we all take pride in that. We like the fact that we’re better at writing than most other people. Many of my students aspire to be novelists, poets, English teachers, or something else where they make money from their abilities to write and read, and they all know that publishing writing that other people read is not something everyone can do. So the last thing any of us who are good at something want is a technology that diminishes the value of that expertise.

This is part of what is behind various recent declarations about refusing or resisting AI, of course. Part of what motivates someone like Ted Chiang to write about how AI can’t make art is that making art is what he is good at. The last thing he wants is a world where any schmuck (like those dudes in the Apple AI ads) can click a button and be as good as he is at making art. I completely understand this reason for fearing and resisting AI, and I too hope that AI doesn’t someday become humanity’s default storyteller.

Fortunately for writers like Chiang and me and my students, the AI hype does not square with reality. I haven’t played around with Apple Intelligence yet, but the reviews I’ve seen are underwhelming. I stumbled across a quite thorough YouTube review of the new AI features by Marques Brownlee. I don’t know much about Brownlee, but he has over 19 million subscribers, so he probably knows what he is talking about. If you’re curious, he talks about the writing feature in the first few minutes of this video, but the short version is that, as a professional writer, he finds it useless.

The other issue I think my students and I are noticing is that the jagged frontier Mollick and his colleagues talk about— that is, the line between tasks the AI can accomplish reasonably well and tasks it can’t— is actually quite large. In describing the study he and his colleagues did, which included a problem specifically designed to be beyond the AI’s abilities, I think Mollick implies that this frontier is small. But Mollick and his colleagues— and the same is true of the other studies he cites on this— are not studying AI in real settings. These are controlled experiments, and the researchers are trying to do all they can to eliminate other variables.

But in the real world, with lots of variables, there are jagged frontiers everywhere. The last assignment I gave in the advanced writing class asked students to attempt to “compose” or “make” something with the help of AI (a poem, a play, a song, a movie, a website, etc.) that they could not do on their own. The reflection essays are not due until the last week of class, but we have had some “show and tell” exchanges about these projects. Some students were reasonably successful at making or doing something thanks to AI— and as a slight tangent: some students are better than others at prompting the AI and making it work for them. It’s not just a matter of clicking a button. But they all ran into that frontier, and for a lot of students, that was essentially how their experiment ended. For example, one student was successful at getting AI to generate the code for a website, but this student didn’t know what to do with that code to turn it into an actual, working website. A couple of students tried to use AI to write music, but since they didn’t know much about music, their results were limited. One student tried to get AI to teach them how to play the card game Euchre, but the AI kept doing things like playing cards in the student’s hand.

This brings me back to these Apple ads: I wish they both went on just another minute or so. Right after Warren and Lance confidently look directly at the camera with a smug look that says to viewers, “Do you see what I just got away with there?”, they would have to follow through on what they supposedly accomplished, and I have a feeling that would go poorly. Right after Warren’s boss talks with him about that email, and right after Lance starts his summary, I am pretty sure they’re gonna get busted. Sort of like what has happened when I have suspected (correctly) that a student used too much AI and that student couldn’t answer basic questions about what it is they supposedly wrote.

Why Teaching Citation Practices (yes, I’m talking MLA/APA style) is Even More Important with AI

A couple weeks ago, I wrote about why I use Google docs to teach writing at all levels. I’ve been using it for years–long before AI was a thing–in part because being able to see the history of a student’s Google doc is a teachable moment on the importance of the writing and revision process. This also has the added bonus of making it obvious if a student is skipping that work (by using AI, by copying/pasting from the internet, by stealing a paper from someone else, etc.) because the document history goes from nothing to a complete document in one step. I’m not saying that automatically means the student cheated, but it does prompt me to have a chat with that student.

In a similar vein and while I’m thinking about putting together my classes for the fall term, I thought I’d write about why I think teaching citation practices is increasingly important in research writing courses, particularly first year composition.

TL;DR version: None of this is new or innovative; rather, this is standard “teaching writing as a process” pedagogy and I’ve been teaching research writing like this for decades. But I do think it is even more important to teach citation skills now to help my students distinguish between the different types of sources, almost all of which are digital rather than on paper. Plus this is an assignment where AI might help, but I don’t think it’d help much.


Why I Use Google Docs to Teach Writing, Especially in the Age of AI

I follow a couple different Facebook groups about AI, each of which has become a firehose of posts lately, a mix of cool new things and brand-new freakouts. A while back, someone in one of these groups posted about an app to track the writing process in a student’s document as a way of proving that the text was not AI. My response to this was “why not just use Google Docs?”

I wish I could be more specific than this, but I can’t find the original post or my comment on it; maybe it was deleted. Anyway, this person asked what I meant, and I explained it briefly, but then I said I was thinking about writing a blog post about it. Here is that post.

For those interested in the tl;dr version: I think the best way to discourage students from handing in work they didn’t create (be that from a papermill, something copied and pasted from websites, or AI) is to teach writing rather than merely assigning writing. That’s not “my” idea; that’s been the mantra in writing studies for at least 50 years. Also not a new idea and one you already know if you use and/or teach with Google docs: it is a great tool for teaching writing because it helps with peer review and collaborative writing, and the version history feature helps me see a student’s writing process, from the beginning of the draft through revisions. And if a student’s draft goes from nothing to complete in one revision, well, then that student and I have a chat.


The Problem is Not the AI

The other day, I heard the opening of this episode of the NPR call-in show 1A, “Know It All: ChatGPT In the Classroom.” It opened with this recorded comment from a listener named Kate:

“I teach freshman English at a local university, and three of my students turned in chatbot papers written this past week. I spent my entire weekend trying to confirm they were chatbot written, then trying to figure out how to confront them, to turn them in as plagiarist, because that is what they are, and how I’m going penalize their grade. This is not pleasant, and this is not a good temptation. These young men’s academic careers now hang in the balance because now they’ve been caught cheating.”

Now, I didn’t listen to the show for long beyond this opener (I was driving around running errands), and based on what’s available on the website, the discussion  also included information about incorporating ChatGPT into teaching. Also, I don’t want to be too hard on poor Kate; she’s obviously really flustered and I am guessing there were a lot of teachers listening to Kate’s story who could very personally relate.

But look, the problem is not the AI.

Perhaps Kate was teaching a literature class and not a composition and rhetoric class, but let’s assume whatever “freshman English” class she was teaching involved a lot of writing assignments. As I mentioned in the last post I had about AI and teaching with GPT-3 back in December, there is a difference between teaching writing and assigning writing. This is especially important in classes where the goal is to help students become better at the kind of writing skills they’ll need in other classes and “in life” in general.

Teaching writing means a series of assignments that build on each other, that involve brainstorming and prewriting activities, and that include things like peer reviews, discussions of revision, reflection from students on the process, and so forth. I require students in my first-year comp/rhet classes to “show their work” through drafts, in a way similar to how they’d be expected to show their work in an Algebra or Calculus course. It’s not just the final answer that counts. In contrast, assigning writing is when teachers give an assignment (often a quite formulaic one, like “write a five paragraph essay about ‘x'”) with no opportunities to talk about getting started, no consideration of audience or purpose, no interaction with the other students who are trying to do the same assignment, and no opportunity to revise or reflect.

While obviously more time-consuming and labor-intensive, teaching writing has two enormous advantages over only assigning writing. First, we know it “works” in that this approach improves student writing– or at least we know it works better than only assigning writing and hoping for the best. We know this because people in my field have been studying this for decades, despite the fact that there are still a lot of people just assigning writing, like Kate. Second, teaching writing makes it extremely difficult to cheat in the way Kate’s students have cheated– or maybe cheated. When I talk to my students about cheating and plagiarism, I always ask “why do you think I don’t worry much about you doing that in this class?” Their answer typically is “because we have to turn in all this other stuff too” and “because it would be too much work,” though I also like to believe that because of the way the assignments are structured, students become interested in their own writing in a way that makes cheating seem silly.

Let me just note that what I’m describing has been the conventional wisdom among specialists in composition and rhetoric for at least the last 30 (and probably more like 50) years. None of this is even remotely controversial in the field, nor is any of this “new.”

But back to Kate: certain that these three students turned in “chatbot papers,” she spent the “entire weekend” working to prove that these students committed the crime of plagiarism and deserved to be punished. She thinks this is a remarkably serious offense– their “academic careers now hang in the balance”– but I don’t think she’s going through all this because of some sort of abstract academic ideal. No, this is personal. In her mind, these students did this to her, and she’s going to punish them. This is beyond a sense of justice. She’s doing this to get even.

I get that feeling, that sense that her students betrayed her. But there’s no point in making teaching about “getting even” or “winning” because as the teacher, you create the game and the rules, you are the best player and the referee, and you always win. Getting even with students is like getting even with a toddler.

Anyway, let’s just assume for a moment that Kate’s suspicions are correct and these three students handed in essays created entirely by ChatGPT. First off, anyone who teaches classes like “Freshman English” should not need an entire weekend or any special software to figure out if these essays were written by an AI. Human writers– at all levels, but especially comparatively inexperienced human writers– do not compose the kind of uniform, grammatically correct, and robotically plodding prose generated by ChatGPT. Every time I see an article with a passage of text that asks “was this written by a robot or a student,” I always guess right– well, almost always I guess right.

Second, if Kate did spend her weekend trying to find “the original” source ChatGPT used to create these essays, she certainly came up empty-handed. That was the old-school way of catching plagiarism cheats: you look for the original source the student plagiarized and confront the student with it, courtroom-drama style. But ChatGPT (and other AI tools) do not “copy” from other sources; rather, the AI creates original text every time. That’s why there have been several different articles crediting an AI as a “co-author.”

Instead of wasting a weekend, what Kate should have done is call each of these students into her office or take them aside one by one in a conference and ask them about their essays. If the students cheated, they would not be able to answer basic questions about what they handed in, and 99 times out of 100, the confronted cheating student will confess.

Because here’s the thing: despite all the alarm out there that all students are cheating constantly, my experience has been that the vast majority do not cheat like this, and they don’t want to cheat like this. Oh sure, students will sometimes “cut corners” by looking over at someone else’s answers on an exam, or maybe by adding a paragraph or two from something without citing it. But in my experience, the kind of over-the-top cheating Kate is worried about is extremely rare. Most students want to do the right thing by doing the work, trying to learn something, and trying their best– plus students don’t want to get in trouble for cheating either.

Further, the kinds of students who do try to blatantly plagiarize are not “criminal masterminds.” Far from it. Rather, students blatantly plagiarize when they are failing and desperate, and they are certainly not thinking of their “academic careers.” (And as a tangent: seems to me Kate might be overestimating the importance of her “Freshman English” class a smidge).

But here’s the other issue: what if Kate actually talked to these students, and what if it turned out they either did not realize using ChatGPT was cheating, or they used ChatGPT in a way that wasn’t significantly different from getting some help from the writing center or a friend? What do you do then? Because– and again, I wrote about this in December– when I asked students to use GPT-3 (OpenAI’s software before ChatGPT) to write an essay and then reflect on that process, a lot of them described the software as a brainstorming tool, sort of like a “coach,” and not a lot different from getting help from others in peer review or from a visit to the writing center.

So like I said, I don’t want to be too hard on Kate. I know that there are a lot of teachers who are similarly freaked out about students using AI to cheat, and I’m not trying to suggest that there is nothing to worry about either. I think a lot of what is being predicted as the “next big thing” with AI is either a lot further off in the future than we might think, or it is in the same category as other famous “just around the corner” technologies like flying cars. But no question that this technology is going to continue to improve, and there’s also no question that it’s not going away. So for the Kates out there: instead of spending your weekend on the impossible task of proving that those students cheated, why not spend a little of that time playing around with ChatGPT and seeing what you find out?

AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is that my students and I have spent the last month or so noodling around with the software and reading about both the potential and the dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, arguing that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and I thought it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And my students and I still had to edit the different pieces together, write transitions between AI-generated chunks of text, and so forth.

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool.  These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true of Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”

I think this limitation (along with the limitation that GPT-3 and ChatGPT are not capable of searching the internet) makes using GPT-3 as a plagiarism tool in any kind of research writing class kind of a deal-breaker. It certainly would not get students far in most sections of freshman comp where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts with writing assignments that are specific and that require an investment in things like prewriting, along with a series of assignments and activities that “scaffold” a larger writing assignment. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Most poorly designed writing assignments are good examples of the kinds of prompts you could enter into GPT-3. The results are definitely impressive, but I don’t think the tool is quite useful enough to produce work a would-be cheater could pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000-word requirement, they both kind of read like… well, like a robot wrote them. I think most instructors who received one of these essays from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well-researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and I then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing and it changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have spent only so much time experimenting with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” plan, I might have learned more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

“Synch Video is Bad,” perhaps a new research project?

As Facebook has been reminding me far too often lately, things were quite different last year. Last fall, Annette and I both had “faculty research fellowships,” which meant that neither of us was teaching because we were working on research projects. (It also meant we did A LOT of travel, but that’s a different post.) I was working on a project that was officially called “Investigating Classroom Technology Bans Through the Lens of Writing Studies,” a project I always referred to as the “Classroom Tech Bans are Bullshit” project.

It was going along well, albeit slowly. I gave a conference presentation about it all at the Great Lakes Writing and Rhetoric Conference in September, and by early October, I was circulating a snowball-sampling survey to students and instructors (via mailing lists, social media, etc.) about their attitudes toward laptops and devices in classes. I blogged about it some in December, and while I wasn’t making as much progress as quickly as I would have preferred, I was putting together a presentation for the CCCCs and getting ready to ramp up the next steps: sorting through the results of the survey and contacting individuals for follow-up case study interviews.

Then Covid.

Then the mad dash to shove students and faculty into the emergency lifeboats of makeshift online classes, kicking students out of the dorms with little notice, and a long and troubling summer of trying to plan ahead for the fall without knowing exactly what universities were going to do about where/in what mode/how to hold classes. Millions of people got sick, hundreds of thousands died, the world economy descended into chaos. And Black Lives Matter protests, Trump descending further into madness, forest fires, etc., etc.

It all makes the debate about laptops and cell phones in classes seem kind of quaint and old-fashioned and irrelevant, doesn’t it? So now I’m mulling over starting a different but similar project about faculty (and perhaps student) attitudes toward online courses– specifically toward synchronous video-conference online classes (mostly Zoom or Google Meetings).

Just to back up a step: after teaching online since about 2005, after doing a lot of research on best practices for online teaching, after doing a lot of writing and research about MOOCs, I’ve learned at least two things about teaching online:

  • Asynchronous instruction works better than synchronous instruction because of the affordances (and limitations) of the medium.
  • Video– particularly videos of professors just lecturing into a webcam while students (supposedly) sit and pay attention– is not very effective.

Now, conventional wisdom often turns out to be wrong, and I’ll get to that. Nonetheless, for folks who have been teaching online for a while, I don’t think either of these statements is remotely controversial or in dispute.

And yet, judging from what I see on social media, a lot of my colleagues who are teaching online this fall for the first time are completely ignoring these best practices: they’re teaching synchronous classes during the originally scheduled time of the course, and they are relying heavily on Zoom. In many cases (again, based on what I’ve seen on the internets), instructors have no choice: that is, the institution is requiring that classes originally scheduled as f2f be taught with synch video regardless of what the instructor wants to do, what the class is, or whether it makes any sense. But a lot of instructors are doing this to themselves (which, in a lot of ways, is even worse). In my department at EMU, all but a few classes are online this fall, and as far as I can tell, many (most?) of my colleagues have decided on their own to teach their classes synchronously with Zoom.

It doesn’t make sense to me at all. It feels like a lot of people are trying to reinvent the wheel, which in some ways is not that surprising because that’s exactly what happened with MOOCs. When the big MOOC providers like Coursera and Udacity and edX and many others got started, they didn’t reach out to universities that were already experienced with online teaching. Instead, they turned to their own and peer institutions– Stanford, Harvard, UC-Berkeley, Michigan, Duke, Georgia Tech, and lots of other high-profile flagships. In those early TED talks (like this one from Daphne Koller and this one from Peter Norvig), it really seems like these people sincerely believed that they were the first ones to ever actually think about teaching online, that they had stumbled across an undiscovered country. But I digress.

I think requiring students to meet synchronously online via Zoom is simply putting a square peg into a round hole. Imagine the logical opposite situation: say I was scheduled to teach an asynchronous online class that was suddenly changed into a traditional f2f class, something that meets Tuesdays and Thursdays from 10 to 11:15 am. Instead of changing my approach to this now different mode/medium, suppose I decided I was going to teach the class as an asynch online class anyway. I’d require everyone to physically show up to the class on Tuesdays and Thursdays at 10 am (I have no choice about that), but instead of taking advantage of the mode of teaching f2f, I’d do everything asynch and online. There’d be no conversation or acknowledgement that we were sitting in the same room. Students would only be allowed to interact with each other in the class LMS. No one would be allowed to actually talk to each other, though texting would be okay. Students would sit there for 75 minutes, silently doing their work but never allowed to speak with each other, and as the instructor, I would sit at the front of the room and do the same. We’d repeat this at every meeting for the entire semester.

A ridiculous hypothetical, right? Well, because I’m pretty used to teaching online, that’s what an all-Zoom class looks like to me.

The other problem I have with Zoom is its part in policing and surveilling both students and teachers. Inside Higher Ed and the Chronicle of Higher Education both published inadvertently hilarious op-eds written to an audience of faculty about how they should manage their own appearances and their “Zoom backgrounds” to project professionalism and respect. And consider this post on Twitter:


I can’t verify the accuracy of these rules, but it certainly sounds like it could be true. When online teaching came up in the first department meeting of the year (held on Zoom, of course), the main concern voiced by my colleagues who had never taught online before was dealing with students who misbehave in these online forums. I’ve seen similar kinds of discussions about how to surveil students from other folks on social media. And what could possibly motivate a teacher’s need to have bodily control over what their students do in their own homes to the point of requiring them to wear fucking shoes?

This kind of “soft surveillance” is bad enough, but as I understand it, one of the features Zoom sells to institutions is robust data on what users do with it: who is logged in, when, for how long, etc. I need to do a little more research on this, but as I was discussing on Facebook with my friend Bill Hart-Davidson (who is in a position to know more about this, both as an administrator and as someone who has done the scholarship), this is clearly data that can be used to effectively police both teachers’ and students’ behavior. The overlords might have the power to make us wear shoes at all times on Zoom after all.

On the other hand…

The conventional wisdom about teaching online asynchronously and without Zoom might be wrong, and that makes it potentially interesting to study. For example, the main reason online classes are almost always asynchronous is scheduling: that flexibility is what helps students take the classes in the first place. But if you could have a class that was mostly asynchronous but with some previously scheduled synchronous meetings as part of the mix, well, that might be a good thing. I’ve tried to teach hybrid classes in the past that approach this, though I think Zoom might make it a lot easier in all kinds of ways.

And I’m not a complete Zoom hater. I started using it (or Google Meetings) last semester in my online classes for one-on-one conferences, and I think it worked well for that. I actually prefer our department meetings on Zoom because it cuts down on the number of faculty who just want to pontificate about something for no good reason (and I should note I am very, very much one of those kinds of faculty members, at least once in a while). I’ve read faculty justifying their use of Zoom based on what they think students want, and maybe that turns out to be true too.

So, what I’m imagining here is another snowball-sample survey of faculty (maybe students as well) about their use of Zoom. I’d probably continue to focus on small writing classes because it’s my field and also because of the different ideas about what teaching means in different disciplines. As was the case with the “laptop bans are bullshit” project, I think I’d want to continue to focus on attitudes about online teaching generally and Zoom in particular, mainly because I don’t have the resources or skills as a researcher to do something like an experimental design that compares the effectiveness of a Zoom lecture versus a f2f one versus an asynchronous discussion on a topic– though as I type that, I think that could be a pretty interesting experiment. Assuming I could get folks to respond, I’d also want to use the survey to recruit participants for one-on-one interviews, which I think would provide more revealing and relevant data, at least for the basic questions I have now:

  • Why did you decide to use a lot of Zoom and do things synchronously?
  • What would you do differently next time?

What do you think, is this an idea worth pursuing?

Learning how to write is like learning how to roast a chicken. And vice-versa

I tried a new way to roast a chicken the other night, closely resembling this “Herbed Faux-tisserie Chicken and Potatoes” recipe from Bon Appétit. I’ve roasted a chicken with one recipe or another hundreds of times, but experimenting with a different recipe got me thinking about how learning to cook a simple meal suitable for sharing with others is like learning how to write. And vice-versa.

First, both are things that can be learned and/or taught. I think a lot of people– particularly people who don’t think they can cook or write– believe you either “have it” or you don’t. I’ve met lots of struggling students who have convinced themselves of this about writing, and I’ve also met a lot of creative writing types (from my MFA days long ago and into the present) who ought to know better but still believe this in a particularly naive way.

I believe everyone who manages to get themselves admitted to a college or university can learn from (the typically required) writing classes how to write better and also how to write well enough to express themselves to readers in college classes and beyond. I also believe that everyone with access to some basic tools– I’m thinking here of pots and pans, a rudimentary kitchen, pantry items, not to mention the food itself– can learn how to cook a meal they could serve to others.  Learning how to both write and cook might be more difficult for some people than others and the level of success different writers and cooks can reach will vary (and I’ll come back to this point), but that’s not the same thing as believing some  people “just can’t” cook or write.

Second, I think people who doubt their potential as cooks or writers make things more complicated than necessary, mainly because they just want to skip to the meal or completed essay. Trust the process, take your time, and go through the steps. If an inexperienced writer (and I’m thinking here of students in a class like first year writing) starts with something relatively simple and does the pre-writing, the research, the drafting, the peer review, all the stuff we do and talk about in contemporary writing classes, then they will be able to successfully complete that essay. If an inexperienced cook starts with something relatively simple– say roasting a chicken– and follows a well-written recipe and/or some of the many cooking tutorials on YouTube, then they will be able to roast that chicken.

Third, both writing and cooking take practice and self-reflection in order to improve. This seems logical enough since this is how we improve at almost anything– sports or dancing or painting or writing or cooking. But one of the longstanding challenges in writing pedagogy is “transference,” which is the idea that what a student learns in a first year writing class helps that student in other writing classes and situations.  Long and complex story short, the research suggests  this doesn’t work as well as you might think, possibly because students too often treat their required composition course as just another hoop, and possibly because teachers have to do more to make all this visible to students. Whether or not it gets taken up by students or conveyed by teachers, the goal of any college course (writing and otherwise) is to get better at something.

In my experience, the way this works with food is that when you’re first trying to learn how to roast a chicken, you do it for yourself (or for close family and/or roommates who basically have the choice to eat what you cook or not eat anything at all), and you make note of what you would do differently the next time you try to roast that chicken. Next time, I’ll cook it longer or shorter or with more salt or to a different temperature or whatever. A lot of my recipes have notes I’ve added for next time. Then the next time, you make different adjustments; repeat, make different adjustments; and before you know it, you can roast a chicken confidently enough to invite guests over for a dinner party. Also, the trial-and-error approach to following a recipe for chicken helps inform other recipes and foods, so you can serve those guests some mashed potatoes and green beans with that chicken, maybe even a little gravy.

Both writing and cooking involve skills and practices that build on each other and that then allow you both to improve on those basic skills and to develop more advanced ones. It was not easy for me to truss a chicken the first time I did it; now it’s no big deal. Writing a good short summary of an article and incorporating it into a short critique is very hard for a lot of first-year writing students. But keep practicing and it becomes second nature. I routinely have students in my first-year writing class who gasp when I tell them the first essay assignment should be around five pages because they have never written anything that long in high school. By the end of the semester, it’s no big deal.

Finally, there are limits to teaching and not everyone can succeed at becoming a “great” writer or cook. Never say never, of course, but I do not think there is much chance my cooking or recipes will ever be compared to the likes of Julia Child or Thomas Keller, nor do I think my writing is going to be assigned reading for generations to come. I don’t like words like “gifted” or “genius” because people aren’t better at things because of something magical. But for the top 1% of writers/cooks/athletes/actors/etc., there is something. At the same time, it’s also extremely clear that the top 1% of writers/cooks/whatever get to that level through hard work and obsession. It’s a feedback loop.

So for example: it’d be silly to describe myself as a “gifted” writer, but I am good at it and I have always had a knack for it.  I’ve been praised for my writing since I was in grade school (though I did fail handwriting, but that’s another story) and it isn’t surprising to me that I’ve ended up in this profession and I’m still writing. That praise and reward motivates me to continue to like writing and to work to improve at it. I spend a lot of time revising and changing and obsessing and otherwise fiddling around with things I write (I have revised this post about a dozen or more times since I started it a week ago).

In any event, even if I have some kind of “gift,” it ends up being just one part of a chicken-and-egg argument. Being praised for being a good writer motivates me to write more; writing more improves my writing and earns me praise as I get better. A knack alone is not enough for anything, including writing or cooking.

Oh, and for what it’s worth: I thought that recipe was just okay. I liked the idea of the rotisserie-like spice rub and I can see doing that again, maybe putting it on a few hours or the day before. But cooking at 300 degrees (instead of starting it at, say, 425 and then dropping it back to 350 after about 20 minutes) meant not a whole lot of browning and kind of rubbery skin.

A post about an admittedly not-fully-thought-out idea: very low-bar access

The other day, I came across this post on Twitter from Derek Krissoff, who is the director of the West Virginia University Press.

I replied to Derek’s Tweet “Really good point and reminds me of a blog post I’ve been pondering for a long time on not ‘Open Access’ but something like ‘Very Low Bar Access,'” and he replied to my reply “Thanks, and I’d love to see a post along those lines. It’s always seemed to me access is best approached as a continuum instead of a binary.” (By the way, click on that embedded Twitter link and you’ll see there are lots of interesting replies to his post).

So, that’s why I’m writing this now.

Let me say three things at the outset: first, while I think I have some expertise and experience in this area, I’m not a scholar studying copyright or “Open Educational Resources” (OER) or similar things. Second, this should in no way be interpreted as me saying bad things about Parlor Press or Utah State University Press. Both publishers have been delightful to work with and I’d recommend them to any academic looking for a home for a manuscript– albeit different kinds of homes. And third, my basic idea here is so simple it perhaps already exists in academic publishing and I just don’t know about it, and I know something like it already exists outside of academia in the many different self-publishing options out there.

Here’s my simple idea: instead of making OER/open-sourced publications completely free and open to anyone (or any ‘bot) with an internet connection, why not publish materials for a low cost, say somewhere between $3 and $5?

The goal is not to come up with a way for writers and publishers to “make money” exactly, though I am not against people being paid for their work nor am I against publishers and other entities being compensated for the costs of distributing “free” books. Rather, the idea is to make access easy for likely interested readers while maintaining a modest amount of control over how a text travels and is repurposed on the internet.

I’ve been kicking this idea around ever since the book I co-edited, Invasion of the MOOCs, was published in 2014. My co-editor (Charlie Lowe) and I wanted to simultaneously publish the collection in traditional print and as a free PDF, both because we believed (still do, I think) in the principles of open access academic publishing and because we frankly thought it would sell books. We also knew the force behind Parlor Press, David Blakesley (this Amazon author page has the most extensive bio, so that’s why I’m linking to that), was committed to the concept of OER and alternatives to “traditional” publishing– which is one of the reasons he started Parlor Press in the first place.

It’s also important to recognize that Invasion of the MOOCs was a quasi-DIY project. Among other things, I (along with the co-authors) managed most of the editing work of the book, and Charlie managed most of the production aspects of the book, paying a modest price for the cover art and doing the typesetting and indexing himself thanks to his knowledge of Adobe’s InDesign. In other words, the up-front costs of producing this book from Parlor Press’ point of view were small, so there was little to lose in making it available for free.

Besides being about a timely topic when it came out, I think distributing it free electronically helped sell the print version of the book. I don’t know exactly how many copies it has sold, but I know it has ended up in libraries all over the world. I’m pretty sure a lot (if not most) of the people/libraries who went ahead and bought the print book did so after checking out the free PDF. So giving away the book did help, well, sell books.

But in hindsight, I think there were two problems with the “completely free” download approach. First, when a publisher/writer puts something like a PDF up on the web for any person or any web-crawling ‘bot to download, they get a skewed perspective on readership. Like I said, Invasion of the MOOCs has been downloaded thousands of times– which is great, since I can now say I edited a book that’s been downloaded thousands of times (aren’t you impressed?). But the vast majority of those downloads just sat on a user’s hard drive and then ended up in the (electronic) trash after never being read at all. (Full disclosure: I have done this many times.) I don’t know if this is irony or what, but it’s worth pointing out this is exactly what happened with MOOCs: tens of thousands of would-be students signed up and then never once returned to the course.

Second and more important, putting the PDF up there as a free download means the publisher/writer loses control over how the text is redistributed. I still have a “Google alert” that sends me an email whenever it comes across a new reference to Invasion of the MOOCs on the web, and most of the alerts I have gotten over the years are harmless enough. The book gets redistributed by other OER sites, linked to on bookmarking sites like Pinterest, and embedded into SlideShare slide shows.

But sometimes the re-publishing/redistribution goes beyond the harmless and odd. I’ve gotten Google alerts to the book linked to/embedded in web sites like this page from Ebook Unlimited, which (as far as I can tell) is a very sketchy site where you can sign up for a “free trial” to their book service. In the last couple of years, most of the Google alert notices I’ve received have been broken links, paper mill sites, “congratulations, you won” pop-up/virus sites, and similar weirdo sites decidedly not about the book I edited or anything about MOOCs (despite what the Google alert says).

In contrast, the book I have coming out very soon, More Than a Moment, is being published by Utah State University Press and will not be available for a free download– at least not for a while. On the positive side of things, working with USUP (which is an imprint of the University Press of Colorado) means this book has had a more thorough (and traditional) editorial review, and the copyediting, indexing, and typesetting/jacket design have all been done by professionals. On the downside, the lack of a free-to-download version will mean this book will probably end up having fewer readers (thus less reach and fewer sales), and, as is the case with most academic books, I’ve had to pay for some of the production costs with grant money from EMU and/or out of my own pocket.

These two choices put writers/publishers in academia in a no-win situation. Open access publishing is a great idea, but besides the fact that nothing is “free” in the sense of having no financial costs associated with it (even maintaining a web site for distributing open access texts costs some money), it becomes problematic when a free text is repurposed by a bad actor to sell a bad service or to get users to click on a bad link. Traditional print publishing costs money and necessarily means fewer potential readers. At the same time, the money spent on publishing these more traditional print publications does show up in a “better” product, and it does offer a bit more control over the book. Maybe I’m kidding myself, but I do not expect to see a Google alert for the More than a Moment MOOC book lead me to a web site where clicking on the link will sign me up for some service I don’t want or download a virus.

So this is where I think “very low-bar access” publishing could split the difference between the “completely free and online” and the “completely not free and in print” options in academic publishing. Let’s say publishers charged as small a fee as possible for downloading a PDF of the book. I don’t know exactly how much, but to cover the costs of running a web site capable of selling PDFs in the first place and for the publisher/writer to make at least a little bit of money for their labor, I’d guess around $3 to $5.

The disadvantage of this is (obviously) that any amount of money charged is going to be more than “free,” and it is also going to require a would-be reader to pass through an additional step to pay before downloading the text. That’s going to cut down on downloads A LOT. On the other hand, I think it’s fair to say that if someone bothers to fill out the necessary online form and plunks down $5, there’s a pretty good chance that person is going to at least take a look at the text. And honestly, 25-100 readers/book skimmers are worth more to me than 5,000 people who just download the PDF. It’s especially worth it if this low-bar access proves to be too much of a barrier for the dubious redirect sites, virus makers, and paper mill sites.

I suppose another disadvantage of this model is that if someone can download a PDF version of an academic book for $5 to avoid spending $20-30 (or, in some cases, a lot more than that) for the paper version, then the publisher will sell fewer paper books. That is entirely possible. The opposite is also possible though: the reader spends $5 on the PDF, finds the book useful/interesting, and then that reader opts to buy the print book. I do this often enough, especially with texts I want/need for teaching and scholarship.

So, there you have it, very low-bar access. It’s an idea– maybe not a particularly original one, maybe even not a viable one. But it’s an idea.

The “Grievance Studies” Hoax and the IRB Process

From Inside Higher Ed comes “Blowback Against a Hoax.” The “hoax” in question happened last fall, and it was described in a very long read on the web site Areo, “Academic Grievance Studies and the Corruption of Scholarship.” In a nutshell, three academics created some clearly ridiculous articles and sent them to a variety of journals to see if they could be published. Their results garnered a lot of MSM attention (I think there were articles in The Wall Street Journal and The New York Times). And, judging from a quick glance at the shared Google Drive folder for this project, it is very clear that the authors (James A. Lindsay, Peter Boghossian, and Helen Pluckrose) were trying to “expose” and (I’d argue) humiliate the academics who they believe publish (or refuse to publish) certain kinds of scholarship because of “political correctness.”

Well, now Boghossian (who is an assistant professor at Portland State) is in trouble with that institution because he didn’t follow the rules for dealing with human subjects, aka IRB (Institutional Review Board) approval.

Read the article, of course, but I’d also recommend watching the video the group posted in their defense on January 5. I think it says a lot about the problem here– and, IMO, Boghossian and his colleagues do not exactly look like they knew what they were doing.


(I posted what follows here– more or less– as a comment on the article which might or might not show up there, but I thought I’d copy and paste it here too):

It’s a fascinating problem and one I’m not quite sure what to do with. On the one hand, I think the Sokal 2.0 folks engaged in a project designed to expose some of the problems with academic publishing, a real and important topic for sure. On the other hand, they did it in a way that was kind of jerky and also in a way that was designed to embarrass and humiliate editors and reviewers for these journals.

The video that accompanies this article is definitely worth watching, and to me it reveals that these people knew very VERY little about IRB protocols. Now, I’m not an expert on all the twists and turns of IRB, but I do teach a graduate-level course in composition and rhetoric research methods (I’m teaching it this semester), I’m “certified” to conduct human subjects research, I teach my students how to be certified, I regularly interact with the person who is in charge of the IRB process, and I also have gone through the process with a number of my own projects. In my field, the usual goal is to be “exempt” from IRB oversight: in other words, the usual process is to fill out the paperwork and explain to the IRB people “hey, we’re doing this harmless thing but it involves people and we might not be able to get consent, is that okay” and for their response to be “sure, you can do that.”

So the first mistake these people made was they didn’t bother to tell their local IRB, I presume because these researchers had never done this kind of thing before, and, given their academic backgrounds, they probably didn’t know a whole lot about what does or doesn’t fall under IRB. After all, the three folks who did this stuff have backgrounds in math, philosophy, and “late medieval/early modern religious writing by and about women,” not exactly fields where learning about IRB and the rules for human subjects is a part of graduate training.

If these folks had followed the rules, I have no idea what the Portland State IRB would have said about this study. The whole situation will make for an interesting topic of discussion in the research methods course I’m teaching this term, especially when the local director of the IRB visits class. But I do know three things:

  • It is possible to put together an IRB-approved study where you don’t have to get participant consent if you explain why it wouldn’t be possible to get that consent and/or where the risk to participants is minimal.
  • If you put together a study where you purposefully deceive subjects (like sending editors and reviewers fake scholarship trying to get them to publish it), then that study is going to be supervised by the IRB. And if that study potentially embarrasses or humiliates its subjects and thus causes them harm (which, as far as I can tell from what I’ve read, was actually the point of this project), then there’s a good chance the IRB folks would not allow that project to continue.
  • Saying something along the lines of “We didn’t involve the IRB process because they probably wouldn’t have approved anyway” (as they more or less say in this video, actually) is not an acceptable excuse.

I don’t think Boghossian should lose his job. But I do think he should apologize and, if I were in a position of power at Portland State, I’d insist that he go through the IRB training for faculty on that campus.