Why Teaching Citation Practices (yes, I’m talking MLA/APA style) is Even More Important with AI

A couple of weeks ago, I wrote about why I use Google Docs to teach writing at all levels. I’ve been using it for years– long before AI was a thing– in part because being able to see the history of a student’s Google Doc is a teachable moment on the importance of the writing and revision process. It has the added bonus of making it obvious when a student is skipping that work (by using AI, by copying and pasting from the internet, by stealing a paper from someone else, etc.) because the document history goes from nothing to a complete document in one step. I’m not saying that automatically means the student cheated, but it does prompt me to have a chat with that student.

In a similar vein, and as I think about putting together my classes for the fall term, I thought I’d write about why teaching citation practices is increasingly important in research writing courses, particularly first year composition.

TL;DR version: None of this is new or innovative; rather, this is standard “teaching writing as a process” pedagogy and I’ve been teaching research writing like this for decades. But I do think it is even more important to teach citation skills now to help my students distinguish between the different types of sources, almost all of which are digital rather than on paper. Plus this is an assignment where AI might help, but I don’t think it’d help much.


Why I Use Google Docs to Teach Writing, Especially in the Age of AI

I follow a couple of different Facebook groups about AI, each of which has become a firehose of posts lately, a mix of cool new things and brand new freakouts. A while back, someone in one of these groups posted about an app to track the writing process in a student’s document as a way of proving that the text was not AI. My response to this was “why not just use Google Docs?”

I wish I could be more specific than this, but I can’t find the original post or my comment on it; maybe it was deleted. Anyway, this person asked what I meant, and I explained it briefly, but then I said I was thinking about writing a blog post about it. Here is that post.

For those interested in the tl;dr version: I think the best way to discourage students from handing in work they didn’t create (be that from a papermill, something copied and pasted from websites, or AI) is to teach writing rather than merely assigning writing. That’s not “my” idea; that’s been the mantra in writing studies for at least 50 years. Also not a new idea and one you already know if you use and/or teach with Google docs: it is a great tool for teaching writing because it helps with peer review and collaborative writing, and the version history feature helps me see a student’s writing process, from the beginning of the draft through revisions. And if a student’s draft goes from nothing to complete in one revision, well, then that student and I have a chat.


The Problem is Not the AI

The other day, I heard the opening of this episode of the NPR call-in show 1A, “Know It All: ChatGPT In the Classroom.” It opened with this recorded comment from a listener named Kate:

“I teach freshman English at a local university, and three of my students turned in chatbot papers written this past week. I spent my entire weekend trying to confirm they were chatbot written, then trying to figure out how to confront them, to turn them in as plagiarists, because that is what they are, and how I’m going to penalize their grade. This is not pleasant, and this is not a good temptation. These young men’s academic careers now hang in the balance because now they’ve been caught cheating.”

Now, I didn’t listen to the show for long beyond this opener (I was driving around running errands), and based on what’s available on the website, the discussion also included information about incorporating ChatGPT into teaching. Also, I don’t want to be too hard on poor Kate; she’s obviously really flustered, and I am guessing there were a lot of teachers listening to her story who could relate all too personally.

But look, the problem is not the AI.

Perhaps Kate was teaching a literature class and not a composition and rhetoric class, but let’s assume whatever “freshman English” class she was teaching involved a lot of writing assignments. As I mentioned in the last post I had about AI and teaching with GPT-3 back in December, there is a difference between teaching writing and assigning writing. This is especially important in classes where the goal is to help students become better at the kind of writing skills they’ll need in other classes and “in life” in general.

Teaching writing means a series of assignments that build on each other, that involve brainstorming and prewriting activities, and that involve activities like peer reviews, discussions of revision, reflection from students on the process, and so forth. I require students in my first year comp/rhet classes to “show their work” through drafts, in a way similar to how they’d be expected to show their work in an Algebra or Calculus course. It’s not just the final answer that counts. In contrast, assigning writing is when teachers give an assignment (often a quite formulaic one, like write a 5 paragraph essay about ‘x’) with no opportunities to talk about getting started, no consideration of audience or purpose, no interaction with the other students who are trying to do the same assignment, and no opportunity to revise or reflect.

While obviously more time-consuming and labor-intensive, teaching writing has two enormous advantages over only assigning writing. First, we know it “works” in that this approach improves student writing– or at least we know it works better than only assigning writing and hoping for the best. We know this because people in my field have been studying this for decades, despite the fact that there are still a lot of people just assigning writing, like Kate. Second, teaching writing makes it extremely difficult to cheat in the way Kate’s students have cheated– or maybe cheated. When I talk to my students about cheating and plagiarism, I always ask “why do you think I don’t worry much about you doing that in this class?” Their answer typically is “because we have to turn in all this other stuff too” and “because it would be too much work,” though I also like to believe that because of the way the assignments are structured, students become interested in their own writing in a way that makes cheating seem silly.

Let me just note that what I’m describing has been the conventional wisdom among specialists in composition and rhetoric for at least the last 30 (and probably more like 50) years. None of this is even remotely controversial in the field, nor is any of this “new.”

But back to Kate: certain that these three students turned in “chatbot papers,” she spent the “entire weekend” working to prove these students committed the crime of plagiarism and they deserve to be punished. She thinks this is a remarkably serious offense– their “academic careers now hang in the balance”– but I don’t think she’s going through all this because of some sort of abstract and academic ideal. No, this is personal. In her mind, these students did this to her and she’s going to punish them. This is beyond a sense of justice. She’s doing this to get even.

I get that feeling, that sense that her students betrayed her. But there’s no point in making teaching about “getting even” or “winning” because as the teacher, you create the game and the rules, you are the best player and the referee, and you always win. Getting even with students is like getting even with a toddler.

Anyway, let’s just assume for a moment that Kate’s suspicions are correct and these three students handed in essays created entirely by ChatGPT. First off, anyone who teaches classes like “Freshman English” should not need an entire weekend or any special software to figure out if these essays were written by an AI. Human writers– at all levels, but especially comparatively inexperienced human writers– do not compose the kind of uniform, grammatically correct, and robotically plodding prose generated by ChatGPT. Every time I see an article with a passage of text that asks “was this written by a robot or a student,” I always guess right– well, almost always I guess right.

Second, if Kate did spend her weekend trying to find “the original” source ChatGPT used to create these essays, she certainly came up empty-handed. That was the old school way of catching plagiarism cheats: you look for the original source the student plagiarized and confront the student with it, courtroom-drama style. But ChatGPT (and other AI tools) do not “copy” from other sources; rather, the AI creates original text every time. That’s why there have been several different articles crediting an AI as a “co-author.”

Instead of wasting a weekend, what Kate should have done is call each of these students into her office or take them aside one by one in a conference and ask them about their essays. If the students cheated, they would not be able to answer basic questions about what they handed in, and 99 times out of 100, the confronted cheating student will confess.

Because here’s the thing: despite all the alarm out there that all students are cheating constantly, my experience has been that the vast majority do not cheat like this, and they don’t want to cheat like this. Oh sure, students will sometimes “cut corners” by looking over at someone else’s answers on an exam, or maybe by adding a paragraph or two from something without citing it. But in my experience, the kind of over-the-top cheating Kate is worried about is extremely rare. Most students want to do the right thing by doing the work, trying to learn something, and trying their best– plus students don’t want to get in trouble for cheating, either.

Further, the kinds of students who do try to blatantly plagiarize are not “criminal masterminds.” Far from it. Rather, students blatantly plagiarize when they are failing and desperate, and they are certainly not thinking of their “academic careers.” (And as a tangent: seems to me Kate might be overestimating the importance of her “Freshman English” class a smidge).

But here’s the other issue: what if Kate actually talked to these students, and what if it turned out they either did not realize using ChatGPT was cheating, or they used ChatGPT in a way that wasn’t significantly different from getting some help from the writing center or a friend? What do you do then? Because– and again, I wrote about this in December– when I asked students to use GPT-3 (OpenAI’s software before ChatGPT) to write an essay and then reflect on that process, a lot of them described the software as a brainstorming tool, sort of like a “coach,” and not a lot different from getting help from others in peer review or from a visit to the writing center.

So like I said, I don’t want to be too hard on Kate. I know that there are a lot of teachers who are similarly freaked out about students using AI to cheat, and I’m not trying to suggest that there is nothing to worry about either. I think a lot of what is being predicted as the “next big thing” with AI is either a lot further off in the future than we might think, or it is in the same category as other famous “just around the corner” technologies like flying cars. But no question that this technology is going to continue to improve, and there’s also no question that it’s not going away. So for the Kates out there: instead of spending your weekend on the impossible task of proving that those students cheated, why not spend a little of that time playing around with ChatGPT and seeing what you find out?

AI Can Save Writing by Killing “The College Essay”

I finished reading and grading the last big project from my “Digital Writing” class this semester, an assignment that was about the emergence of openai.com’s artificial intelligence technologies GPT-3 and DALL-E. It was interesting and I’ll probably write more about it later, but the short version for now is my students and I have spent the last month or so noodling around with software and reading about both the potentials and dangers of rapidly improving AI, especially when it comes to writing.

So the timing of Stephen Marche’s recently published commentary with the clickbaity title “The College Essay Is Dead” in The Atlantic could not be better– or worse? It’s not the first article I’ve read this semester along these lines, that GPT-3 is going to make cheating on college writing so easy that there simply will not be any point in assigning it anymore. Heck, it’s not even the only one in The Atlantic this week! Daniel Herman’s “The End of High-School English” takes a similar tack. In both cases, they claim, GPT-3 will make the “essay assignment” irrelevant.

That’s nonsense, though it might not be nonsense in the not so distant future. Eventually, whatever comes after GPT-3 and ChatGPT might really mean teachers can’t get away with only assigning writing. But I think we’ve got a ways to go before that happens.

Both Marche and Herman (and just about every other mainstream media article I’ve read about AI) make it sound like GPT-3, DALL-E, and similar AIs are as easy as working the computer on the Starship Enterprise: ask the software for an essay about some topic (Marche’s essay begins with a paragraph about “learning styles” written by GPT-3), and boom! you’ve got a finished and complete essay, just like asking the replicator for Earl Grey tea (hot). That’s just not true.

In my brief and amateurish experience, using GPT-3 and DALL-E is all about entering a carefully worded prompt. Figuring out how to come up with a good prompt involved trial and error, and I thought it took a surprising amount of time. In that sense, I found the process of experimenting with prompts similar to the kind of invention/pre-writing activities I teach to my students and that I use in my own writing practices all the time. None of my prompts produced more than about two paragraphs of useful text at a time, and that was the case for my students as well. Instead, what my students and I both ended up doing was entering several different prompts based on the output we were hoping to generate. And we still had to edit the different pieces together, write transitions between the AI-generated chunks of text, and so forth.
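(If you’re curious what that piecemeal workflow looks like written down, here’s a minimal sketch in Python, as if my students and I had been scripting against OpenAI’s completion API instead of typing into the free web playground like we actually did. The model name, parameters, and the little helper function are my assumptions for the sake of illustration, not what we actually ran.)

```python
# A rough sketch of the "several prompts, then stitch it together" workflow.
# Assumes the pre-1.0 openai Python library and a GPT-3 completion model;
# these details are illustrative, not what my students and I actually used.
import openai

openai.api_key = "sk-..."  # your own API key goes here

def draft_chunk(prompt, max_tokens=300):
    """Ask the model for one short chunk of text from a single, narrow prompt."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# Each prompt only reliably yields a paragraph or two, so you end up
# asking for the essay in pieces...
prompts = [
    "Write an introductory paragraph about how college students use social media.",
    "Write a paragraph about one downside of social media for college students.",
    "Write a concluding paragraph about college students and social media.",
]
chunks = [draft_chunk(p) for p in prompts]

# ...and a human still has to assemble the pieces, cut the junk, and write
# the transitions between the AI-generated chunks.
print("\n\n[transition goes here]\n\n".join(chunks))
```

The point of the sketch is the shape of the work: several narrow prompts, a paragraph or two back from each one, and a person doing the assembling and revising.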

In their essays, some students reflected on the usefulness of GPT-3 as a brainstorming tool.  These students saw the AI as a sort of “collaborator” or “coach,” and some wrote about how GPT-3 made suggestions they hadn’t thought of themselves. In that sense, GPT-3 stood in for the feedback students might get from peer review, a visit to the writing center, or just talking with others about ideas. Other students did not think GPT-3 was useful, writing that while they thought the technology was interesting and fun, it was far more work to try to get it to “help” with writing an essay than it was for the student to just write the thing themselves.

These reactions square with the results in more academic/less clickbaity articles about GPT-3. This is especially true about  Paul Fyfe’s “How to cheat on your final paper: Assigning AI for student writing.” The assignment I gave my students was very similar to what Fyfe did and wrote about– that is, we both asked students to write (“cheat”) with AI (GPT-2 in the case of Fyfe’s article) and then reflect on the experience. And if you are a writing teacher reading this because you are curious about experimenting with this technology, go and read Fyfe’s article right away.

Oh yeah, one of the other major limitations of GPT-3’s usefulness as an academic writing/cheating tool: it cannot do even basic “research.” If you ask GPT-3 to write something that incorporates research and evidence, it either doesn’t comply or it completely makes stuff up, citing articles that do not exist. Let me share a long quote from a recent article at The Verge by James Vincent on this:

This is one of several well-known failings of AI text generation models, otherwise known as large language models or LLMs. These systems are trained by analyzing patterns in huge reams of text scraped from the web. They look for statistical regularities in this data and use these to predict what words should come next in any given sentence. This means, though, that they lack hard-coded rules for how certain systems in the world operate, leading to their propensity to generate “fluent bullshit.”

I think this limitation (along with the fact that GPT-3 and ChatGPT are not capable of searching the internet) makes GPT-3 pretty much a deal-breaker as a plagiarism tool in any kind of research writing class. It certainly would not get students far in most sections of freshman comp, where they’re expected to quote from other sources.

Anyway, the point I’m trying to make here (and this is something that I think most people who teach writing regularly take as a given) is that there is a big difference between assigning students to write a “college essay” and teaching students how to write essays or any other genre. Perhaps when Marche was still teaching Shakespeare (before he was a novelist/cultural commentator, Marche earned a PhD specializing in early English drama), he assigned his students to write an essay about one of Shakespeare’s plays. Perhaps he gave his students some basic requirements about the number of words and some other mechanics, but that was about it. This is what I mean by only assigning writing: there’s no discussion of audience or purpose, there are no opportunities for peer review or drafts, there is no discussion of revision.

Teaching writing is a process. It starts by making writing assignments that are specific and that require an investment in things like prewriting and a series of assignments and activities that are “scaffolding” for a larger writing assignment. And ideally, teaching writing includes things like peer reviews and other interventions in the drafting process, and there is at least an acknowledgment that revision is a part of writing.

Most poorly designed assigned writing tasks read like prompts you could enter directly into GPT-3. The results are definitely impressive, but I don’t think the software is quite useful enough to produce work a would-be cheater can pass off as their own. For example, I asked ChatGPT (twice) to “write a 1000 word college essay about the theme of insanity in Hamlet” and it came up with this and this essay. ChatGPT produced some impressive results, sure, but besides the fact that both of these essays are significantly shorter than the 1000 word requirement, they both kind of read like… well, like a robot wrote them. I think most instructors who received one of these essays from a student– particularly in an introductory class– would suspect that the student cheated. When I asked ChatGPT to write a well-researched essay about the theme of insanity in Hamlet, it managed to produce an essay that quoted from the play, but not any research about Hamlet.

Interestingly, I do think ChatGPT has some potential for helping students revise. I’m not going to share the example here (because it was based on actual student writing), but I asked ChatGPT to “revise the following paragraph so it is grammatically correct” and I then added a particularly pronounced example of “basic” (developmental, grammatically incorrect, etc.) writing. The results didn’t improve the ideas in the writing, and the software changed only a few words. But it did transform the paragraph into a series of grammatically correct (albeit not terribly interesting) sentences.
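(Here’s roughly what that kind of request looks like, with an invented paragraph standing in for the student writing I’m not going to share. I did this through the ChatGPT web interface, so the API call and model below are, again, just my assumptions for the sake of a readable example.)

```python
# Sketch of using the model as a sentence-level "corrector" rather than an idea generator.
# The sample paragraph is invented; the API details are assumptions, since I actually
# did this through the ChatGPT web interface, not the API.
import openai

openai.api_key = "sk-..."

student_paragraph = (
    "me and my freinds goes to the library alot becuase it quiet there "
    "and we can studying without no distractions from are phones"
)

prompt = (
    "Revise the following paragraph so it is grammatically correct. "
    "Do not add any new ideas.\n\n" + student_paragraph
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=150,
    temperature=0,  # low temperature: we want correction, not invention
)

# Expect cleaner sentences, not better ideas.
print(response.choices[0].text.strip())
```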

In any event, if I were a student intent on cheating on this hypothetical assignment, I think I’d just do a Google search for papers on Hamlet instead. And that’s one of the other things Marche and these other commentators have left out: if a student wants to complete a badly designed “college essay” assignment by cheating, there are much much better and easier ways to do that right now.

Marche does eventually move on from “the college essay is dead” argument by the end of his commentary, and he discusses how GPT-3 and similar natural language processing technologies will have a lot of value to humanities scholars. Academics studying Shakespeare now have a reason to talk to computer science-types to figure out how to make use of this technology to analyze the playwright’s origins and early plays. Academics studying computer science and other fields connected to AI will now have a reason to maybe talk with the English-types as to how well their tools actually can write. As Marche says at the end, “Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance.”

Plus I have to acknowledge that I have only been able to experiment so much with my openai.com account because I still only have the free version. That was enough access for my students and me to noodle around enough to complete a short essay composed with the assistance of GPT-3 and to generate an accompanying image with DALL-E. But that was about it. Had I signed up for openai.com’s “pay as you go” payment plan, I might have learned more about how to work this thing, and maybe I would have figured out better prompts for that Hamlet assignment. Besides all that, this technology is getting better alarmingly fast. We all know whatever comes after ChatGPT is going to be even more impressive.

But we’re not there yet. And when it is actually as good as Marche fears it might be, and if that makes teachers rethink how they might teach rather than assign writing, that would be a very good thing.

“Synch Video is Bad,” perhaps a new research project?

As Facebook has been reminding me far too often lately, things were quite different last year. Last fall, Annette and I both had “faculty research fellowships,” which meant that neither of us were teaching because we were working on research projects. (It also meant we did A LOT of travel, but that’s a different post). I was working on a project that was officially called “Investigating Classroom Technology bans Through the Lens of Writing Studies,” a project I always referred to as the “Classroom Tech Bans are Bullshit” project.

It was going along well, albeit slowly. I gave a conference presentation about it all at the Great Lakes Writing and Rhetoric Conference in September, and by early October, I was circulating a snowball sampling survey to students and instructors (via mailing lists, social media, etc.) about their attitudes toward laptops and devices in classes. I blogged about it some in December, and while I wasn’t making as much progress as quickly as I would have preferred, I was putting together a presentation for the CCCCs and getting ready to ramp up the next steps: sorting through the results of the survey and contacting individuals for follow-up case study interviews.

Then Covid.

Then the mad dash to shove students and faculty into the emergency lifeboats of makeshift online classes, kicking students out of the dorms with little notice, and a long and troubling summer of trying to plan ahead for the fall without knowing exactly what universities were going to do about where/in what mode/how to hold classes. Millions of people got sick, hundreds of thousands died, the world economy descended into chaos. And Black Lives Matter protests, Trump descending further into madness, forest fires, etc., etc.

It all makes the debate about laptops and cell phones in classes seem kind of quaint and old-fashioned and irrelevant, doesn’t it? So now I’m mulling over starting a different but similar project about faculty (and perhaps student) attitudes about online courses– specifically about synchronous video-conference online classes (mostly on Zoom or Google Meet).

Just to back up a step: after teaching online since about 2005, after doing a lot of research on best practices for online teaching, after doing a lot of writing and research about MOOCs, I’ve learned at least two things about teaching online:

  • Asynchronous instruction works better than synchronous instruction because of the affordances (and limitations) of the medium.
  • Video– particularly videos of professors just lecturing into a webcam while students (supposedly) sit and pay attention– is not very effective.

Now, conventional wisdom often turns out to be wrong, and I’ll get to that. Nonetheless, for folks who have been teaching online for a while, I don’t think either of these statements are remotely controversial or in dispute.

And yet, judging from what I see on social media, a lot of my colleagues who are teaching online this fall for the first time are completely ignoring these best practices: they’re teaching synchronous classes during the originally scheduled time of the course and they are relying heavily on Zoom. In many cases (again, based on what I’ve seen on the internets), instructors have no choice: that is, the institution is requiring that classes originally scheduled as f2f be taught with synch video regardless of what the instructor wants to do, what the class is, and whether it makes any sense. But a lot of instructors are doing this to themselves (which, in a lot of ways, is even worse). In my department at EMU, all but a few classes are online this fall, and as far as I can tell, many (most?) of my colleagues have decided on their own to teach their classes synchronously and with Zoom.

It doesn’t make sense to me at all. It feels like a lot of people are trying to reinvent the wheel, which in some ways is not that surprising because that’s exactly what happened with MOOCs. When the big MOOC providers like Coursera, Udacity, edX, and many others got started, they didn’t reach out to universities that were already experienced with online teaching. Instead, they drew from their own and peer institutions– Stanford, Harvard, UC-Berkeley, Michigan, Duke, Georgia Tech, and lots of other high-profile flagships. In those early TED talks (like this one from Daphne Koller and this one from Peter Norvig), it really, really seems like these people sincerely believed that they were the first ones to ever actually think about teaching online, that they had stumbled across an undiscovered country. But I digress.

I think requiring students to meet online and synchronously for a class via Zoom is simply putting a round peg into a square hole. Imagine the logical opposite situation: say I was scheduled to teach an asynchronous online class that was suddenly changed into a traditional f2f class, something that meets Tuesdays and Thursdays from 10 am to 11:45 am. Instead of changing my approach to this now different mode/medium, I decided I was going to teach the class as an asynch online class anyway. I’d require everyone to physically show up to the class on Tuesdays and Thursdays at 10 am (I’d have no choice about that), but instead of taking advantage of the f2f mode of teaching, I’d do everything asynch and online. There’d be no conversation or acknowledgement that we were sitting in the same room. Students would only be allowed to interact with each other in the class LMS. No one would be allowed to actually talk to each other, though texting would be okay. Students would sit there for the entire class period, silently doing their work but never speaking with each other, and as the instructor, I would sit at the front of the room and do the same. We’d repeat this at every meeting for the entire semester.

A ridiculous hypothetical, right? Well, because I’m pretty used to teaching online, that’s what an all-Zoom class looks like to me.

The other problem I have with Zoom is its part in policing and surveilling both students and teachers. Inside Higher Ed and the Chronicle of Higher Education both published inadvertently hilarious op-eds written to an audience of faculty about how they should manage their own appearances and their “Zoom backgrounds” to project professionalism and respect. And consider this post on Twitter:


I can’t verify the accuracy of these rules, but it certainly sounds like it could be true. When online teaching came up in the first department meeting of the year (held on Zoom, of course), the main concern voiced by my colleagues who had never taught online before was dealing with students who misbehave in these online forums. I’ve seen similar kinds of discussions about how to surveil students from other folks on social media. And what could possibly motivate a teacher’s need to have bodily control over what their students do in their own homes to the point of requiring them to wear fucking shoes?

This kind of “soft surveillance” is bad enough, but as I understand it, one of the features Zoom sells to institutions is robust data on what users do with it: who is logged in, when, for how long, etc. I need to do a little more research on this, but as I was discussing on Facebook with my friend Bill Hart-Davidson (who is in a position to know more about this, both as an administrator and as someone who has done the scholarship), this is clearly data that can be used to effectively police both teachers’ and students’ behavior. The overlords might have the power to make us wear shoes at all times on Zoom after all.

On the other hand…

The conventional wisdom about teaching online asynchronously and without Zoom might be wrong, and that makes it potentially interesting to study. For example, the main reason why online classes are almost always asynchronous is scheduling: that flexibility is what makes it possible for many students to take these classes in the first place. But if you could have a class that was mostly asynchronous but with some previously scheduled synchronous meetings as a part of the mix, well, that might be a good thing. I’ve tried to teach hybrid classes in the past that approach this, though I think Zoom might make this a lot easier in all kinds of ways.

And I’m not a complete Zoom hater. I started using it (or Google Meet) last semester in my online classes for one-on-one conferences, and I think it worked well for that. I actually prefer our department meetings on Zoom because it cuts down on the number of faculty who just want to pontificate about something for no good reason (and I should note I am very, very much that kind of faculty member, at least once in a while). I’ve read faculty justifying their use of Zoom based on what they think students want, and maybe that turns out to be true too.

So, what I’m imagining here is another snowball sample survey of faculty (maybe students as well) about their use of Zoom. I’d probably continue to focus on small writing classes, both because that’s my field and because of different ideas about what teaching means in different disciplines. As was the case with the “laptop bans are bullshit” project, I think I’d want to continue to focus on attitudes about online teaching generally and Zoom in particular, mainly because I don’t have the resources or skills as a researcher to do something like an experimental design that compares the effectiveness of a Zoom lecture versus an f2f one versus an asynchronous discussion on a topic– though as I type that, I think that could be a pretty interesting experiment. Assuming I could get folks to respond, I’d also want to use the survey to recruit participants for one-on-one interviews, which I think would produce more revealing and relevant data, at least for the basic questions I have now:

  • Why did you decide to use a lot of Zoom and do things synchronously?
  • What would you do differently next time?

What do you think, is this an idea worth pursuing?

Learning how to write is like learning how to roast a chicken. And vice-versa

I tried a new way to roast a chicken the other night, closely resembling this “Herbed Faux-tisserie Chicken and Potatoes” recipe from Bon Appétit. I’ve roasted a chicken with one recipe or another hundreds of times, but experimenting with a different recipe got me thinking about how learning to cook a simple meal suitable for sharing with others is like learning how to write. And vice-versa.

First, both are things that can be learned and/or taught. I think a lot of people– particularly people who don’t think they can cook or write– believe you either “have it” or you don’t. I’ve met lots of struggling students who have convinced themselves of this about writing, and I’ve also met a lot of creative writing types (from my MFA days long ago and into the present) who ought to know better but still believe this in a particularly naive way.

I believe everyone who manages to get themselves admitted to a college or university can learn from (the typically required) writing classes how to write better and also how to write well enough to express themselves to readers in college classes and beyond. I also believe that everyone with access to some basic tools– I’m thinking here of pots and pans, a rudimentary kitchen, pantry items, not to mention the food itself– can learn how to cook a meal they could serve to others.  Learning how to both write and cook might be more difficult for some people than others and the level of success different writers and cooks can reach will vary (and I’ll come back to this point), but that’s not the same thing as believing some  people “just can’t” cook or write.

Second, I think people who doubt their potential as cooks or writers make things more complicated than necessary, mainly because they just want to skip to the meal or completed essay. Trust the process, take your time, and go through the steps. If an inexperienced writer (and I’m thinking here of students in a class like first year writing) starts with something relatively simple and does the pre-writing, the research, the drafting, the peer review, all the stuff we do and talk about in contemporary writing classes, then they will be able to successfully complete that essay. If an inexperienced cook starts with something relatively simple– say roasting a chicken– and follows a well-written recipe and/or some of the many cooking tutorials on YouTube, then they will be able to roast that chicken.

Third, both writing and cooking take practice and self-reflection in order to improve. This seems logical enough since this is how we improve at almost anything– sports or dancing or painting or writing or cooking. But one of the longstanding challenges in writing pedagogy is “transference,” which is the idea that what a student learns in a first year writing class helps that student in other writing classes and situations.  Long and complex story short, the research suggests  this doesn’t work as well as you might think, possibly because students too often treat their required composition course as just another hoop, and possibly because teachers have to do more to make all this visible to students. Whether or not it gets taken up by students or conveyed by teachers, the goal of any college course (writing and otherwise) is to get better at something.

In my experience, the way this works with food is that when you’re first trying to learn how to roast a chicken, you do it for yourself (or close family and/or roommates who basically have the choice to eat what you cook or to not eat anything at all), and you make note of what you would do differently the next time you try to roast that chicken. Next time, I’ll cook it longer or shorter or with more salt or to a different temperature or whatever. A lot of my recipes have notes I’ve added for next time. Then the next time, you make different adjustments; repeat, make different adjustments; and before you know it, you can roast a chicken confidently enough to invite over guests for a dinner party. Also, the trial-and-error approach to following a recipe for chicken helps inform other recipes and foods, so you can serve those guests some mashed potatoes and green beans with that chicken, maybe even a little gravy.

Both writing and cooking involve skills and practices which build on each other and that then allow you to both improve on those basic skills and also to develop more advanced skills and practices. It was not easy for me to truss a chicken the first time I did it; now it’s no big deal. Writing a good short summary of an article and incorporating that into a short critique is very hard for a lot of first year writing students. But keep practicing and it becomes second nature. I routinely have students in my first year writing class who gasp when I tell them the first essay assignment should be around five pages because they have never written anything that long in high school. By the end of the semester, it’s no big deal.

Finally, there are limits to teaching, and not everyone can succeed at becoming a “great” writer or cook. Never say never, of course, but I do not think there is much chance my cooking or recipes will ever be compared to the likes of Julia Child or Thomas Keller, nor do I think my writing is going to be assigned reading for generations to come. I don’t like words like “gifted” or “genius” because people aren’t better at things because of something magical. But for the top 1% of writers/cooks/athletes/actors/etc., there is something. At the same time, it’s also extremely clear that the top 1% of writers/cooks/whatever get to that level through hard work and obsession. It’s a feedback loop.

So for example: it’d be silly to describe myself as a “gifted” writer, but I am good at it and I have always had a knack for it.  I’ve been praised for my writing since I was in grade school (though I did fail handwriting, but that’s another story) and it isn’t surprising to me that I’ve ended up in this profession and I’m still writing. That praise and reward motivates me to continue to like writing and to work to improve at it. I spend a lot of time revising and changing and obsessing and otherwise fiddling around with things I write (I have revised this post about a dozen or more times since I started it a week ago).

In any event, even if I have some kind of “gift,” it ends up being just one part of a chicken vs. egg argument. Being praised for being a good writer motivates me to write more; writing more improves my writing and earns me praise as I get better. A knack alone is not enough for anything, including writing or cooking.

Oh, and for what it’s worth: I thought that recipe was just okay. I liked the idea of the rotisserie-like spice rub and I can see doing that again, maybe putting it on a few hours or even the day before. But cooking at 300 degrees (instead of starting it at, say, 425 and then dropping it back to 350 after about 20 minutes) meant not a whole lot of browning and kind of rubbery skin.

A post about an admittedly not thought out idea: very low-bar access

The other day, I came across this post on Twitter from Derek Krissoff, who is the director of the West Virginia University Press:

I replied to Derek’s Tweet “Really good point and reminds me of a blog post I’ve been pondering for a long time on not ‘Open Access’ but something like ‘Very Low Bar Access,'” and he replied to my reply “Thanks, and I’d love to see a post along those lines. It’s always seemed to me access is best approached as a continuum instead of a binary.” (By the way, click on that embedded Twitter link and you’ll see there are lots of interesting replies to his post).

So, that’s why I’m writing this now.

Let me say three things at the outset: first, while I think I have some expertise and experience in this area, I’m not a scholar studying copyright or Open Educational Resources (OER) or similar things. Second, this should in no way be interpreted as me saying bad things about Parlor Press or Utah State University Press. Both publishers have been delightful to work with and I’d recommend them to any academic looking for a home for a manuscript– albeit different kinds of homes. And third, my basic idea here is so simple it perhaps already exists in academic publishing and I just don’t know better, and I know something like it exists outside of academia with the many different self-publishing options out there.

Here’s my simple idea: instead of making OER/open-sourced publications completely free and open to anyone (or any ‘bot) with an internet connection, why not publish materials for a low cost, say somewhere between $3 and $5?

The goal is not to come up with a way for writers and publishers to “make money” exactly, though I am not against people being paid for their work nor am I against publishers and other entities being compensated for the costs of distributing “free” books. Rather, the idea is to make access easy for likely interested readers while maintaining a modest amount of control as to how a text travels and is repurposed on the internet.

I’ve been kicking this idea around ever since the book I co-edited Invasion of the MOOCs was published in 2014.  My co-editor (Charlie Lowe) and I wanted to simultaneously publish the collection in traditional print and as a free PDF, both because we believed (still do, I think) in the principles of open access academic publishing and because we frankly thought it would sell books. We also knew the force behind Parlor Press, David Blakesley (this Amazon author page has the most extensive bio, so that’s why I’m linking to that), was committed to the concept of OER and alternatives to “traditional” publishing– which is one of the reasons he started Parlor Press in the first place.

It’s also important to recognize that Invasion of the MOOCs was a quasi-DIY project. Among other things, I (along with the co-authors) managed most of the editing work of the book, and Charlie managed most of the production aspects, paying a modest price for the cover art and doing the typesetting and indexing himself thanks to his knowledge of Adobe’s InDesign. In other words, the up-front costs of producing this book from Parlor Press’ point of view were small, so there was little to lose in making it available for free.

Besides the fact that the book was about a timely topic when it came out, I think distributing it free electronically helped sell the print version. I don’t know exactly how many copies it has sold, but I know it has ended up in libraries all over the world. I’m pretty sure a lot (if not most) of the people and libraries who went ahead and bought the print book did so after checking out the free PDF. So giving away the book did help, well, sell books.

But in hindsight, I think there were two problems with the “completely free” download approach. First, when a publisher/writer puts something like a PDF up on the web for any person or any web crawling ‘bot to download, they get a skewed perspective on readership. Like I said, Invasion of the MOOCs has been downloaded thousands of times– which is great, since I can now say I edited a book that’s been downloaded thousands of times (aren’t you impressed?) But the vast majority of those downloads just sat on a user’s hard drive and then ended up in the (electronic) trash after never being read at all. (Full disclosure: I have done this many times). I don’t know if this is irony or what, but it’s worth pointing out this is exactly what happened with MOOCs: tens of thousands of would-be students signed up and then never once returned to the course.

Second and more important, putting the PDF up there as a free download means the publisher/writer loses control over how the text is redistributed. I still have a “Google alert” that sends me an email whenever it comes across a new reference to Invasion of the MOOCs on the web, and most of the alerts I have gotten over the years are harmless enough. The book gets redistributed by other OER sites, linked to on bookmarking sites like Pinterest, and embedded into SlideShare slide shows.

But sometimes the re-publishing/redistribution goes beyond the harmless and odd. I’ve gotten Google alerts to the book linked to/embedded in web sites like this page from Ebook Unlimited, which (as far as I can tell) is a very sketchy site where you can sign up for a “free trial” to their book service. In the last couple of years, most of the Google alert notices I’ve received have been broken links, paper mill sites, “congratulations you won” pop-up/virus sites, and similarly weirdo sites decidedly not about the book I edited or anything about MOOCs (despite what the Google alert says).

In contrast, the book I have coming out very soon, More Than A Moment, is being published by Utah State University Press and will not be available for a free download– at least not for a while. On the positive side of things, working with USUP (which is an imprint of the University Press of Colorado) means this book has had a more thorough (and traditional) editorial review, and the copyediting, indexing, and typesetting/jacket design have all been done by professionals. On the downside, the lack of a free-to-download version will mean this book will probably end up having fewer readers (thus less reach and fewer sales), and, as is the case with most academic books, I’ve had to pay for some of the production costs with grant money from EMU and/or out of my own pocket.

These two choices put writers/publishers in academia in a no-win situation. Open access publishing is a great idea, but besides the fact that nothing is “free” in the sense of having no financial costs associated with it (even maintaining a web site for distributing open access texts costs some money), it becomes problematic when a free text is repurposed by a bad actor to sell a bad service or to get users to click on a bad link. Traditional print publishing costs money and necessarily means fewer potential readers. At the same time, the money spent on publishing these more traditional print publications does show up in a “better” product, and it does offer a bit more reasonable control of the book. Maybe I’m kidding myself, but I do not expect to see a Google alert for the More than a Moment MOOC book lead me to a web site where clicking on the link will sign me up for some service I don’t want or download a virus.

So this is where I think “very low-bar access” publishing could split the difference between the “completely free and online” and the “completely not free and in print” options in academic publishing. Let’s say publishers charged as small of a fee as possible for downloading a PDF of the book. I don’t know exactly how much, but to pay the costs for running a web site capable of selling PDFs in the first place and for the publisher/writer to make at least a little bit of money for their labor, I’d guess around $3 to $5.

The disadvantage of this is (obviously) any amount of money charged is going to be more than “free,” and it is also going to require a would-be reader to pass through an additional step to pay before downloading the text. That’s going to cut down on downloads A LOT. On the other hand, I think it’s fair to say that if someone bothers to fill out the necessary online form and plunks down $5, there’s a pretty good chance that person is going to at least take a look at it. And honestly, 25-100 readers/book skimmers is worth more to me than 5,000 people who just download the PDF. It’s especially worth it if this low-bar access proves to be too much for the dubious redirect sites, virus makers, and paper mill sites.

I suppose another disadvantage of this model is that if someone can download a PDF version of an academic book for $5 to avoid spending $20-30 (or, in some cases, a lot more than that) for the paper version, then the publisher will sell fewer paper books. That is entirely possible. The opposite is also possible though: the reader spends $5 on the PDF, finds the book useful/interesting, and then that reader opts to buy the print book. I do this often enough, especially with texts I want/need for teaching and scholarship.

So, there you have it, very low-bar access. It’s an idea– maybe not a particularly original one, maybe even not a viable one. But it’s an idea.

The “Grievance Studies” Hoax and the IRB Process

From Inside Higher Ed comes “Blowback Against a Hoax.” The “hoax” in question happened last fall, and it was described in a very long read on the web site Areo, “Academic Grievance Studies and the Corruption of Scholarship.” In a nutshell, three academics wrote some clearly ridiculous articles and sent them to a variety of journals to see if they could get them published. Their results garnered a lot of MSM attention (I think there were articles in The Wall Street Journal and The New York Times). And, judging from a quick glance at the shared Google Drive folder for this project, it is very clear that the authors (James A. Lindsay, Peter Boghossian, and Helen Pluckrose) were trying to “expose” and (I’d argue) humiliate the academics who they believe publish (or refuse to publish) certain kinds of scholarship because of “political correctness.”

Well, now Boghossian (who is an assistant professor at Portland State) is in trouble with that institution because he didn’t follow the rules for dealing with human subjects, aka IRB (Institutional Review Board) approval.

Read the article, of course, but I’d also recommend watching the video the group posted in their defense on January 5. I think it says a lot about the problem here– and, IMO, Boghossian and his colleagues do not exactly look like they knew what they were doing:


(I posted what follows here– more or less– as a comment on the article which might or might not show up there, but I thought I’d copy and paste it here too):

It’s a fascinating problem and one I’m not quite sure what to do with. On the one hand, I think the Sokal 2.0 folks engaged in a project designed to expose some of the problems with academic publishing, a real and important topic for sure. On the other hand, they did it in a way that was kind of jerky and also in a way that was designed to embarrass and humiliate editors and reviewers for these journals.

The video that accompanies this article is definitely worth watching, and to me it reveals that these people knew very VERY little about IRB protocols. Now, I’m not an expert on all the twists and turns of IRB, but I do teach a graduate-level course in composition and rhetoric research methods (I’m teaching it this semester), I’m “certified” to conduct human subject research, I teach my students how to be certified, I regularly interact with the person who is in charge of IRB process, and I also have gone through the process with a number of my own projects. In my field, the usual goal is to be “exempt” from IRB oversight: in other words, the usual process in my field is to fill out the paperwork and explain to the IRB people “hey, we’re doing this harmless thing but it involves people and we might not be able to get consent, is that okay” and for their response to be “sure, you can do that.”

So the first mistake these people made was they didn’t bother to tell their local IRB, I presume because these researchers had never done this kind of thing before, and, given their academic backgrounds, they probably didn’t know a whole lot about what does or doesn’t fall under IRB. After all, the three folks who did this stuff have backgrounds in math, philosophy, and “late medieval/early modern religious writing by and about women,” not exactly fields where learning about IRB and the rules for human subjects is a part of graduate training.

If these folks had followed the rules, I have no idea what the Portland State IRB would have said about this study. The whole situation will make for an interesting topic of discussion in the research methods course I’m teaching this term and a really interesting topic of discussion for when the local director of IRB visits class. But I do know three things:

  • It is possible to put together an IRB-approved study that doesn’t require participant consent, provided you explain why it wouldn’t be possible to get consent and/or the risk to participants is minimal.
  • If you put together a study where you purposefully deceive subjects (like sending editors and reviewers fake scholarship to try to get them to publish it), then that study is going to be supervised by the IRB. And if that study potentially embarrasses or humiliates its subjects and thus causes them harm (which, as far as I can tell from what I’ve read, was actually the point of this project), then there’s a good chance the IRB folks would not allow that project to continue.
  • Saying something along the lines of “We didn’t involve the IRB process because they probably wouldn’t have approved anyway” (as they more or less say in this video, actually) is not an acceptable excuse.

I don’t think Boghossian should lose his job. But I do think he should apologize and, if I was in a position of power at Portland State, I’d insist that he go through the IRB training for faculty on that campus.

Three thoughts on the “Essay,” assessing, and using “robo-grading” for good

NPR had a story on Weekend Edition last week, “More States Opting To ‘Robo-Grade’ Student Essays By Computer,” that got some attention from other comp/rhet folks, though not as much as I thought it might. Essentially, the story is about the use of computers to “assess” (really “rate,” but I’ll get to that in a second) student writing on standardized tests. Most composition and rhetoric scholars think this software is a bad idea. I don’t think they’re wrong, though I do have three thoughts.

First, I agree with what my friend and colleague Bill Hart-Davidson writes here about essays, though this is not what most people think “essay” means. Bill draws on the classic French origins of the word, noting that an essay is supposed to be a “try,” an attempt and often a wandering one at that. Read any of the quite old classics (de Montaigne comes to mind, though I don’t know his work as well as I should) or even the more modern ones (E.B. White or Joan Didion or the very contemporary David Sedaris) and you get more of a sense of this classic meaning. Sure, these writers’ essays are organized and have a point, but they wander toward it, and they are presented (presumably after much revision) as if the writer were discovering the point along with the reader.

In my own teaching, I tend to use the term project to describe what I assign students to do because I think it’s a term that can include a variety of different kinds of texts (including essays) and other deliverables. I hate the far too common term paper because it suggests writing that is static, boring, routine, uninteresting, and bureaucratic. It’s policing, as in “show me your papers” when trying to pass through a border. No one likes completing “paperwork,” but it is one of those necessary things grown-ups have to do.

Nonetheless, for most people– including most writing teachers– the terms “essay” and “paper” are synonymous. The original meaning of essay has been replaced by the school meaning of essay (or paper– same thing). Thus we have the five paragraph form, or even this comparatively enlightened advice from the Bow Valley College Library and Learning Commons, one of the first links that came up in a simple Google search. It’s a list (five steps, too!) for creating an essay (or paper) driven by a thesis and research. For most college students, papers (or essays) are training for white collar careers, a way to learn how to complete required office paperwork.

Second, while it is true that robo-grading standardized tests does not help anyone learn how to write, the most visible aspect of writing pedagogy to people who have no expertise in teaching (beyond experience as a student, of course) is not the teaching but the assessment. So in that sense, it’s not surprising this article focuses on assessment at the expense of teaching.

Besides, composition and rhetoric as a field is very into assessment, sometimes (IMO) at the expense of teaching and learning about writing. Much of the work of Writing Program Administration and scholarship in the field is tied to assessment– and a lot (most?) of comp/rhet specialists end up involved in WPA work at some point in their careers. WPAs have to consider large-scale assessment issues to measure outcomes across many different sections of first year writing, and they usually have to mentor instructors on small-scale assessment– that is, how to grade and comment on all those student essays, er, papers, in a way that is both useful to students and that does not take an enormous amount of time. There is a ton of scholarship on assessment– how to do it, what works or doesn’t, the pros and cons of portfolios, etc. There are books and journals and conferences devoted to assessment. Plenty of comp/rhet types have had very good careers as assessment specialists. Our field loves this stuff.

Don’t get me wrong– I think assessment is important, too. There is stuff to be learned (and to be shown to administrators) from these large scale program assessments, and while the grades we give to students aren’t always an accurate measure of what they learned or how well they can write, grades are critical to making the system of higher education work. Plus students themselves are too often a major part of the problem of over-assessing. I am not one to speak about the “kids today” because I’ve been teaching long enough to know students now are not a whole lot different than they were 30 years ago. But one thing I’ve noticed in recent years– I think because of “No Child Left Behind” and similar efforts– is the extent to which students nowadays seem puzzled about embarking on almost any writing assignment without a detailed rubric to follow.

But again, assessing writing is not the same thing as fostering an environment where students can learn more about writing, and it certainly is not how writing worth reading is created. I have never read an essay that mattered to me that was written by someone closely following the guidance of a typical assignment rubric. It's really easy as a teacher to forget that, especially while trying to keep the wheels of a class turning smoothly with the help of tools like rubrics, so I have to remind myself about it all the time.

The third thing: as long as writing teachers believe more in essays than in papers, and as long as they are more concerned with creating learning opportunities than sites for assessment, "robo-grader" technology of the sort described in this NPR story is kind of irrelevant– and it might even be helpful.

I blogged about this several years ago here as well, but it needs to be emphasized again: this software is actually pretty limited. As I understand it, software like this can rate/grade the response to a specific essay question– "in what ways did the cinematic techniques of Citizen Kane revolutionize the way we watch and understand movies today?"– but it is not very good at more qualitative questions– "did you think Citizen Kane was a good movie?"– and it is not very good at all at rating/grading pieces of writing with almost no constraints, as in "what's your favorite movie?"

Furthermore, as the NPR story points out, this software can be tricked. Les Perelman has been demonstrating for years how these robo-graders can be fooled, though I have to say I am a lot more impressed with the ingenuity shown by some students in Utah who found ways to "game" the system: "One year… a student who wrote a whole page of the letter "b" ended up with a good score. Other students have figured out that they could do well writing one really good paragraph and just copying that four times to make a five-paragraph essay that scores well. Others have pulled one over on the computer by padding their essays with long quotes from the text they're supposed to analyze, or from the question they're supposed to answer." The raters keep "tweaking" the code to prevent these tricks, but of course, students will keep trying new tricks.
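To make the logic of those tricks a bit more concrete, here is a minimal sketch– entirely hypothetical, in Python, and not based on how any actual robo-grader works– of a scorer that rewards surface features like length, sentence size, and transition words. The essay text, the feature weights, and the function name are all made up for illustration; the point is just that copying one good paragraph several times inflates exactly these kinds of proxy measures without adding any new thinking.

```python
# Hypothetical illustration only: a toy "robo-grader" that scores essays on
# surface features. Real systems are more sophisticated, but padding tricks
# tend to inflate these same kinds of proxy measures.
import re

TRANSITIONS = {"however", "therefore", "moreover", "furthermore", "consequently"}

def toy_score(essay: str) -> float:
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    if not words or not sentences:
        return 0.0
    length_pts = min(len(words) / 500, 1.0) * 40                 # reward sheer length
    fluency_pts = min((len(words) / len(sentences)) / 20, 1.0) * 30  # reward longer sentences
    transition_pts = min(sum(w in TRANSITIONS for w in words), 5) * 6  # reward connectives
    return length_pts + fluency_pts + transition_pts             # max 100

good_paragraph = (
    "Citizen Kane changed how audiences understand film. However, its deep-focus "
    "photography and fractured chronology also demanded a more active viewer. "
    "Therefore, later directors borrowed these techniques to complicate point of view."
)

honest_essay = good_paragraph
padded_essay = " ".join([good_paragraph] * 5)  # the "copy one paragraph five times" trick

print(f"one honest paragraph: {toy_score(honest_essay):.1f}")
print(f"same paragraph x5:    {toy_score(padded_essay):.1f}")
```

Run it and the padded version scores far higher than the single honest paragraph, even though a human reader would immediately see it says nothing new. That, in miniature, is what the Utah students figured out.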

I have to say I have some sympathy with one of the arguments made in this article that if a student is smart enough to trick the software, then maybe they deserve a high rating anyway. We are living in an age in which it is an increasingly important and useful skill for humans to write texts in a way that can be “understood” both by other people and machines– or maybe just machines. So maybe mastering the robo-grader is worth something, even if it isn’t exactly what most of us mean by “writing.”

Anyway, my point is it really should not be difficult at all for composition and rhetoric folks to push back against the use of tools like this in writing classes because robo-graders can't replicate what human teachers and students can do as readers: be an actual audience. In that sense, this technology is not really all that much different than stuff like spell-checkers and grammar-checkers, and I have been doing this work long enough to know that there were plenty of writing teachers who thought those tools were the beginning of the end, too.

Or, another way of putting it: I think the kind of teaching (and teachers) that can be replaced by software like this is pretty bad teaching.

Who Reads Academic Writing? Who Reads Anything?

I’ve been sprucing up stevendkrause.com lately, mainly because I’ve got some free time during the summer recess and because it’s a distraction from working on “the MOOC book,” aka MOOCs in Context, which I am hoping will be out in print (and maybe out electronically?) about a year from now. It’s been interesting talking to some non-academic-types about this book. A couple of times folks have asked “Who do you think is going to read this?” As far as I can tell, no one has intended any malice with this question; it’s honest curiosity. I typically have answered “Mostly people interested in MOOCs or distance education, I’d guess. Other academics. So I’d guess a few hundred people in the world, maybe a bit more than that.”

Non-academics who know enough about the role of publishing in tenure and promotion in higher education– and who also know that I'm a full professor unlikely to take another job at this point in my career– sometimes then ask "Well, then why bother?" It's a question I take seriously. We've all heard before that most academic writing is never actually read, even by other academics. That is one of the reasons why I want to try to return to writing some fiction, why I'd like to write more commentaries like this one I published in Inside Higher Ed a while ago, and why I wouldn't mind trying my hand at some "popular" non-fiction writing.

Though actually, Arthur G. Jago recently published a commentary in The Chronicle of Higher Education that has an interesting take on this claim, "Can It Really Be True That Half of Academic Papers Are Never Read?" It's an accessible read that I'm thinking of assigning for first year writing in the fall. All of us are guilty of accepting certain "truthy"-sounding claims without any actual evidence, and Jago traces in detail one of the most common of those claims in academia: "At least one study found that the average academic article is read by about 10 people, and half of these articles are never read at all." That specific sentence came from another CHE op-ed piece published recently, and it does sound truthy to me. Jago doggedly traces the origins of this– what study is being referenced here?– and while he does turn up a number of studies that kind of make this argument, he concludes we will probably never find the "bibliographic equivalent of 'patient zero.'"

The more you poke at this question of how often the scholarship academics create is actually read by anyone, the more difficult an answer becomes. For example, what does "read" mean? The easiest way to quantify this in terms of scholarship is citation, but a) just because someone cites something doesn't mean they have "read" it completely or with a great level of care, and b) just because something hasn't been cited doesn't mean it hasn't been read.

Certainly putting scholarship online means it reaches more readers, especially if we apply a liberal definition of "read" to include "clicked on a link." I've been linking to versions of conference presentation notes/scripts/slides on the online version of my CV for a while now, and when I compare the number of people who were actually present at these presentations with the number of clicks my materials get, it's not even close. I've had my dissertation up online since 1996; it has received thousands of hits over the years and has been cited a few times. That's more attention than I am sure the bound version has received in 22 years. (Note to self: I ought to take a road trip to BGSU one of these days to see if I can find it in the library.) But of course a click does not translate into a reading.

The other thing that occurs to me: what's the evidence that academic articles and books are read any less frequently than any other kind of article or book? I remember seeing a speech given by Lee Smith probably close to 30 years ago in Richmond (Lee was my MFA thesis advisor) where she quoted a statistic (perhaps an equally unverified and truthy claim, but still) that the average number of books read per year by Americans was zero, since the vast majority of Americans simply do not read books at all. I knew some people just starting out as fiction writers way back when who had books coming out with major New York publishers, and the press run was only going to be something like 1,000 copies. As far as I can tell, the trade book publishing business model is essentially the same as venture capital investing in the tech industry: publishers "bet" on hundreds of different authors hoping that one or two pay off with successful books. The rest? Well, that's what remainder stores are for.

Which is to say two things, I guess:

  • The main problem with academic publishing isn’t exactly that “nobody” reads it. Rather, the main problem with academic publishing is there are too many academics who only write and publish because they have to in order to get tenured and promoted.
  • While I’d love my MOOC book (and any other book I might end up writing) to sell a zillion copies to make me rich and admired and famous and all of that, the reality is that’s not a particularly good reason to write. So for me to keep doing this kind of thing the answer to the “why bother” question has to be “because I want to.”