Is AI Going to be “Something” or “Everything?”

Way back in January, I applied for release time from teaching for one semester next year– either a sabbatical or what’s called here a “faculty research fellowship” (FRF)– in order to continue the research I’ve been doing about teaching online during Covid. This is work I’ve been doing since fall 2020, including a Zoom talk at a conference in Europe, a survey I ran for about six months, and, from that survey, interviews with a bunch of faculty about their experiences. I’ve gotten a lot out of this work already: a couple of conference presentations (albeit in the kind of useless “online/on-demand” format), a website (which I had to code myself!), an article, and, just last year, one of those FRFs.

Well, a couple of weeks ago, I found out that I will not be on sabbatical or FRF next year. My proposal, which sought time to code and analyze all of the interview transcripts I collected last year, was turned down. I am not complaining about that: these awards are competitive, and I’ve been fortunate enough to receive several of them before, including one for this research. But not getting release time is making me rethink how much I want to continue this work, or whether it is time for something else.

I think studying how Covid impacted faculty attitudes about online courses is important and worth doing. But it is also looking backwards, and it feels a bit like an autopsy or one of those commissioned reports. And let’s be honest: how many of us want to think deeply about what happened during the pandemic, recalling the mistakes that everyone already knows they made? A couple of years after the worst of it, I think we all have a better understanding now of why people wanted to forget the 1918 pandemic.

It’s 20/20 hindsight, but I should have put together a sabbatical/research leave proposal about AI. With good reason, the committee that decides on these release time awards tends to favor proposals for things that are “cutting edge.” They also like to fund releases for faculty with book contracts who are finishing things up, which is why I have been lucky enough to secure these awards both at the beginning and at the end of my MOOC research.

I’ve obviously been blogging about AI a lot lately, and I have casually started amassing quite a number of links to news stories and other resources related to Artificial Intelligence in general, ChatGPT and OpenAI in particular. As I type this entry in April 2023, I already have over 150 different links to things without even trying– I mean, this is all stuff that just shows up in my regular diet of social media and news. I even have a small invited speaking gig about writing and AI, which came about because of a blog post I wrote back in December— more on that in a future post, I’m sure.

But when it comes to pursuing AI as my next “something” to research, I feel like I have two problems. First, it might already be too late for me to catch up. Sure, I’ve been getting some attention by blogging about it, and I had a “writing with GPT-3” assignment in a class I taught last fall, which I guess puts me at least somewhat closer to being current with this stuff in terms of writing studies. But I also know there are already folks in the field (and I know some of these people quite well) who have been working on this for years longer than I have.

Plus, a ton of folks are clearly rushing into AI research at full speed. Just the other day, the CWCON at Davis organizers sent around a draft of the program for the conference in June. The Call For Proposals they released last summer describes the theme of this year’s event as “hybrid practices of engagement and equity.” I skimmed the program to get an idea of the overall schedule and some of what people were going to talk about, and there were a lot of mentions of ChatGPT and AI, which makes me think a lot of people aren’t going to be talking about the CFP theme at all.

This brings me to the bigger problem I see with researching and writing about AI: it looks to me like this stuff is moving very quickly from being “something” to “everything.” Here’s what I mean:

A research agenda/focus needs to be “something” that has some boundaries. MOOCs were a good example of this. MOOCs were definitely “hot” from around 2012 to 2015 or so, and there was a moment back then when folks in comp/rhet thought we were all going to be dealing with MOOCs for first year writing. But even then, MOOCs were just a “something” in the sense that you could be a perfectly successful writing studies scholar (even someone specializing in writing and technology) and completely ignore MOOCs.

Right now, AI is a myriad of “somethings,” but this is moving very quickly toward “everything.” It feels to me like very soon (five years, tops), anyone who wants to do scholarship in writing studies is going to have to engage with AI. Successful (and even mediocre) scholars in writing studies (especially those specializing in writing and technology) are not going to be able to ignore AI.

This all reminds me a bit of what happened with word processing technology. Yes, this really was something people studied and debated way back when. In the 1980s and early 1990s, there were hundreds of articles and presentations about whether or not to use word processing to teach writing— for example, “The Word Processor as an Instructional Tool: A Meta-Analysis of Word Processing in Writing Instruction” by Robert L. Bangert-Drowns, or “The Effects of Word Processing on Students’ Writing Quality and Revision Strategies” by Ronald D. Owston, Sharon Murphy, and Herbert H. Wideman. These articles were both published in major journals in the early 1990s, and both try to answer the question of which approach is “better.” (By the way, most, but far from all, of these studies concluded that word processing is better in the sense that it helped students generate more text and revise more frequently. It’s also worth mentioning that a lot of this research overlaps with studies about the role of spell-checking and grammar-checking in writing pedagogy.)

Yet in my recollection of those times, this comparison between word processing and writing by hand was rendered irrelevant because everyone– teachers, students, professional writers (at least all but the most stubborn, as Wendell Berry declares in his now cringy and hopelessly dated short essay “Why I Am Not Going to Buy a Computer”)– switched to word processing software on computers to write. When I started teaching as a grad student in 1988, I required students to hand in typed papers, and I strongly encouraged them to write at least one of their essays with a word processing program. Some students complained because they had never been asked to type anything in high school. By the time I started my PhD program five years later in 1993, students all knew they needed to type their essays on a computer, generally with MS Word.

Was this shift a result of some research consensus that using a computer to type texts was better than writing texts out by hand? Not really, and obviously, there are still lots of reasons why people write some things by hand– a lot of personal writing (poems, diaries, stories, that kind of thing) and a lot of note-taking. No, everyone switched because everyone realized word processing made writing easier (but not necessarily better) in lots and lots of different ways, and that was that. Even in the midst of this panicky moment about plagiarism and AI, I have yet to read anyone seriously suggest that we make our students give up Word or Google Docs and require them to turn in handwritten assignments. So, as a researchable “something,” word processing disappeared because (of course) everyone everywhere who writes uses some version of word processing, which means the issue is settled.

One of the other reasons I’m using word processing scholarship as my example here is that both Microsoft and Google have made it clear that they plan on integrating their versions of AI into their suites of software– and that would include MS Word and Google Docs. This could be rolling out just in time for the start of the fall 2023 semester, maybe earlier. Assuming this is the case, people who teach any kind of writing at any level are not going to have time to debate whether AI tools are “good” or “bad,” and we’re not going to be able to study any sort of best practices either. This stuff is just going to be a part of the everything, and for better or worse, that means the issue will soon be settled.

And honestly, I think the “everything” of AI is going to impact, well, everything. It feels to me a lot like when “the internet” (particularly with the arrival of web browsers like Mosaic in 1993) became everything. I think the shift to AI is going to be that big, and it’s going to have as big of an impact on every aspect of our professional and technical lives– certainly every aspect that involves computers.

Who the hell knows how this is all going to turn out, but when it comes to what this means for the teaching of writing, as I’ve said before, I’m optimistic. Just as the field adjusted to word processing (and spell-checkers and grammar-checkers, and really just the whole firehose of text from the internet), I think we’ll be able to adjust to this new something-to-everything too.

As far as my scholarship goes, though: for reasons, I won’t be eligible for another release from teaching until the 2025-26 school year. I’m sure I’ll keep blogging about AI and related issues, and maybe that will turn into a scholarly project. Or maybe we’ll all be on to something entirely different in three years….

 
