Marc Watkins is right; my flavor of AI skepticism

A “Paying Attention to AI” Substack post…

The other day, I read Marc Watkins's excellent Substack post "AI Is Unavoidable, Not Inevitable," and I would strongly encourage you to take a moment to do the same. Watkins begins by noting that he is "seeing a greater siloing among folks who situate themselves in camps adopting or refusing AI." What follows is not exactly a direct response to the refusing folks, but it's pretty close, and I find myself agreeing with Watkins entirely. As he says, "To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of 'it's very complicated.'"

Like I said, you really should read the whole thing. But I will share this long quote that is so on point:

Many of us have wanted to take a path of actively resisting generative AI’s influence on our teaching and our students. The reasons for doing so are legion—environmental, energy, economic, privacy, and loss of skills, but the one that continually pops up is not wanting to participate in something many of us fundamentally find unethical and repulsive. These arguments are valid and make us feel like we have agency—that we can take an active stance on the changing landscape of our world. Such arguments also harken back to the liberal tradition of resisting oppression, protesting what we believe to be unjust, and taking radical action as a response.

But I do not believe we can resist something we don’t fully understand. Reading articles about generative AI or trying ChatGPT a few times isn’t enough to gauge GenAI’s impact on our existing skills. Nor is it enough to rethink student assessments or revise curriculum to try and keep pace with an ever-changing suite of features.

To meaningfully practice resistance of AI or any technology requires engagement. As I’ve written previously, engaging AI doesn’t mean adopting it. Refusing a technology is a radical action and we should consider what that path genuinely looks like when the technology you despise is already intertwined with the technology you use each day in our very digital, very online world.

Exactly. Teachers of all sorts, but especially those of us who are also researchers and scholars, need to engage with AI well enough to know what we are either embracing or refusing. Refusing without engaging is, at best, willful ignorance.

AI is difficult to compare to previous technologies (as Watkins says, AI defies analogies), but I do think the emergence of AI now is kind of like the emergence of computers and the internet as tools for writing a couple of decades ago. A pre-internet teacher could still refuse that technology by insisting students take notes by hand, hand in handwritten papers, and take proctored timed exams completed on paper forms. When I started at EMU in 1998, I still had a few very senior colleagues who taught like this, who never touched their ancient office computers, who refused to use email, etc. But try as they might, that pre-internet teacher who required their students to hand in handwritten papers did not make computers and the internet disappear from the world.

It's not quite the same now with AI as it was with the internet back then, because I don't think we're at the point where we can assume "everyone" routinely uses AI tools all the time. This is why I, for one, am quite happy that most universities have not rolled out institutional policies on AI use in teaching and scholarship; it's still too early for that. I've been experimenting with incorporating AI into my teaching for all kinds of different reasons, but I understand and respect the choices of colleagues who do not allow their students to use AI. The problem, though, is that refusing AI does not make it disappear from students' lives outside of class, or even within that class. After all, if a student uses AI effectively as a tool, not just to crudely cheat but to help learn the subject or to help with the writing, there is no way for that AI-forbidding professor to tell.

Again, engaging with AI (or any other technology) does not mean embracing, using, or otherwise "liking" it. I spent the better part of the 2010s studying and publishing about MOOCs, and among many other things, I learned that there are some things MOOCs can do well and some things they cannot. But I never thought of my blogging and scholarship as endorsing MOOCs, certainly not as a valid replacement for in-person or "traditional" online courses.

I think that's the point Watkins is trying to make, and for me, that's what academics do: we're skeptics, especially of things based on wild and largely unsubstantiated claims. As Watkins writes, "… what better way to sell a product than to convince people it can lead to both your salvation and your utter destruction? The utopia/dystopia narratives are just two sides of a single fabulist coin we all carry around with us in our pockets about AI."

This is perhaps a bad transition, but thinking about all this reminded me of Benjamin Riley's Substack post back in December, "Who and What comprise AI Skepticism?" This is one of those "read it if you want to get into the weeds" sorts of posts, but the very short version: Casey Newton, a well-known technology journalist, wrote that he thought there are only two camps of AI skepticism: AI is real and dangerous, and AI is fake and sucks. Well, A LOT of prominent AI experts and writers disputed Newton's argument, including Riley. What Riley does in his post is lay out his own taxonomy of nine different categories of AI skepticism, including one he calls the "Sociocultural Commentator Critics," or "the neo-Luddite wing," which would include AI refusers.

Go and check it out to see the whole list, but I would describe my skepticism as most like the "AI in Education Skeptics" and the "Technical AI Skeptics" categories, along with a touch of the "Skeptics of AI Art and Literature" category. Riley says AI in Education Skeptics are "wary of yet another ed-tech phenomena that over-hypes and under-delivers on its promises." I think we all felt the same wariness about ed-tech over-hype with MOOCs.

Riley's Technical AI Skeptics are science-types, but what I identify with is their exploring and exposing of AI's limitations. AI failures are at least as interesting to me as AI successes, and they make me question all of these claims about AI passing various tests or whatever. AI can do no wrong in controlled experiments in much the same way that self-driving cars do just fine on a closed course in clear weather. But just as that car doesn't do so great driving itself through a construction zone or a snowstorm, AI isn't nearly as capable outside of the lab.

And I say a touch of the Skeptics of AI Art and Literature because while I don't have a problem with people using AI to make art or to write things, I do think that "there is something essential to being human, to being alive, that we express through art and writing." Actually, this is one of my sources of "cautious optimism" about AI: since it isn't that good at the kind of human things we teach directly and indirectly in the humanities, maybe there's a future in these disciplines after all.

I'll add two other reasons why I'm skeptical about AI. First, I wonder about the business model. While this is not exactly my area of expertise, I keep reading pieces by people who do know what they're talking about raising the same questions about where the "return on investment" is going to come from. The emergence of DeepSeek matters less for its technical capabilities than for how it further disrupts those business plans.

Second, I am skeptical about how disruptive AI is going to be in education. It's fun and easy to talk with AI chatbots, and they can be helpful for some parts of the writing process, especially brainstorming, feedback on a draft, proofreading, and so forth. There might be some promise that today's AI will enable useful computer-assisted instruction tools that go beyond the "drill and kill" applications of the 1980s. And assuming AI continues to develop and mature into a truly general-purpose technology (like electricity, automobiles, the internet, etc.), of course it will change how everything works, including education. But beyond the fact that I don't think AI is ever going to be good enough to take humans out of the loop, I don't think anyone is comfortable with an AI replacing a human teacher (or, for that matter, human physicians, airline pilots, lawyers, etc.).

If there is going to be a return on the trillion dollars these companies have sunk into this stuff, it ain't going to come from students using AI for schoolwork or from people noodling around with it for fun. The real potential for AI is in research, businesses, and industries that work with enormous data sets and that handle complex but routine tasks: coding, logistics, marketing, finance, the discovery of new proteins or novel building materials, and anything that involves making predictions from a large database.

Of course, the fun (and scary and daunting!) part of researching AI and predicting its future is that everyone is probably mostly wrong, but some of us might have a chance of being right.
