My recent attempts at social media detoxing/dieting

Who hasn’t thought about ditching their social media accounts? Who hasn’t found themselves wasting way WAY too much time in some kind of nonsense online discussion? And then, in a brief moment of clarity after seemingly hours of fog, who hasn’t thought “this is starting to feel kinda toxic”?

I felt that tipping point a couple weeks ago when something happened on the WPA-L mailing list. I didn’t engage in the discussion there, but I did (rather foolishly) engage in too much of the back-channel conversation on Facebook, ultimately getting into that “why am I doing this toxic thing to myself?” kind of space.

A tangent/some unpopular thoughts about “that conversation” on WPA-L: first, I didn’t think it was so much an example of mansplaining as it was an example of what I described in my dissertation as an “immediate” rhetorical situation, the kind of miscommunication that happens in asynchronous electronic spaces (mailing lists, Twitter, Facebook, etc.) when the understanding of rhetor, audience, and message all become jumbled. I finished my dissertation in 1996, and one of the examples I have in chapter four is from a very similar (though not as gendered) discussion that went off the rails on ACW-L, a now defunct but similar listserv. But of course, raising this as a possible explanation of what was going on was impossible. Besides, the conversation turned into one about mansplaining anyway.

Second, I think the gender dynamics in composition/rhetoric are extremely complex. This is a field where there are more women than men, and it is a field where women occupy about the same number of positions of power as men in terms of being leaders, important scholars, high profile professors, and so forth.

Third, I think the discussion environment on the WPA-L list had been turning kind of bad for a while, maybe because of the rise of other social media platforms, maybe because of something else. I generally don’t agree with the likes of Bill Maher who have complained that college campuses have become too “politically correct” and can no longer tolerate any sort of divisive speakers or naughty comedians, but it does feel to me like there’s not a whole lot of room to stray too far from the party line on WPA-L anymore. And it also sure feels to me like the general toxicity of the Trump administration has poisoned everything, including what was a generally mild-mannered academic mailing list.
We are all being constantly beaten down and made brittle by this disaster of a human who we elected (sort of) president, and I am sure it will take us all years to get over this damage– if we can ever again feel “right” about trying to engage with people and ideas we don’t agree with. Let me put it this way: I was on WPA-L for a long time (20+ years?), and I do not think this would have happened during either the Obama administration or the Bush II administration.

Anyway, back to the toxicity: I decided I had had enough, and I needed to do something with how I’m engaging (and over-engaging) with social media.

So the first thing I did was sign off of WPA-L, after writing an email that I guess is easy to read as self-serving, but I was trying to be sincere in thanking the group for all I learned over the years. Maybe I left too early, maybe I stayed too long (a lot of the backchannel discussion on Facebook consisted of people saying stuff like “oh, I got off of that shit show of a mailing list years ago”), maybe I was part of the problem and it will be better after I’m gone. Though it’s still a public list and easy enough to check in on once in a while.

Then Facebook. I thought briefly about just chucking the whole thing, but I still like it and I feel like I need to keep more than a toe in it because of friends and family, people I know at EMU and in academia, and because I teach a lot of stuff about Facebook and social media. So I went through my “friends” and I decided that my minimal standard for continued Facebook “friendship” was people who I sorta/kinda knew well enough that if I were to run into them at a conference or something, I might recognize them and maybe even chat with them in a more or less friendly way. I went from about 650-700 down to about 460.

It was interesting culling that list. I don’t exactly know how the algorithms decide who gets listed where on my friend list, but I think it’s people who post most frequently/recently first, and then everyone else in decreasing order of connectivity. I think a lot of my now former Facebook friends abandoned their accounts a while ago, and there were three or four folks on my list who had actually passed away in the last few years. Interestingly, I’m now noticing posts from folks who I hadn’t seen posting in a long time, again I suppose because of how the algorithms decide what shows up where in my feed.

For Twitter, I’m kind of doing the opposite: I’m trying to read it a bit more, follow more people, and post/retweet more. Don’t get me wrong, I am well aware that Twitter is also kind of a cesspool, but I don’t know, it doesn’t feel quite as contentious? Maybe it’s the brevity of the form, maybe it’s because of who I follow or don’t follow? Maybe it’s because there are so many tweets (I’m following just over 760 tweeters/people/media sources) it feels a lot more like channel surfing than engaging in a discussion? Plus I find more of the links to things interesting, and a friend of mine told me about realtwitter.com, which (as far as I can tell) shows real time updates from the people I’m following– that is, it apparently skips past Twitter’s rankings and ads.

And Instagram is just fun. Instagram never pisses me off. Maybe I should just be doing Instagram and nothing else.

So we’ll see if this makes things less toxic-feeling. The next step (probably) will be to try to work harder at limiting my time spent in the social media soup.

Testing the Difference Between “Fake News” and “Unsubstantiated Reports” with Provenance and Plausibility

I’ve been thinking a lot about “fake news” versus “alleged” or “unsubstantiated reports” lately– heck, anyone who has been paying any attention to last week’s news about Donald Trump has surely been thinking about this too. And it’s not just Trump labeling BuzzFeed and CNN as sources for “Fake News;” it’s other “news” people like Chuck Todd and the mainstream/traditional media across the board— at least that’s how they responded to the claims about Trump in Russia when they first broke. Within twenty-four hours of that initial story, even the New York Times was reporting on it.

Trump is going to label anything that doesn’t support him as “fake news” or coming from “losers” or being “sad” or whatever, and maybe BuzzFeed shouldn’t have published something that was as “unsubstantiated” as the stuff that was in this report. The journalism ethics here are complicated, though I have to say I think the MSM response has less to do with the question of when it is proper to publish something and more to do with the “icky” factor of the alleged “golden shower” shows. BuzzFeed’s editor Ben Smith has been pretty smart about responding to the criticism– here’s a link to an interview he did on CNN. And once again, Teen Vogue has had excellent reporting/thought pieces on Trump, as in this piece “So You Read That Scandalous Report About Donald Trump and Russia– Now What?”

Anyway, in writing now about this, I’m not that interested in the ethical question of whether or not BuzzFeed should have published this in the first place. I’m more interested in playing around with/thinking about what sorts of strategies and processes any of us can use in evaluating these kinds of stories, and not just between something that is “fake” versus something that is “true,” but also between something that is “fake” versus something that is “alleged” or “unsubstantiated.” I think these are two different things and need to be treated differently: that is, something that is “fake” does not necessarily equal something that is “unsubstantiated,” and vice-versa. And as a rhetorician who has been influenced by a lot of postmodern/post-structural theories, this is also important to me because I kind of feel we’ve painted ourselves into a corner by the ways we have tended to academically approach “Truth.”

A simple example: in recent years, I’ve been very fond of showing a video called “In Defense of Rhetoric” that was put together by graduate students in Professional Communication at Clemson University in 2011. I think it does a very good job of explaining the basics of rhetoric for an audience who has only heard of the negative connotations– as in “that’s just empty rhetoric,” or (as an example from the video) the “art of bullshit.” But I have to say that this semester, in light of everything that has happened with the election and what seems to be a rise of a “post-truth era,” I did wince a bit when, at about the 10 minute mark in discussing “Epistemic Rhetoric,” the faculty interviewed here talk about how reality itself is constructed by rhetoric, about how everything we decide is based on judging between claims. I agree with this in theory, but the problem is this approach to reality is part of what’s enabling “Fake News” in the first place. It certainly has enabled Trump and his supporters to dismiss a story he doesn’t like as “fake” because if reality is based simply on how I see it being constructed rhetorically or on simply competing claims, why do we have to choose the same thing?

So how do we evaluate these claims of “Fake” versus “alleged,” and how should the press report the “unsubstantiated,” if they should report it at all? This is what I am getting at with this idea of the tests of “provenance” and “plausibility.” By provenance, I mean an understanding of the origin of the story. I’m thinking here in particular of the way that term is used in the art and antique world to help determine authenticity and value. An antique that is accompanied by documentation that traces the history of an object is a whole lot more valuable than the same object without that documentation, and forging those documents is always a problem. (As a tangent here, I’m reminded of the novel The Goldfinch). By plausibility, I mean the potential that a story might be true based on the other things we know about the story, such as the people and places involved, when it supposedly happened, and so forth. I think I mean something here like ethos, but I think it is beyond just the individuals or even beyond the available evidence. Plausibility for me doesn’t mean whether or not something is (T)rue, but more along the lines of the odds that it’s (T)rue.

A sense of provenance and plausibility probably exists on a spectrum of “truthiness” running from what I’ll call Fiction to (T)rue, and here I am mostly thinking of part of what Derek Mueller and I talked about the other day and/or the way that Bruno Latour talks about “black boxes” in Science in Action. I am far from a Latour scholar/expert so this reading might be a bit off, but basically, Latour points out that new discoveries/theories in science always depend on previously made discoveries/theories that are now presumed to be “(T)rue”– not in a “Platonic ideal across all space and time” notion of “Truth,” but in a “we’ve done this experiment a lot and gotten similar results so now presume it is a fact” sort of (T)rue. Geneticists are not running the experiments to determine the structure of DNA anymore because that is now just (T)rue and tucked away into a “black box”– which is to say there could be something we learn about DNA later that changes that and thus reopens that discussion.
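Just to play with this idea a bit more concretely, here’s a toy sketch of what combining the two tests might look like– to be clear, the scores, thresholds, and the `assess` function are all my own made-up illustration for thinking through the spectrum, not a real fact-checking tool:

```python
# Toy model: rate a story's provenance and plausibility on 0.0-1.0 scales,
# then place it on a spectrum from "fiction" to "(T)rue." All of the numbers
# and cutoffs below are invented for illustration.

def assess(provenance: float, plausibility: float) -> str:
    """Combine the two tests into a rough 'truthiness' label."""
    score = (provenance + plausibility) / 2
    if score < 0.25:
        return "fiction / fake"
    elif score < 0.75:
        return "unsubstantiated / alleged"
    else:
        return "(T)rue enough to black-box"

# Pizzagate: no documented origin beyond rumor, physically impossible details.
pizzagate = assess(provenance=0.1, plausibility=0.05)

# The Trump dossier: a traceable origin (an intelligence document circulating
# for months) and plausible given what else we know, but still unverified.
dossier = assess(provenance=0.7, plausibility=0.6)

print(pizzagate)  # fiction / fake
print(dossier)    # unsubstantiated / alleged
```

The point of the sketch is just that the two tests are independent axes: a story can score low on one and high on the other, which is exactly why “fake” and “unsubstantiated” shouldn’t be collapsed into the same category.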

To tease this all out, let’s compare the “fake” news that has been dubbed “Pizzagate” versus what I think is an “unsubstantiated” story about intelligence the Russians have about Trump.

“Pizzagate” was a conspiracy theory which claimed members of the Democratic Party– led by Hillary Clinton and her campaign manager John Podesta– were running an elaborate human trafficking and pedophile sex ring housed in the basement of a Washington, D.C. pizza restaurant called Comet Ping Pong (apparently, you can play table tennis while eating pizza). Snopes.com has an extensive entry about the controversy here, and the Washington Post also published this article tracing the origins of the story. In my mind, this is about as extreme an example of “fake” as it gets, but I think it’s an especially important example in at least two ways. First, the story spread through social media via ‘bots along with other conspiracy theorists like Alex Jones. Millions of people (and machines) reposted/retweeted this. Second, this story had real life and potentially very dangerous consequences since a North Carolina man named Edgar Maddison Welch, convinced the story was true, showed up at the pizza place with an AR-15 ready to free the children. Here’s a story from Mother Jones about Welch.

The allegations released by BuzzFeed about Trump were contained in a document supposedly part of an intelligence report/briefing about stuff the Russians have on Trump to potentially blackmail or otherwise compromise him. Here’s a link to the original BuzzFeed story that contains the entire report. As a slight tangent: much of the sensationalism has to do with the practice of “urolagnia,” which is sexual excitement associated with urine. I’ll admit, I find the idea of “golden showers” both gross and, as it has been reported, darkly funny. But a) this is far from the most unusual “kink” out there, and b) hey, if it’s between consenting adults and no one gets hurt, who am I to criticize anyone’s sex life? What is frankly more troubling in these allegations is the other stuff the Russians supposedly have on Trump in terms of real estate deals, the grooming of Trump as an “asset” of Russian intelligence, and the communications between Trump’s campaign and the Russians during the election cycle.

So, how do these stories stack up in terms of “provenance” and “plausibility?”

The provenance of both stories has already been explored and reported in some detail, and the difference between these two examples is quite clear. Pizzagate emerged as a combination of pure fiction and rumors; in contrast, the allegations about Trump and the Russians were part of an intelligence dossier that has apparently been in the hands of a variety of folks (including journalists) for months. This is not to say that the allegations against Trump are accurate or even close to (T)rue; however, we know a lot about the origins of this story.

The plausibility of these two stories is also quite starkly different. As even Edgar Welch discovered once he was on the scene at Comet Ping Pong, the Pizzagate story is just not possible because of the building itself– never mind the craziness of the rest of the details. On the other hand, the allegations of Trump’s behavior in Russia strike me as completely plausible– even though they probably didn’t actually happen. After all, Trump really did make a trip to Moscow when this is said to have happened (this was during the “Miss Universe” pageant). Further, we already know that Trump has made some cameo appearances in Playboy videos, has bragged about grabbing women by “the pussy,” and, as reported just this morning, is being sued by a former Apprentice contestant for sexual harassment and defamation. Obviously, these past activities don’t prove the allegations of his behavior in Moscow; however, I do think they help explain the plausibility of these allegations.

In my mind, this test of provenance and plausibility also works if we change the actors in these stories. I think it is implausible that Trump and Kellyanne Conway were running a pedophile sex ring out of a pizzeria, pretty much for the same reasons it was implausible for Clinton and Podesta. But I think the plausibility changes a bit with the Russian allegations, particularly the specifics of the “golden shower” show. I think these allegations, brought against politicians like Hillary Clinton, Obama, or either of the Bushes, would be dismissed as just not plausible. However, would we be as quick to dismiss this kind of story if it were about Bill Clinton?

Anyway, I don’t know how useful it is to think of fake news versus allegations versus real news this way, as on the spectrum of fiction and (T)ruth, as being about measuring provenance and plausibility. I’m not sure how necessary this is either given that there are lots of schemes and advice out there for testing the “truthiness” of news of all sorts, particularly as it manifests in social media. I do know one thing: we’re all going to have to get a hell of a lot better at thinking about and describing the differences between the fake, the alleged, and the real.

Re-Learning Some Email (and Server) Lessons

The other day on Facebook, I wrote:

I’ll say this about Hillary’s email mess: lots of people (some of my colleagues, lots of my students) don’t think it’s important to discuss and teach things like “how to send an email” or the basics of how “the intertubes works” because this is just stuff people don’t need to know. Email and stuff, the argument goes, is like your car– you don’t need to know how it works to drive it. Well, I hope this convinces people that’s wrong.

Maybe this is all obvious, but given what’s happened with this election, maybe not.

I should point out that I’m voting for Clinton and I hope you vote for Clinton too. I don’t think a “President Trump” (geez, it hurts putting those two words together, even hypothetically) would necessarily be the end of democracy as we know it and/or plunge the U.S. into Mad Max-esque dystopia, but I do know it would be a hot hot mess.

I should also point out that I think Hillary Clinton is the most qualified person (based on previous experiences, at least) to run for president in my lifetime. In a lot of ways, this is Clinton’s problem because even though I have “been with her” from the start, she has done/said/supported things over the last 30 years I disagree with– almost inevitable for anyone in public life that long. And yes, there are other ways in which Hillary and her family (I’m talking about “the big dog” here) have sometimes done stuff that doesn’t seem completely above board– again, almost inevitable for politicians in the public eye for decades.

But this email mess? In my opinion, it’s not a reason to vote against Clinton because I really really doubt there was any criminality there, either intentional or unintentional. (And as a slight but relevant tangent: let’s just set aside the fact that the government argues amongst itself all the time about what’s a “secret,” how information should be classified, and the proper procedures for handling this information. The second Bush administration apparently had an email server owned and operated by the RNC that “lost”/deleted 22 million or so emails, lots of other politicians have in the past or currently still operate some version of a private server, etc., etc. In other words, lots of politicians have done a version of what Hillary did, but the difference is Hillary is running for president.)

So vote for Hillary Clinton, okay? But let’s also learn (or really, relearn) some email basics based on these mistakes, both the ones that she has made and the mistakes I know I continue to make all the time.


Instead of banning laptops, what if we mandated them?

Oy. Laptops are evil. Again.

This time, it comes from “Leave It in the Bag,” an article in Inside Higher Ed, reporting on a study done by Susan Payne Carter, Kyle Greenberg, and Michael S. Walker, all economists at West Point (PDF). This has shown up on the WPA-L mailing list and in my various social medias as yet another example of why technology in the classrooms is bad, but I think it’s more complicated than that.

Mind you, I only skimmed this, and all of the economics math is literally a foreign language to me. But there are a couple of passages here that I find interesting and not exactly convincing that my students and I should indeed “leave it in the bag.”

For example:

Permitting laptops or computers appears to reduce multiple choice and short answer scores, but has no effect on essay scores, as seen in Panel D. Our finding of a zero effect for essay questions, which are conceptual in nature, stands in contrast to previous research by Mueller and Oppenheimer (2014), who demonstrate that laptop note-taking negatively affects performance on both factual and conceptual questions. One potential explanation for this effect could be the predominant use of graphical and analytical explanations in economics courses, which might dissuade the verbatim note-taking practices that harmed students in Mueller and Oppenheimer’s study. However, considering the substantial impact professors have on essay scores, as discussed above, the results in panel D should be interpreted with considerable caution. (page 17)

The way I’m reading this is that for classes where students are expected to take multiple choice tests as a result of listening to a lecture from a sage on the stage, laptops might be bad. But in classes where students are supposed to write essays (or at least answer more conceptual essay questions), laptops do no harm. So if it’s a course where students are supposed to do more than take multiple choice tests….

After describing the overall effects of students performing worse when computing technology is available, Carter, Greenberg, and Walker write:

It is quite possible that these harmful effects could be magnified in settings outside of West Point. In a learning environment with lower incentives for performance, fewer disciplinary restrictions on distracting behavior, and larger class sizes, the effects of Internet-enabled technology on achievement may be larger due to professors’ decreased ability to monitor and correct irrelevant usage.” (page 26)

Hmmm…. nothing self-congratulatory about that passage, is there?

Besides the fact that there is no decent evidence that the students at West Point (or any other elite institution for that matter) are on the whole such special snowflakes that they are more immune to the “harm” of technology/distraction compared to the rest of us simpletons, I think one could just as easily make the exact opposite argument. It seems to me that it is “quite possible” that the harmful effects are more magnified in a setting like West Point because of the strict adherence to “THE RULES” and authority for all involved. I mean, it is the Army after all. Perhaps in settings where students have more freedom and are used to the more “real life” world of distractions, large class sizes, the need to self-regulate, etc., maybe those students are actually better able to control themselves.

And am I the only one who is noticing the extent to which laptop/tablet/technology use really seems to be about a professor’s “ability to monitor and correct” in a classroom? Is that actually “teaching?”

And then there’s this last paragraph in the text of the study:

We want to be clear that we cannot relate our results to a class where the laptop or tablet is used deliberately in classroom instruction, as these exercises may boost a student’s ability to retain the material. Rather, our results relate only to classes where students have the option to use computer devices to take notes.   We further cannot test whether the laptop or tablet leads to worse note taking, whether the increased availability of distractions for computer users (email, facebook, twitter, news, other classes, etc.) leads to lower grades, or whether professors teach differently when students are on their computers. Given the magnitude of our results, and the increasing emphasis of using technology in the classroom, additional research aimed at distinguishing between these channels is clearly warranted.(page 28)

First, laptops might or might not be useful for taking notes. This is at odds with a lot of these “laptops are bad” studies. And as a slight tangent, I really don’t know how easy it is to generalize about note taking and knowledge across large groups. Speaking only for myself: I’ve been experimenting lately with taking notes (sometimes) with paper and pen, and I’m not sure it makes much difference. I also have noticed that my ability to take notes on what someone else is saying — that is, as opposed to taking notes on something I want to say in a short speech or something– is now pretty poor. I suppose that’s the difference between being a student and being a teacher, and maybe I need to relearn how to do this from my students.

This paragraph also hints at another issue with all of these “laptops are bad” pieces: “whether professors teach differently when students are on their computers.” Well, maybe that is the problem, isn’t it? Maybe it isn’t so much that students are spending all of this time being distracted by laptops, tablets, and cell-phones– that is, that students are NOT giving professors the UNDIVIDED ATTENTION professors believe (nay, KNOW) they deserve. Maybe the problem is professors haven’t figured out that the presence of computers in classrooms means we have to indeed “teach differently.”

But the other thing this paragraph got me thinking about is the role of technology in the courses I teach, where laptops/tablets are “used deliberately in classroom instruction.” This paragraph suggests that the opposite of banning laptops might be just as sensible: in other words, what if, instead of banning laptops from a classroom, the professor mandated that students each have a laptop open at all times in order to take notes, to respond to on-the-fly quizzes from the professor, and to look up stuff that comes up in the discussions?

It’s the kind of interesting mini-teaching experiment I might be able to pull off this summer. Of course, if we extend this kind of experiment to the realm of online teaching– and one of my upcoming courses will indeed be online– then we can see that in one sense, this isn’t an experiment at all. We’ve been offering courses where the only way students communicate with the instructor and with other students has been through a computer for a long time now. But the other course I’ll be teaching is a face to face section of first year writing, and thus ripe for this kind of experiment. Complicating things more (or perhaps making this experiment more justifiable?) is the likelihood that a significant percentage of the students I will have in this section are in some fashion “not typical” of first year writing at EMU– that is, almost all of them are transfer students and/or juniors or seniors. Maybe making them have those laptops open all the time could help– and bonus points if they’re able to multitask with both their laptop and their cell phones!

Hmm, I see a course developing….

When is it okay to make fun of grammar?

Remember Weird Al? Yeah, me neither. Well, no– that’s not true. Of course I “remember” Weird Al from lots of different parodies over the years, all the way from “My Bologna” to “Like a Surgeon” to his latest releases that have come out this past week. It’s just that I don’t find myself thinking about Weird Al one way or the other– except when he pops up in the media once in a while, like now.

WA has a new album out and one of his parody songs is called “Word Crimes:”

Sung to the tune of “Blurred Lines,” it’s a series of common “grammar nerd” criticisms that are ridiculously picky (it is a parody, of course) and that rhyme in funny ways. As someone who appreciates word humor, I thought it was funny and I didn’t think much more about it. Ha ha.

And then the hating/backlash began.

There was Forrest Wickman’s Slate article, “Weird Al Is Tired of Your “Word Crimes” in New Video,” which goes into equally silly detail in out-pet-peeving WA’s pet peeves. A more pointed critique came from Mignon “Grammar Girl” Fogarty here, “Weird Al’s “Word Crimes” Video.” She is not amused:

Perhaps the most troubling thing for me is seeing teachers who say they are going to use this in class because kids will find it funny and it will make them care about grammar. The entire ending of the video is putting down people who have trouble writing. The video says it’s OK to call people who can’t spell morons, droolers, spastics, and mouth breathers. Really, you’re going to use an educational tool that tells your struggling kids that they’re stupid? It just blows my mind that any teacher would think that’s OK.

It’s also hard for me to separate my feelings about this video from my feelings about his 2010 grammar videos that reinforce simplistic ideas, such as one in which he goes off about signs that read drive slow being wrong. The problem is that slow can be used as something called a flat adverb. The sign isn’t wrong, but drive slow is one of those things that people who don’t bother looking things up love to rant about. Those videos were extremely popular, so I imagine at least a few people told him that he got it wrong, but his comments from the NPR video suggest to me that he didn’t take the time to listen to those people and figure it out—that he still thinks he was making those signs better. If, as he says, “correcting people’s grammar is kind of a big deal” for him, then with the kind of power he has, I expect him to get things right.

The bottom line is that I don’t believe in word crimes, and I don’t believe in encouraging people to think about language that way.

In my Facebook world of comp/rhet folks, there seems to be a fair number of people in the Grammar Girl camp, finding WA’s song offensive– it’s not funny to make fun of people who can’t spell, it’s not funny to make fun of people who can’t write, we don’t need to be calling bad writers dumb, etc., etc., etc.

First off, I’m not going to “mansplain” the definition of parody to anyone. That’s a recipe for disaster. Though one fun fact: here’s the second link I found on Google searching for parody. That WA is everywhere right now.

But in a tradition that includes  a “modest proposal” to eat the children of the poor and more recently a runaway hit Broadway musical that skewers Mormonism with lots of filthy and hilarious songs, it seems kind of strange to me for people to get bent out of shape over “Word Crimes.” Even for a Weird Al video, this is pretty tame stuff.  Where were these people with arguably more offensive WA parodies like the racially charged “White and Nerdy” (fun fact– this video has Key and Peele in it!), or the food/fat-hating “Eat It” and “Fat?”

So, is it ever okay to parody and/or make fun of bad writing, grammar, and students? Are these even more off-limits than fatness, religion, and eating babies?

Don’t get me wrong– I don’t think it would be fair to make fun of/mock particular students in public, which is where sites like Shit My Students Write more or less cross a line. There is at least the illusion that these are “real” quotes from “real” students– though I think that the realness here is debatable. Though some of the stuff on that site is pretty funny.

Of course I don’t think a prescriptive/pet peeve approach to grammar is write for teaching at any level and I’ve never done that. Of course it’s not useful to call students dumb or accuse them of committing “word crimes” or whatever. Of course.

But bad writing is funny and fair game for parody, and you know what? There are “word crimes” of various sorts. We see them every day in bad apostrophes or stupid exclamation points or “unnecessary” quotation “marks” or even passive aggressive notes. My experience has been that these kinds of “word crimes” are ones that students at all levels recognize, and they’re often actually an entry into a less picky discussion of what constitutes correctness and the rhetorical/persuasive impact of effective or ineffective grammar.

So lighten up, people. But don’t get me started on that bastard’s mocking of the Amish.

Exigency and viral feedback on “Innocence of Muslims”

This morning, I watched a few of the Sunday morning news shows, and part of the discussion was about the various riots across the Muslim world that came about from this movie (or a part of this movie), Innocence of Muslims.  A couple of comments/typing-aloud sorts of observations.

  •  This is a very veeerrryyyy weird movie.  Time had an interview with one of the actors, who said that none of the experience made a lot of sense to anyone on the set, but basically a job is a job.  All the anti-Islamic stuff is clearly dubbed in, and the 14-minute clip I link to here lends some credibility to this.  In places, it has the same obviously dubbed-in jerkiness of Barack Obama singing “Call Me Maybe.”  In other words, beyond being anti-Islamic and racist and hateful and all of that, it’s just horrifically bad, so bad that I wonder if it would be better to think of it not so much as the cause but the opportunity of the events that continue to unfold.
  • I think it’s more complicated than a “video that went viral” on YouTube.  Not to rely too much on Time for this, but the article “The Agents of Outrage” points out that the movie (perhaps the whole thing?) was “screened in Hollywood early this year but made no waves whatsoever.”  It went up on YouTube and got into the hands of anti-Muslim Coptic Christians and infamous Koran-burning Pastor Terry Jones in the hate blogosphere.  But it really didn’t escalate in Egypt and then Libya until someone named Sheik Khaled Abdaallah talked about it on his TV show in Egypt.  Abdaallah is described in this Time article as “every bit as inflammatory and opportunistic as Jones” (only he’s a Muslim highly critical of the Copts), so what we have here in a way is one extremist hate group versus another extremist hate group.  The point is I don’t think the video on YouTube itself spread virally before it was spread in comparatively older media.
  • In any event, now there are protests all over the place, and I am willing to wager that the vast majority of the folks protesting at American (and apparently European) embassies around the world have not seen any of the movie that may (or may not) have been the exigency for these protests in the first place. I would even go so far as to say that if at least some of these protesters did see the clips of the video being circulated, they too would be confused.  I think most of the protesters now are protesting in reaction to the other protests and not the movie itself.  In that sense, it’s the other protests (and the coverage of them in the media) that have gone viral and not the original movie.
  • In the fourth chapter of my dissertation, I write about how easy it is in rhetorical situations mediated through technologies like the internet for the boundaries between the rhetor, the audience, and even the message itself to break down.  I specifically wrote about a “Mac vs. DOS” question to a mailing list and how that discussion moved far away from the original point of the question, and I argue that this is one of the inherent conditions of “immediate” rhetorical situations. But it is also simpler than that.  For example, there have been a couple of riots at MSU following basketball team losses, riots where the exigency was initially related to a game but which changed as the riots progressed.  And obviously, not everyone participated in the riot as a result of the MSU loss; rather, some rioters took it merely as an opportunity to loot and cause damage.  I suspect there’s some of this going on with these riots.
  • Apparently there is some dispute as to whether or not Ambassador to Libya Chris Stevens was killed as a result of the protests getting way out of hand or if it was premeditated, thus making the protests a sort of “cover story” for a previously planned killing. And what isn’t really being talked about much is the extent to which this was all connected to the anniversary of 9/11 and the extent to which the killing was undertaken by al-Qaeda related groups.  Of course, this too is still emerging.
  • And what you also see here is just good-ol-fashioned culture clash.  Folks in these countries where there are strict rules on what can and cannot be said about Islam or what-have-you wonder why there aren’t laws against this sort of blasphemy in the U.S.  Americans (and I suspect many others in “the west”) uphold the value of free speech even when it is hateful speech, and we (well, at least I do) wonder why such a shoddily done and ridiculous video that should perhaps best be simply ignored has gotten this much attention.  Add to that a technology– YouTube et al– that makes it pretty much impossible to keep this particular video out of the hands of people who want to see it (even though YouTube has blocked it in some countries like Egypt), combined with the fact that the protests themselves are being broadcast online, and you have a feedback loop here: protest leads to protest.

Learning vs. Teaching vs. Credentialing

There have been a couple of interesting developments in higher education news in the last couple of days that have me thinking more about how the “education” part of things in colleges and universities actually works.  First, there’s the announcement that U of M and several other universities will be offering “free courses” on a variety of different topics for anyone out there on the internets who might be interested.  This is being done through an outfit/start-up called Coursera, which I assume is making money through data mining of its users and maybe by eventually morphing into an actual credit-granting enterprise.  Here’s an interesting quote from the annarbor.com article about this:

For U-M, adapting to the platform gives faculty a unique way to communicate with alumni and prospective students.

“This is a great way for alumni or prospective Michigan students to experience a little bit of what a U-M education is like,” Scott Page, the professor teaching Model Thinking, said in a release.

Added Martha Pollack, vice provost for academic and budgetary affairs: “This is one more way for us to connect with prospective students and alumni.”

The other event– seemingly the opposite kind of thing but maybe not– comes from Inside Higher Ed in the article “Pacing Themselves.”  Here’s a long quote:

The media conglomerate Pearson today announced a partnership with Ivy Tech Community College of Indiana to provide online, self-paced courses that the company says will help Ivy Tech deal with student demand and overcrowding issues in required general education courses.

For Pearson, which already sells modules for instructor-led courses, the move represents a further step in the company’s strategy of inserting itself into virtually every area of e-learning short of full degree programs.

“We thought it was time for us to have a self-paced play that our partners could then plug into their institutions and get more students into higher education,” said Don Kilburn, the CEO of Pearson Learning Solutions.

Meanwhile, the partnership allows Ivy Tech to refer certain students to hands-off self-paced general education courses — which it does not currently offer — without building such courses itself.

“It is a way to test out that modality and see if it works for some students without taking a lot of business risk on our own,” said Kara Monroe, associate vice president for online academic programs at Ivy Tech.

Both of these events problematize in strange ways the educational mission of colleges and universities.  And by “educational mission” of the university, I basically mean three things:

  • Learning, or more accurately, extending to students the opportunity to learn.  Universities are pretty good at that, but so are lots of other things– wikipedia, the public library, about.com and other web sites, a good book, life, etc.
  • Teaching, which is when a professor (or instructor or adjunct or grad student) guides a student in learning something.  There’s really nothing I teach that students can’t learn on their own through some of the things I mentioned as sources for learning, but the advantage students get in being taught a subject by someone who knows a lot about the subject is guidance, interactions with other learners, systematic efficiency (because teaching is really good at steering learning in a way that is less likely to be counter-productive), positive (and negative) feedback, and so forth.
  • Credentialing, which means some sort of evaluation that is recognized by others as having some merit.  Practically speaking, this means a “seal of approval” (e.g., grades) given by teachers for these discrete learning units we call “courses,” which are systematically taken (a “major” which leads to a “degree”) and which are also validated by institutions (say EMU) which are in turn validated by both official evaluators (say the North Central Association) and unofficial but certainly more powerful evaluators (various “top university” rankings like US News, what employers say, word of mouth, etc., etc.).

Now, learning, teaching, and credentialing are obviously all related, though in complicated ways.  For example, teaching someone something is not the same as them learning it.  It takes a willingness to learn and to be receptive to teaching, and everyone who has ever taught– especially something not considered “fun” by many, like first year composition– knows there are a surprising number of students who don’t seem able or willing to take on the learning challenge.  Another example: a lot of professors are completely comfortable with the teaching and learning part of education, but most would just as soon avoid the credentialing/grading part of what we have to do to make this whole enterprise work.  Faculty get even more squeamish when we talk about the mean cousin of credentialing, assessment.

Anyway, to turn back to what I find troubling about both the Pearson Learning Solutions and Coursera deals.  It seems to me that the Pearson “solution” is a rather cynical way of skipping ahead to just the credentialing leg of the stool and calling it a day.  There’s obviously no teaching involved, and with only an e-textbook and “10 free hours of online tutoring support,” it doesn’t seem to me like there’s much of a chance for a lot of student learning here either. Besides that, the credential they are trying to provide here is minimal at best.  I mean, given that the unofficial value of an Ivy Tech Community College degree is probably pretty low to begin with (certainly relative to the institutions sponsoring Coursera courses), what does this sort of move do to the perceived market value of their degrees?

The Coursera “great minds” courses might seem at first to be a completely different and more noble venture, but it seems to me that this isn’t education.  Sure, there’s a lot of learning potential with these classes, but so what?  There are already plenty of places on the ‘net to learn about science fiction and fantasy literature, for example.   As of right now, U of M (and I suspect the other institutions associated with this) is not really thinking of this as education at all; it’s marketing, something that might connect with alumni and maybe with potential students.

Of course, this could change.  I seriously doubt that U of M would ever accept their own Coursera courses as credential-worthy credit that is the same as their more traditional courses, but that doesn’t mean that Coursera isn’t going to try to sell that credit to someone else.  Inside Higher Ed had an article on all this, and here’s a passage that made me think about this idea of Coursera granting credit à la Pearson:

“There are no definite plans yet for what courses, if any, might have certificates and, if they exist, how much might be charged for them,” wrote MacCarthy via e-mail. “That said, if there were to be some monetization and revenues in the future, universities would partner with Coursera in determining any future structure or pricing for certificates.”

Ng, one of the Coursera founders, said “no firm decisions have been made yet” on how the company’s university partners might recognize the achievement of their non-enrolled students. “We’ve had informal discussions with the partner universities about different certificate options, but the final decision will be made on a per-university and per-course basis,” Ng wrote via e-mail.

These certificates wouldn’t be the same as credit– well, at least initially, and at least at a place like the University of Michigan.  I can imagine a scenario where the Ivy Techs of the world say “sure, we’ll count that as credit,” and I can also imagine a slippery slope where all kinds of institutions– maybe never U of M but places like EMU– start counting a certain number of these certificates as transfer for things like general education.

The other thing that both Pearson and Coursera are attempting here is a version of education without teaching.  This is patently obvious in the Pearson/Ivy Tech arrangement, but it is also the case with the Coursera courses.  The idea here is to have tens of thousands of students in these classes– potentially a great learning environment, but not something where you could really expect any meaningful teaching.  At best, the teaching that might take place is in the form of an army of part-timers watching over those thousands of students participating in discussions and quizzes and the like.  That appears to be the case with their hiring.

So I really don’t think this is the future of higher education on the internet.  At least I hope this isn’t the future of higher education on the ’net.  I’d kind of like to keep the teaching in education….
The CCC Online Now (and before and before and for how long)

The other day on the WPA-L mailing list (this is the main electronic mailing list/exchange in the composition and rhetoric world), Bump Halbritter announced the CCC Online, version (edition?) 1.1.  Here’s a link to the public table of contents.  Since I have some history with previous versions of the CCC Online and even with this issue (and since this came up in some Facebook comments too), I thought it over and decided to write a response to the announcement.  A long response.  And I decided to post it here, too.

Read on in the continued part.  Hopefully, I don’t ruffle too many feathers and/or get into too much trouble….

Continue reading “The CCC Online Now (and before and before and for how long)”

A quick post on 9/11, killing Bin Laden, and the Internets

On the morning of September 11, 2001, I was at home not feeling particularly well– a cold or something.  I mowed the lawn, and then came in and just happened to turn on the TV and saw a story about a plane crashing into one of the World Trade Center towers, a crash that network news first reported as an accident.  Until the next plane hit, and then the Pentagon, and then a field in Pennsylvania. I think it’s fair to say that pretty much everyone in the U.S. (maybe the western world) who had a television (especially with cable) spent the next 72 hours or so watching the news, with breaks to nap, go to the bathroom, and drink.

Ten years passed and many many things happened.

Then, Sunday night (which, oddly enough, was the eighth anniversary of Bush’s infamous “mission accomplished” speech) I was getting into bed with my iPad at 10 pm or so, planning to read a bit on the kindle app before getting to sleep and ready for the beginning of the spring term.  I checked Facebook first and saw someone (I can’t remember who) in my feed had posted that Obama was giving a previously unannounced speech at 10:30.  Uh-oh, I thought, and got out of bed to turn on the TV, my iPad (with Twitter and Facebook) by my side.

You know the rest, and I am sure there will be many more examples of the sort of piece that is running on The Atlantic’s web site.

Anyway, that makes me think of at least two things:

  • 9/11 was a very clear “exigency” in that it was obviously the beginning of a new situation, although arguably from the point of view of Bin Laden, Al Qaeda, and related groups, 9/11 was merely the middle of a fight that began much much earlier– CIA involvement in Afghanistan, the 1998 bombings of the US embassies in Africa, etc.  On the other hand, the death of Bin Laden doesn’t seem like an “ending” or a “decay” of this situation, either.
  • I’m not quite sure what it means that I have heard about this (and nearly all other “breaking news” in the last year or so) first via Facebook and/or Twitter, then followed it up with live coverage on TV, and then still later, with writings published on the web or even on paper.  But it means something for sure, something about what “mainstream media” is and is not still capable of doing well.

Four Thoughts on Wikileaks

Not necessarily in this order:

  • If the mainstream media did its freakin’ job, Wikileaks would be irrelevant.  The only reason this is much of a story at all is because the MSM, too lazy and/or too afraid of and/or owned by “the man” to actually dig around and investigate and look for whistle-blowers on its own, is perfectly happy to have Wikileaks do its homework for it.  The MSM isn’t in trouble because of the internets or whatever; it’s in trouble because it can’t do as good a job of telling people what’s going on as a bunch of half-baked computer hackers.
  • I haven’t read anything on Wikileaks lately, but I have yet to hear a “leak” that was really too surprising.  Various cables about various world leaders might be embarrassing, but I think we already knew that the people running North Korea are nuts, the people running Afghanistan are corrupt, and even that a lot of the other countries in the Middle East would be kind of okay with the U.S. putting a beat-down on Iran.  Now, if Wikileaks uncovered something like 9/11 being an “inside job” or how the U.S. has been secretly supporting North Korea (just to keep tensions high) or about our contact with aliens at Area 51 or whatever– if any of that happens, then we’re talking.
  • I’m generally for the idea of Wikileaks, but it’s hard for me to get too far behind it in part because Julian Assange seems like a real piece of work.  Even before the rape/sexual assault charges in Sweden, he seemed kind of… I don’t know, smarmy to me.  He seems sort of like a more liberal/libertarian version of Matt Drudge, and I don’t mean that as a compliment.
  • Derek and I were talking the other day about how Ratemyprofessor.com and Wikileaks seem kind of similar– reckless, based mostly on rumor and unsubstantiated reports, mixed with a twist of “the truth.”