Post from sabbatical-land 202 days to go: a tangent thought about the need (or lack thereof?) for teaching code in web writing courses

I have been doing some reading and writing that is more directly tied to my MOOC sabbatical project than this post, honest. Lately, I’ve been reading and writing about correspondence schools and how they were influenced by the 19th-century Chautauqua Institute and movement. I’ll spare you the details for what I am assuming are obvious reasons, but here’s a fun fact about doing this kind of research nowadays. Part of what I needed/wanted to hunt down was a sort of infamous quote from William Rainey Harper, who was the first president of the University of Chicago and an early proponent of correspondence schools. He predicted that the day was coming when most students would take courses via the mail. Anyway, he has a longish passage laying out his thoughts on the pros and cons of correspondence/distance education in an 1885 book by John Heyl Vincent called The Chautauqua Movement, which, conveniently enough, is available in its entirety via Google Books. Who says the Internets isn’t good for anything?

Where was I? Oh yes, speaking of the Internets:

In the fall, I’m liable to be teaching a class I’ve taught several times before, Writing for the World Wide Web, and I’m on the cusp of thinking that this might be the first time I teach that class spending only a minimal amount of time on HTML and CSS. Maybe just the Codecademy course on HTML & CSS; maybe not even that much.

I think the thing that has kind of pushed me over the edge on this is Jeff Bridges’ web site. Or more specifically, Squarespace and their Super Bowl ad. That’s a service that’s perhaps a little more about selling stuff than we tend to talk about in Writing for the Web, but as far as I can tell, it’s a drag-and-drop kind of app for setting up a site. Then there’s Wix. It’s a little wonky, but it is all drag-and-drop stuff and it took me about 3 minutes to make this free page. (Sure, it makes really ugly code, but it does work, mostly.) Of course, there’s WordPress, which is something I already introduce to students, and it was the option of choice in this Vitae piece “How to Build a Website in 5 Steps.” I’m sure there are a lot of other options out there for this kind of thing.

Back in the old days, the WYSIWYG options for HTML/CSS editing were poor– and I would include everything from the versions of Dreamweaver I’ve seen all the way back to the editor that came with one of the early versions of Netscape. I remember as early as about 1997 there were folks in the computers and writing world who were saying there was no point in wading into coding. But while those early WYSIWYG tools were helpful, they were glitchy and unreliable, meaning they were more like “what you see is what you get a lot of the time but not all the time,” and if you didn’t know enough about coding to figure out what was going wrong, you were pretty much screwed. As a teacher, I learned pretty quickly that it was more time-consuming to not teach students HTML building blocks, because when they tried to make a web site with one of these apps with no clue about the code underneath, they would get stuck and I’d have to spend a lot more time helping them get unstuck. In any event, I taught code back then because writing web pages required writing code. These weren’t two different functions/jobs, much in the same way that printers a few hundred years ago directly employed writers and were themselves the publishers and booksellers.
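
(A quick aside on what I mean by “building blocks”: nothing fancier than a bare-bones page along these lines. This is just my own throwaway illustration, not anything from the actual course materials, but it’s roughly the level of markup and CSS I’m talking about, and roughly where the Codecademy lessons start.)

    <!DOCTYPE html>
    <html>
      <head>
        <title>Writing for the Web</title>
        <style>
          /* one CSS rule: make every paragraph on the page a little easier to read */
          p { font-family: Georgia, serif; line-height: 1.5; }
        </style>
      </head>
      <body>
        <h1>Hello, web writing students</h1>
        <!-- a heading, a paragraph, and a link: most of what that unit ever covered -->
        <p>Here is a paragraph with a <a href="https://www.codecademy.com/">link</a> in it.</p>
      </body>
    </html>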

That was then and this is now. I haven’t spent a whole lot of time with Wix or Squarespace, but they both seem easy and robust enough for a beyond-basic site. It’s useful to understand some of the basics of HTML/CSS coding stuff for WordPress, of course, but it’s not critical. So if the goal of a class like Writing for the Web is to have students present/study content on the web in some rhetorically meaningful way, then spending time on code just isn’t as important as it used to be. If the goal of a class like this is to also professionalize students to work “in the field,” coding might be a bit more important, but maybe not.  Any kind of entity or company that would employ someone as a technical/professional writer (broadly speaking) probably would also employ a full-time IT person who deals with the technicalities of the coding of the web site. And of course, that IT person is probably working with a lot of other stuff that I’ve heard of but don’t understand– Python (which reminds me: I should check into my Coursera course on that today), Ruby on Rails, PHP, etc., etc.

Writing for the Web has always included elements of a computer programming class (not to mention a graphic design class and an audio-video production class), but it seems to me that the space between the coding/programming that makes the modern web work and the content delivered on the web has widened. And while it is arguably a good idea for anyone interested in going into anything that smacks of content development nowadays to take some basic programming classes, the course I teach focuses more on the content.

As I teach it at least, the course has moved more toward social media issues, web style, usability, and the decisions writers have to make to re-present “words in a row” essays as web sites. I still teach a large HTML/CSS component in the class, but I’m beginning to think that the time spent on that isn’t worth it anymore. Or maybe it’s a different class: that is, maybe there is a need for a “coding for writing majors” kind of course where the focus really is on working through all the exercises at Codecademy.

Something more I’ll have to think about in around 200 days.

Enough with the “no laptops in classrooms” already

There has been a rash of “turn off the laptop” articles in various places in the educational media, but I think what has pushed me over the edge and motivated this post is Clay Shirky’s “Why I Just Asked My Students To Put Their Laptops Away” on Medium. In a nutshell, Shirky went over to the no-laptop camp because (he says) students can’t multitask and are too easily distracted by the technology, particularly by the constant alerts from things like Facebook.

Enough already.

First off, while I am no expert regarding multitasking, it seems to me that there are a lot of different layers to multitasking (or perhaps it would make more sense to say attention on task) and most of us perform some level of multitasking all the time.  Consider driving. I think it’s always a bad idea to be texting while actually moving in traffic because, yes, that’s too much multitasking for most people. But how about texting or checking email or social media while at a long light? I do it all the time. Or how about talking on the phone? For me, it’s easy to talk on the phone while driving if I am using headphones or if I’m driving a familiar route in normal conditions. When I’m driving an unfamiliar route in bad weather or in heavy traffic, not so much.

Second, distraction and not paying a lot of attention in class aren’t exactly new. When I was in high school, I sat in the back of the room in that chemistry class I was required to take and read paperbacks “hidden” under the table. Students used to pass these things called “notes” on paper. Students did and still do whisper to each other in distracting ways. As both a college student and a college teacher (certainly as a GA way back when), I’ve been with/had students who were distracted by and multitasking with magazines, newspapers, other people, napping, etc., etc.

I agree with Shirky and some of the articles he cites that what’s interesting and different about contemporary electronic devices generally and social media in particular is that these are designed to distract us, to break our concentration. I routinely experience the sort of instant and satisfying gratification suggested in the abstract of this article. But to suggest that teachers/professors can solve this attention problem by asking students to temporarily turn off their laptops and pay attention to the sage on the stage strikes me as both naive and egotistical.

So here are three tips for Clay and other laptop haters on how to mentally adjust to the inevitability of laptops in their classrooms.

Number one, stop lecturing so much. When professors take the “stand and deliver” approach to “teaching,” the laptops come out. And why shouldn’t they? In an era when anyone can easily record video and/or audio of a lecture that can be “consumed” by students on their own time, why should they sit and pay attention to you droning on?

I realize this is easy for me to say since I teach small classes with 25 or fewer students, but there are lots of ways to break up the talking head in a large lecture hall class too. Break students into groups and ask them to discuss the reading. Ask students to take a moment to write about a question or a reading and then share what they wrote. Require your students to discuss and respond to each other. Use the time in class to actually do work with the laptops, individually and collaboratively. Just stop thinking that teaching means standing there and talking at them.

Number two, be more interesting. If, as a teacher (or really, just a speaker), you are noticing a large percentage of students not paying attention and turning to laptops or cell phones or magazines or napping, there’s a pretty good chance you’re being boring. I notice this in my own teaching all the time: when my students and I are interested in a conversation or an activity, the laptops stay closed. When I start to drone on or it otherwise starts getting boring, I see the checks on Facebook or Twitter or ESPN or whatever. I use that as a cue to change up the discussion, to get more interesting.

Number three, “Let it Go.” Because here’s the thing: there’s really nothing professors can do (at least in the settings where I teach) to completely eliminate these kinds of distractions and multitasking and generally dumb stuff that students sometimes do. Students are humans and humans are easily distracted. So instead of spending so much time demanding perfect attention, just acknowledge that most of us can get a lot done with a laptop open. If you as the teacher are not the center of the universe, it’ll be okay.

A #cwcon 2014 in Pullman recap

I had an educational/fun time at the Computers and Writing Conference last week in Pullman, and I promise I’ll get to that after the jump. But let me get some complaining out of the way first.

I still wish that there were something more of an “organization” behind the annual Computers and Writing Conference, something more akin to the ATTW or RSA or CPTSC or whatever– not necessarily as structured and rigid as giant organizations like NCTE or the CCCC, but something more than the current non-structured affiliation (sorta/kinda) with a standing committee of the CCCCs, which lacks an election process, term limits, and (IMO) transparency. I’ve already voiced these complaints on mailing lists like tech-rhet– and by the way, my complaining a few months ago surfaced at this conference in the form of a few people saying to me stuff like “I’m glad someone finally said something” and a few others obviously avoiding me. But maybe more organization isn’t necessary since there are other more organized groups out there. Anyway, got that off my chest. Again.

I still wish C&W would be held in an accessible location more than once every four or five years. Last year it was Frostburg, Maryland; this year, Pullman; next year (and of course we didn’t know the conference was going to happen at all until a few weeks ago), it’s going to be at the University of Wisconsin-Stout in Menomonie, which is just over an hour’s drive away from Minneapolis.  Not so distant past locations for the conference include Muncie, Indiana; Lubbock, Texas; and Normal, Illinois. Maybe for 2016, we need to go really remote, like Guam. (Actually, that might be kinda cool, Guam….)

I am still feeling a little “conferenced out” in general, and I only went to two this year– this one and the CCCCs in March. This complaint is not about Computers and Writing; it’s about the place where I am personally and professionally with academic conferences. Sure, I can and do learn a lot from attending conference sessions (see below) and a conference presentation does count on my C.V. for something, even if only five or so people come to my session (also see below). But with my meager travel budget (this jaunt to Pullman was completely out of pocket for me since I spent my money going to the CCCCs) and with other scholarly venues for presenting my scholarship (e.g., here, journals, more local events, etc.), I think I really need to rethink the whole conference thing and cut way back.

(Of course, I say that and then I do something different. There’s a pretty decent chance that I’ll go to at least three conferences next year, though two of them would be in Michigan).

Alright, enough whining. C&W 2014 in Pullman was pretty cool.


If you can’t beat ’em and/or embracing my DH overlords and colleagues

A few days ago, Marc Bousquet posted on Facebook a link to “Technology Is Taking Over English Departments: The false promise of the digital humanities” by Adam Kirsch, published in the New Republic.  Kirsch obviously doesn’t think highly of the digital humanities, seeing technology as coming at the expense of the feel and smell of paper and the old-fashioned magic of old-fashioned reading, and Bousquet obviously didn’t think much of Kirsch’s critique. Bousquet posted about the Kirsch article twice for some reason; to quote (can I quote Facebook like this?):

“Technology Is Taking Over English  http://t.co/d21kSd5opr Ahistorical & stupid cuz comes from a lit-dh discourse bypassing rhet-comp. Duh.”

and

“DH added strawberries to breakfast cereal! The era of breakfast cereal is over! Moral panic in lit makes it to TNR: http://t.co/d21kSd5opr”

I agree with Bousquet: Kirsch’s piece is wrong, but it’s more than that.  I think it is in places almost perfectly, exquisitely wrong. To me, it’s like a rhetorical question that falls flat on its face because of Kirsch’s many assumptions about the problems of the digital and the purity of the humanities. And this made me realize something: it’s time for me to admit that I’m actually a digital humanities scholar/teacher and have been all along. It’s time for me to put aside petty arguments and differences (I’ll get to that below) and jump on that bandwagon.

A #cwcon 2013 story

My first computers-and-writing-related lesson was on the drive to the annual Computers and Writing Conference and it was about the agency (or authority or trust) we put in our machines, specifically our cell phones, as if they were reliable people. I was travelling by myself and my only navigation equipment was my iPhone. I didn’t pay a whole lot of attention to the route my iPhone had planned for me until I got close to Frostburg, and by the time I did start to pay attention, it was too late.  The “route 1” Apple Maps planned involved about 20 steps for the last 40 miles– turn left down this street, right for 1000 feet down this road that looks more like an alley, left again, etc. This might have been scenic, but just as I started all these crazy turns, thick fog settled in. And I mean the kind of thick fog that makes for scary, white-knuckle driving under the best of circumstances. I could not see anything beyond the edge of the road– not that there was much to see beyond the edge of the road anyway. It was so bad I was literally driving by iPhone: I propped it up in the cup holder and I glanced between the road and the blue dot and instructions on the screen telling me I needed to turn left in two tenths and then one tenth of a mile… and then I’d actually see the turn. Thank you, iPhone!

Had my iPhone been “smarter” (and frankly had I been smarter and thought more carefully about the route Maps had selected), I wouldn’t have ended up in these back woods in the first place. On the other hand, had I been traveling with another human and had that human been serving as the navigator on these side roads with the previous generation of navigating technologies– a road map– I am pretty sure I/we would have been lost in the fog until it cleared because there is no way we would have been able to spot those turns.  The iPhone got me into that mess, but it also got me out of it.

But back to the topic at hand, the annual Computers and Writing Conference, #cwcon, this year in Frostburg, Maryland. Let me get my main (really, only) gripe about the conference out of the way right at the beginning: I didn’t think a whole lot of Frostburg.


The SCOTUS decision on Obamacare and “Immediacy” (and digital rhetoric)

I’ve been reading the blogging carnival entries on digital rhetoric with some interest, hoping I could find a way to make a contribution.  I don’t know if this is really worthy or not, but here goes:

My 1996 dissertation was called “The Immediacy of Rhetoric” and it was an examination of the impact of emerging and largely digital communication technologies (particularly the Internet, but television and lots of other things fit here too) on the ways rhetorical situations work.  I use the word “immediacy” to suggest the double-edged sword of these kinds of situations.  On the one hand, they have the potential for closeness and even intimacy since so many of the usual filters of message, rhetor, and audience collapse.  On the other hand, immediate situations are also sites of chaos and confusion precisely because of this lack of filters.  That’s the very short-hand/elevator-pitch version.

Two other things I’ll mention.  First, somewhere in The Archaeology of Knowledge, Foucault says that when there are disruptive moments in history (I can’t remember the exact quote right now, but I think he and/or his translator even uses the term “rupture”), one of the first things we have to do to make sense of it all is to smooth over that rupture with some kind of explanation.  Or something like that.  And one of the hallmarks that signals the end of discourse regarding a disruption in particular and a situation in general is self-reflexivity about the way the situation itself was communicated.  This happens in mainstream media all the time.

Second, I’ve been thinking a lot lately about how memory works and the things I’ve read that suggest true multitasking is impossible– that is, we can’t really process two or more tasks at once, but we can shift between multiple tasks very quickly, often in a fraction of a second.  I haven’t worked this out in my own head yet, but I think this is one of the reasons why, even with all of the speed, intimacy, and chaos possible in various immediate and fluid situations, we still ultimately make meaning of a rhetorical event afterwards in the same way we make meaning out of pretty much everything else, and we still need, desire, and highly value a point of fixed closure– thus the ongoing role of articles, books, and similarly fixed vessels.  Interaction, exchange, and commentary are all fine and good as part of a process, but we value (in all senses of that word– as a cultural value, an intellectual value, money, etc.) the last and fixed word.

So, the reporting of the Obamacare decision as an example of immediacy:

I found out about the June 28, 2012 decision while driving through West Virginia, when Annette told me.  She had found out via her iPhone while reading Facebook.  So that’s a simple example of how current and future technologies change the potential to interact in rhetorical situations: absent these tools, digital rhetoric/immediate situations aren’t possible.  That might seem just obvious, though maybe not.  I am reminded of a discussion I had with my dissertation advisor about a chapter describing the context of the internet in 1996.  She didn’t think it was necessary because how much could change, really?  After all, there were already 30 million users and Netscape; how much further could this internet thing go?

In any event, tools matter a lot.  Of course, that isn’t necessarily uniquely limited to digital tools since the tools and technology of literacy, writing, printing (followed later by mass distribution technologies like affordable paper), audio recordings, film, video, etc. all have had significant technological/toolish impacts on how rhetorical situations in particular and rhetoric in general work.  Or even on what rhetoric is, since rhetoric was classically limited to live speakers.

A lot of humanities and comp/rhet types (academics in general, perhaps) downplay the role of technology in our thinking about how rhetoric (and just about everything else) works, I think because many/most humanities and comp/rhet types understand the theory a whole lot better than they understand the tools (or coding or “computers” in general).  I’ve read lots of stuff in the name of “digital rhetoric” (and don’t get me started on “digital humanities”) where tools and technology are secondary at best, sort of the bottle holding the wine, and technology merely alters the speed and potential proximity of components of a rhetorical situation.  But in terms of both digital rhetoric generally and what I mean by immediacy, that’s the whole point:  the evolving speed and presence potential of new technologies have been in some sense gradual and historic (the way that postal systems and then the telegraph changed communication in the 19th century comes to mind now), and in other ways radically fast (the way we find out about emerging situations/events via social media on ever-connected smart phones).  The tool is not the only thing that matters, but when it comes to contemplating “digital rhetoric” generally or immediacy in particular, it’s critical.  Without contemporary and future-looking computer and media technologies, there’s no “digital” in “digital rhetoric.”

One of the first things that happened when the decision was announced (and that I missed because of being in the car and that I recap here with hindsight and memory) was that CNN and FOX screwed it up.  As NPR reported, reporters literally ran with paper out of the Supreme Court so that the results could be digitized– that is, broadcast, posted on the web, sent out as audio (analog in how we hear it though digital in how it is posted)– by rhetors (news outlets) to the audience.  Dennis Baron had a blog post where he argued that this was an intentional misreading of the decision by these media outlets because they were misled and because “everyone expected” the decision to be overturned.  The media simply reported what they thought they already knew.  But I think the right answer is it was sloppy reporting facilitated/enabled by the speed of immediacy, the lack of any interpretation/mediation of events, and the collision of the analog decision (available to reporters first as dead-tree text) with the digital world.  The Supreme Court’s decision on Obamacare is of course complex, but it is not misleading.  The desire and potential to be the first to report the decision trumped the desire/need to actually be correct.

So again, immediacy is a double-edged sword.  Digital media technologies can break down the boundaries between audience, rhetor, message, and interpretation itself, which has the potential for great intimacy.  We can “be there” during riots in Egypt as part of the “Arab Spring” through not only major news outlets but thousands of participants in social media and video sites.  On the other hand, these immediate situations also have the potential for great chaos and confusion precisely because of the lack of boundaries that define interpretation and expertise.  Again, think of the chaos of the Arab Spring, especially through the filter of media, and the confusion of being flat-out wrong as was the case with the decision on Obamacare.

Speed matters a lot, too.  Clearly that’s what is at work with the misreporting from CNN and FOX.  Sure, this is far from the first time this sort of thing has happened, and “scooping” the competition has been the hallmark of journalism dating back to its most yellow days.  But the rapidness/simultaneity that is both a cause and a necessity of digital media makes the speed all the more important.

Very shortly after the reports emerged, the efforts at closure (and at sealing the rupture in the narrative) began. It’s an understatement to describe the decision as a surprise.  Shortly after Annette shared the news via her iPhone, I turned on the radio.  All I could find in the middle of nowhere in West Virginia was a conservative talk show, and clearly, the decision was an enormous rupture for these folks.  The fact that Chief Justice John Roberts sided with the liberal minority on the court was inexplicable to the commentators and callers.

But within hours, explanations to close and reconcile the rupture emerged (and they continue, too).  One theory was that Roberts’ decision to base the ruling not on the commerce clause but on taxation was in reality his effort to give conservatives ammunition in the fall elections, and conservative commentators immediately changed their attack from being about “individual freedom and choice” to “the largest tax in history.”  Quasi-conspiracy theorists suggested that Roberts changed his mind at the last minute, that Justice Anthony Kennedy was pressuring him to switch back, and that this last-minute switch is evident in the text of the various decisions.  Another theory suggests that Roberts made his decision in the name of protecting his legacy as Chief Justice in particular and the institution of the Supreme Court in general.  And so on.

Interestingly enough, I have yet to hear a commentator on either the left or the right suggest that Roberts made the decision he did based on his interpretation of the law. Given that a lot of law professors thought the law was constitutional before the decision, perhaps the real answer is that Roberts did his job as a scholar of the law and a judge.  But that account doesn’t explain how a conservative (Republican) judge could possibly side with a liberal (Democratic) policy, which is why I suspect this explanation has been largely discarded.

Neither speed nor the seeking of closure is uniquely digital, though I think both are altered by the digital in some interesting ways and are inescapable in digital environments.  Even as we celebrate the fluidity of possibilities in digital rhetorical spaces, we crave and value, in all senses of those terms, the closure, finality, and even authority that come from “print” (either the old-fashioned paper kind or the electronic new-fashioned kind exemplified by eBooks and electronic journals).

I haven’t thought this all the way through yet (or even partly through), so I’ll refer to two other blog posts I had on this.  First, there’s my reaction to seeing David Weinberger at U of Michigan talking about his latest book, Too Big to Know.  It’s not that I disagreed with Weinberger about how the nature of knowledge has changed as a result of the digital age and the internet and such.  That’s all fine and good, but Weinberger hasn’t earned intellectual and actual (e.g., money) capital from his blog; he earned it from his book.  The same goes for folks like Kathleen Fitzpatrick and her book about the future of academic publishing, Doug Eyman and his (hopefully) forthcoming book, Liz Losh and her excellent book, and so on.  Even the prize from the U of M Press/Digital Rhetoric Collaborative is based on publishing a book.

I don’t say this to dismiss digital rhetoric; I say it simply to point out that there still must be some unique value to books, given that that’s where most of the scholarship on digital rhetoric has appeared.

Second, as I mention indirectly in this post about Daniel Kahneman, everything we describe about rhetoric (digital or otherwise) is definitionally a memory.  This post– which I’ve been writing off and on for over a week now– is an effort to examine a specific event that I see as demonstrating characteristics of immediacy, but like any other analysis, it takes place in hindsight.  We cannot really think about digital rhetoric as we experience it.

Anyway, that’s a rambling take on what digital rhetoric means to me.  I’m anxious to get back to reading others’ thoughts on this.

 

If you're looking for an online grad course in computers and writing….

… I thought I’d throw out the chance for folks looking for grad school credit and/or a course in computers and writing to sign up for the class I’m scheduled to teach for winter term, English 516: Computers and Writing, Theory and Practice. There’s a description here; anyone really interested can email me at stevendkrause at gmail dot com.

It’s been kind of a weird late fall/early winter around here. EMU students have been traditionally slow to register for winter (what everyone else calls spring) term classes, mostly because even in the best of times, our students need to settle their accounts for fall before they can register for winter, and they often don’t register until right before Christmas or right before the semester starts in early January. But nowadays, the economy in southeast Michigan is in poor shape, and I think that and other things are having an impact.

Right now, it looks to me like my class has enough students to “make,” but I would just as soon run this class full or overloaded, and it occurred to me that there might be a few folks out there who are either interested themselves or know someone. I know that EMU graduate students not in the writing program can take the course with the instructor’s permission; I don’t know how it would work for someone not at EMU, but I’m sure we could make it work and it would potentially be a fun/cool experience for one and all.

Anyway, like I said, if you’re interested, let me know.

Computers and Writing 2008 CFP (and other conference thoughts)

I just found out about the call for proposals for the 2008 Computers and Writing conference in Athens, GA. It’s going to be May 21-25, 2008; proposals are due sometime between December 3 and January 10.

The theme of the conference is “Open Source as Technology and Concept,” which I might or might not ignore, depending on what I decide to propose. My plan, which may or may not be acted upon ultimately, is to drive down with Steve B. and Bill HD and to bring the golf clubs. Besides the many fine courses around Athens, I figure we can play on the way there and/or back. We’ll see how it turns out.

While I am certain (almost) I’m going to C&W this year, it does raise the “how many conferences” question. Originally, I was planning on going to the CCCCs this year, despite having my proposal rejected in a problematic fashion. But without going into any details right now, it is beginning to look like I’m going to a different conference in mid-April, and that has raised the “Is this Conference Necessary?” question for me. I am not quite the jet-setting academic alluded to in this article, but like most folks who are “active scholars,” I still go to a few conferences a year. Of course, at this stage, attending conferences is a lot less important than when I was a grad student seeking a job (and needing to have something to put on my CV) or when I was a tenure/promotion-seeking professor. I spent quite a bit of student loan money going to conferences simply because it helped me get a job, and I spent a fair amount of time at whatever conference in order to keep my job. Now? Well, the mileage isn’t quite the same. Actually the mileage is pretty much zero, career-wise.

So, since conferences don’t count for me much anymore, I get to make choices. And I think I’m going to choose Athens and choose to stay home from New Orleans. Though I could easily change my mind.

NCTE Aftertaste

Here’s a 6:18 video of my trip to New York City and the National Council of Teachers of English conference:

This little video is an unusual project for me because it’s very much a mixture of my “official” and my “unofficial” lives. Of course, conferences tend to be spaces where there is inevitably a blending of serious/scholarly things (giving papers, attending sessions, etc.) and not-so-serious/friendly things (cocktails with colleagues in the field, dinners, travel, touring, etc.). Anyway, since our session was about film/video making and writing, I thought I’d give it a shot.

I had a very mixed experience at the conference, frankly. On the one hand, I thought our panel was fantastic– great people, everyone was super-duper prepared, everyone had really interesting projects, everyone was really really smart and cool and all the rest, etc., etc. As I said in my NCTE prelude post, I went into this panel kind of by accident and as a result of the CSW movie I made. I mean, I didn’t have that much specific interest in making movies, certainly not as a writing teacher. But I came out of this session really jazzed about the possibilities I saw from my fellow presenters, about diving into Final Cut Pro (or Express) and trying my hand at GarageBand, etc.

And I also had excellent “not-so-serious/friendly” activities at the conference. I got to hang out with my former colleague and still fantastic friend Annette S. a bit, I met a bunch of new people, and I had a great dinner and great conversation with folks from the computers and writing world: Doug Eyman, Mike Palmquist, and Nick Carbone. Not to mention tourism in New York.

But on the whole, I’ve got to say that NCTE is not really my conference.

First off, we only had about 10 or so people in the audience. Now, normally, that wouldn’t be that big of a deal to me– I mean, let’s face it, that’s kind of par for the course at most conference presentations. But we were a “Featured Presentation,” we had an ideal time slot, and we had a hot topic– or so we thought. The only guess I have as to why the crowd was so small is that NCTE really is mostly about K-12, and those folks just aren’t interested in things like making movies in writing classes.

Second, the facility where this was being held was a problem. The amenities were, um, incomplete. We did a lot of planning via email before this session, and one of the concerns many in the group had was what sort of sound system we would have– or not have. Pete Vandenberg saved us on that score by bringing along a great system. We had assumed all along that we were covered with a projector to show the movies, but it turned out the projector the NCTE folks were prepared to provide was of the overhead variety. Fortunately, we did not have to perform our movies; I brought a projector along from school as a plan B. I could go on, though I think this is whiny enough. All I’m saying is that if conferences like the NCTE (or the CCCCs, for that matter) actually want to give opportunities to presenters to talk about technology, they need to provide some basic technology.

But enough complaining. I had fun, I made it home, I’m ready for this coming week. Sort of.

Catching up on a boatload of online readings

I have about 15 tabs open in my browser with things I have been meaning to read and/or blog about. I don’t want to spend a lot of time on this now, so here’s a whole bunch of stuff that might be useful and/or interesting later, mostly in terms of teaching but some scholarship, too: