Tuesday, November 18, 2014

Narratives and Brands and "Avatar Emergency"

First off, a warning: I gave blood yesterday (Monday) and, while I'm pretty sure I'm almost full-on back to my normal self, I could conceivably go on many a tangent in relation to this entire endeavor, this last blog entry of the class. Also, to come full circle, I had to order the first book for this class online, and so it was with this one. Circle of life, I guess.

Anyway, something that stood out to me in considering this blog entry with regard to Ulmer's Avatar Emergency was that he pointed out how "brand" in the sense that it's used in social media and indeed in everyday conversation these days is different from "avatar." I'm not sure that I believe that, or that the distinction can be so easily made. For one thing, when I first began surfing the 'Net, going to message boards about favorite obscure bands from the Eighties (my particular drug of choice in this was Joy Division, which after Ian Curtis' suicide became New Order, the group responsible for "Blue Monday" and most of techno music in the Eighties), I had to create an "avatar," which I understood as being "me but not me." That is, it was something promised in the early days of the internet, to "lose yourself" in the creation of an online identity that shared some tertiary traits with you but which could be enhanced, downgraded (in case you didn't like yourself but the world saw you as confident or something), or just tolerated by the others. Said toleration could be exhausted if you were an asshole online (as, regrettably, I sometimes...okay, often was). We still see it today, with Internet "trolls" who, far from ever offering anything constructive, simply feast upon the insecurities of whomever they're pursuing, posting comments and other such ephemera to take down their intended target. I think that's why I posted the link to the Gamergate story from Deadspin (and also, to help increase my grade in class via Facebook group postings), a way of making amends for my own troll-like behavior in the past by pointing out more recent instances of it by people (usually men) who really should know better at this juncture. The internet is about twenty to twenty-five years old at this point. It's time it started acting like an adult.

Another thing: the terms "brand" and "narrative" have been in use a lot lately (I watch ESPN a lot, and both come up whether discussing a particular player's image or how a story plays out), and what I think merits discussion is whether this is a good thing or not. Branding yourself (figuratively, at least) as a concept has been around for decades, but it's only recently, it seems, that we're more open about discussing it. Branding is a term from marketing; I once had a job interview with a marketing firm in Greenville, and when the question of "what is marketing" came up on the application I should've just written "branding" (instead I wrote something nonsensical, in retrospect. No wonder I never heard back from them). "Narrative" is borrowed, to my mind anyway, from literary studies, that is, the basis of fiction as an art. If the narrative doesn't work, the fiction falls apart. This is a lesson many a screenwriter of various late-night Cinemax movies never learned (not that narrative was ever the motivating factor behind such films, or indeed action films, which leads me to think that the beats both porn and action strive to hit are so similar as to merit a discussion of what each owes the other and what each gets from the other. Told ya I'd ramble).

I think we hear a lot about brand and narrative today because, in a sense, these are analog responses to a digital future. These are concepts that have been around forever (or at least seem to have been) and the internet, while more than old enough to drink, vote, and die in a war, is still really young compared to more ancient and established media. To illustrate both terms, allow me to mention two people who, really, should never be mentioned in any academic setting: Kim Kardashian and Jameis Winston. God help me, I know, but hear me out.

Kim Kardashian is branded as a sexy woman whose main attribute, whose sole contribution to Western society (apart from her sex tape or her reality show) is her buttocks. They are ample, indeed, and she has literally made a career of showing them off (most recently for a magazine desperate for the publicity). We know (or "know") that there's nothing of substance going on behind that face of hers, that her brain is simply a repository of Kanye West lyrics and "how can I show off my ass this week" queries. That is her brand, and we (even those of us who are sick of her) buy into it. But the narrative, such as it is, is flawed: clearly this woman has the intelligence and self-awareness to know that what sells (this image of her as a sexualized woman not afraid to flaunt it in various states of undress) is what keeps her business (the business of being Kim Kardashian) going. She is much, much smarter than we publicly give her credit for. Her skills of manipulation when it comes to the media (even when they're talking about how sick they are of her, and what kind of message she is sending to the kids who idolize her) are worthy of any discussion of media strategies. In ten years, her celebrity will most likely fade (then again, people said that back when she got famous, and she's still in the news). At any rate, someday she'll be sitting on a huge pile of cash because we all bought into the narrative that her presence merits discussion.

Jameis Winston comes to mind because, well, the book we're studying is written by a professor who teaches at the University of Florida, but the color scheme on the book cover (orange and green) suggests associations with the University of Miami, so the quarterback at Florida State seems a fair topic. In terms of brand, it's this: he's never lost a game as a starter (and the way this season is going, he never will), he's got talent on loan from God (to borrow a phrase the ever-so-humble Rush Limbaugh applies to himself), and he's got some off-the-field issues. The narrative, however, is this: He's a villain because of his off-the-field issues, and every win his team experiences is a slap in the face of decent society, and oh boy what about his arm (but too bad it's connected to such a "thug"). Now, taking into account the fact that the media (sports media in particular) needs someone like Winston to galvanize discussion (in case the games themselves don't live up to the hype that ESPN and other networks invest in such displays of brutality), is any of this fair to Winston as a human being? It may or may not be, depending on the truth surrounding the allegations against him when it comes to women (recently, beloved entertainer Bill Cosby has himself been the target of rape allegations; whether these allegations are true or not will affect the narrative and brand of Cosby as "America's family man," and as a child of the Eighties, I hope it's not true, but as a child of post-OJ celebrity exposés, I fully expect there to be some truth to the allegations). Each competing narrative, each competing brand, separates us further from the truth of the individual, the truth of the actual person. Online bullying exists because some people can't separate themselves from their online selves, can't pass it off as just people picking on their avatars, brands, narratives, and not themselves. Maybe in time, such building up of narratives and brands can extinguish the flame of online bullies and trolls, help mediate between those being bullied and those doing the bullying.

Roger Goodell could certainly do with some re-branding or fashioning of a new narrative, because he's damned if he does (suspends Adrian Peterson for the year) and damned if he doesn't (suspends Ray Rice for two games, initially). All of this affects the real Roger Goodell (or if you believe South Park, the Goodell-bot), but what we see is the brand of "protector of the shield" in awkward, embarrassing press conferences. We live in an age of "no publicity is bad publicity," but I wonder if that's such a good thing. I feel outrage at Winston when I see him on TV, decrying what he's alleged to have done...all while being a fan of Woody Allen (alleged child molester), John Lennon (beat his first wife, neglected his first son, may have been under Yoko Ono's control), Hunter S. Thompson (decades of drug abuse), and so on. Morality might not have a place in the brands or narratives we construct today.

Also, narratives depend on the dominant group in charge, so maybe it's a good thing to have multiple narratives of events. I was joking around on a friend's Facebook page about how, in a certain light, Luke Skywalker is a mass murderer when he blows up the Death Star. It sounds ludicrous, of course, but that's just because we've bought into the narrative that Lucasfilm promoted. Always interesting to think about it in different terms, I think.

At any rate, I have enjoyed this class (and even come away understanding more about the subject than I initially thought), so if this is the end, it has been a trip.

Tuesday, November 11, 2014

Finding Augusta...Oh, There It Is

I want to start this post with a story from my undergrad days: I was a Film Studies minor, and as such I was encouraged to attend a series of screenings by documentarians as a way to both broaden my mind and get some extra credit by writing up said screenings for one of my film classes. One such screening was a film called "All Rendered Truth," which is what one of the subjects of the film (a bunch of people who made art out of everyday and neglected objects, I guess they fall under the rubric "folk art" though I'm not sure if that term was in popular usage back then) said that "art" stood for, and I think it's a fantastic definition that holds up. Anyway, the two documentarians were there and took questions after the screening. One guy, notebook in hand (I'm guessing he wasn't there out of interest in the subject, any more than I might have been), asked "how did you find these artists," and the two guys described how they usually drove around, fielding phone calls or inquiring at local spots about artists in the area, and so on. Then the same guy raised his hand and repeated his question. A buddy of mine whispered under his breath, "they already answered your question, dumbass."

I bring this up not just because it's a funny story, but because I think it has a relation to the book Finding Augusta, in that the person at the center of the tale (Scott Nixon) is an enigma wrapped inside a riddle wrapped up inside another enigma (or something like that). We get some basic biographical detail (he was an insurance salesman who lived in Augusta, Georgia, and he liked to make films about the various Augustas or variations on "Augusta" that he encountered on his travels; he assembled that footage into a brief film called The Augustas but never really documented why or to what ends he did so). I keep wanting to ask not just "how did he find these Augustas," because clearly he looked at maps for the most obvious ones (like Augusta, Maine, for instance, or other Augustas throughout the country), but also "why." It's an answer that's elusive, for sure (there's no sled emblazoned with the word "Augusta" to be found here, waiting to be heaped upon the funeral pyre of Scott Nixon's legacy the way Charles Foster Kane's "Rosebud" was). It's frustrating.

The book itself deals with issues both related to that frustration and separate from it: Cooley spends some time talking about the construction of cellphones as something that "fits easily" in your hand, thus never apart from you. As anyone who's seen people bump into a street sign while checking their Twitter feed can attest (and I did see that sometimes, when I was working downtown and we'd get a break around ten in the morning and at three in the afternoon), the concept of a phone as being *not* an extension of yourself is becoming more alien now. I don't own a smartphone myself, so the QR codes for the Augusta App were useless to me (and I thought it interesting that, in a book dealing with the idea of surveillance as a tool of governance, we were being asked to add something to our phones that facilitated surveillance by the author of this book. I'm not sure if that was the case; I defer to anyone who did add the Augusta App to their phone). But I think it's interesting how Steve Jobs (whom I've beaten up repeatedly in class and on this blog, but only because deep down I respect the fact that he changed our lives with his products, however ambiguous my feelings about those products may be) wanted the iPhone and other Apple products to avoid the "sleek, cold" designs of his competitors, when I think the case can be made that Apple products are by definition sleek and cold now. You can personalize the iPhone with a colorful case, but the basic design is sleek and white (or silver) and futuristic in the way old-timey sci-fi movies imagined the future to be (aerodynamic spaceships with no "flaws" evident). I could be in the minority here, but I think Apple has gotten away from being "warm and fuzzy" in their designs. We have met the enemy, Apple might say, and they are us.

Also, the concept of the phone fitting the ideal hand, without taking into account variations or mutations (or the simple fact that not everyone's hands are the same size), is interesting to me. Does an iPhone really fit your hand well, or does it feel too small or too big? I think of the Seinfeld episode where Jerry's girlfriend has man-hands; she looks perfectly normal except for the fact that, when she tries to feed him at the restaurant or stroke his face, we cut to Bigfoot-sized paws imposing their will on his face. Also, and this may just be me, but the concept of "fitting in your hand" brought to mind the M&M slogan (hey, last week we talked about Eminem, so it's only fitting) "melts in your mouth, not in your hands." Perhaps in some way that only demented grad students might consider (raises hand, calls attention to self), the phone-in-hand concept is the inverse of that: it melts to your hand, becomes a part of you, so much so that you can't imagine living without it. Sounds bizarro, I know, but as I'm typing this I'm considering checking my phone to see if anyone's texted me (or if, more likely, my cameraphone has been operating all this time without my knowledge or initiative, taking pictures from the inside of my pants pocket that show up as black spaces), because I've got it on silent, so as to avoid disturbing anyone. However much I might have once been snarky about such ideas, the fact is that my phone is a part of me, even if I don't want it to be.

I was kinda hoping that (as with Digital Detroit and the author's references to Bob Dylan, Lester Bangs, and other tangentially-related Detroit-pop-culture ephemera) we'd get a chance to discuss James Brown, the most famous son of Augusta, Georgia, but he doesn't come up because really the book isn't actually about Augusta, or Augustas. It's about this network that we've all bought into, one that has become corporatized and overrun with services and apps that "offer" freedom but really exist to track us, our habits and search histories and buying trends and fetishes that we don't tell anyone about and so on. In a university where the email is through Google, the software provided by Adobe, and the drinking products by Coca-Cola, it's not just the web that is a corporate wasteland, full of signs (literal and figurative) that we do not have agency over our own actions. Governance might very well be best when it governs least, but try telling that to either side of the aisle (for all their talk of "smaller government," the Republicans under Bush probably caused the most significant growth of bureaucracy since the Second World War. Now most of the top administration officials are yukking it up on Fox News, talking about Obama being the second coming of Hitler or something. Go figure). In the future, we're gonna have to guard our shit a little better, I guess. But don't worry, there's an app for that....

Saturday, November 8, 2014

Lipstick Traces (re: Digital Detroit)

It really didn't occur to me until after class Wednesday (and I was dissatisfied with the entry on Wikipedia about it, which is why I'm not posting this to the class's Facebook page), but in a lot of ways Digital Detroit reminded me of Greil Marcus' book Lipstick Traces: A Secret History of the Twentieth Century, which came out in 1989/1990-ish and which I discovered on my local library's shelf a few years ago (it has since been deleted from the collection because I was the only person who ever checked it out, apparently).

The book traces the various cultural and artistic movements throughout the century that, in Marcus' view, left very little evidence of their having existed. A lot of the work was "in the moment," and the moment was fleeting (as in the case of the French Situationists or the May 1968 revolts) or ignored in the face of more pressing concerns (the Dada art movement in the midst of the First World War). All of these things he explored as a way of talking about the punk rock movement of the mid-Seventies, specifically the Sex Pistols and their brief life (from about the end of 1975 until the American tour in January 1978, after which Johnny Rotten left the group and manager Malcolm McLaren tried to continue to cash in on the uproar over punk, but then Sid Vicious died in 1979).

Digital Detroit has an abundance of pop-culture references, ways in which the author thinks about the city of Detroit through the artifacts he uncovers (a Bob Dylan concert in town in 1965, the author reading Creem before even thinking about coming to Detroit, etc.). In much the same way that Jeff Rice tries to connect this cultural ephemera to his conception of Detroit, so Marcus tries to connect or suggest connections between the various art movements he cites and the brief flicker of punk rock in its initial stages (back when the movement wasn't yet codified by leather jackets, breakneck rhythms, and odd hairstyles). As John Lydon (the former Johnny Rotten) said in his memoir, the reason they put safety pins in their clothes was because the clothes were falling apart and they couldn't afford new ones, not as a fashion statement.

I think the argument Marcus was making (and which is echoed by Rice) is that these movements, however brief or "insubstantial" or unimportant in the grand scheme of things, did leave their traces in the way we relate to some certain things (like how Rice relates to the Maccabees building, once the site of a secret society whose exact purpose might not be evident anymore). It goes back to the idea of connectivity, that nothing is ever really "lost" on the internet. Marcus recently came out with a new book, The History of Rock and Roll In Ten Songs, which talks about songs and artists who might not be obvious contenders for discussion in some people's minds, but which show aspects about the history of popular music in the last century that we should pay attention to. I haven't got the time right now to read that book, unfortunately (I did read his entry on Joy Division's song "Transmission," at least, before realizing I needed to put more time into readings for my classes, and so returned the book to my local library), but I like what Marcus does in all his work (highlighting things that we might have missed the first listen or so, the first encounter with a piece of art or literature or film). Like in the case of Rice, I might not always buy that such a connection exists in the works Marcus cites, but it's never a dull read.

Monday, November 3, 2014

Invisible Cities

In 2012, I got the chance to travel to New Orleans, Louisiana, for a Jeopardy tryout that was being held there for folks who did well enough on the online test back in February of that year to be considered for the show. It was August, so New Orleans was particularly muggy even early in the morning (I remember going outside one day at about nine in the morning and being alarmed at how hot it already was), and from our hotel room in East New Orleans we had a bit of a journey to make to get to Canal Street, the main street of the city. Now, I'd been to cities before (New York once in 1997, Washington DC a couple of times, Atlanta enough to know that I didn't even want to think about living there, and Greenville if you want to count it as a "city" compared to the other ones), but New Orleans was different. For one thing, it took a hella long time to drive there (duties for driving split between my sister and me, my future brother-in-law along for the ride as it was a month before the planned wedding and this was the closest they could afford to a honeymoon). For another, we were coming to a city that, between the three of us, we knew almost nothing about. I had read The Moviegoer years back (wasn't yet in my Walker Percy appreciation phase; the trip to New Orleans helped), and also A Confederacy of Dunces, but apart from jazz music and Hurricane Katrina, I was woefully ignorant about the Crescent City.

It's interesting how, in Digital Detroit, Jeff Rice addresses the idea of cities having narratives, and how those narratives affect our perceptions of place, because when I got to New Orleans there was still evidence of Katrina's destruction, either manifested in ruins and abandoned buildings, or in the psyche of people we met, the people whose livelihoods depended on bringing people in to see their city not as a casualty but as a phoenix, rising again from lowdown no-good times. In order to get to Canal Street from our hotel, we had to drive past a crumbling mess of what had once been a building, lorded over by construction workers in hard hats working through the rubble with power tools and vehicles; it didn't matter if the building was demolished because of something non-Katrina-related (after all, we visited right before the seventh anniversary of the storm; it would seem an awful long time for something to be a heaping pile of rubble without just now being sorted over), because it came to symbolize in my mind the work that still needed to be done, to rebuild not just the city but its people. The narrative of New Orleans had once been as the birthplace of jazz, as the home of Mardi Gras, as a destination for sin and depravity deep in the conservative Bible Belt; now it was the city whose destruction at the hands of an unimaginable force uprooted more than a third of its residents, turned its sporting arena into a ghetto for the unwashed and unloved, and merited little more than a cursory glance by our then-President, who proceeded to keep on flying and ignoring the problems of the city that neither he nor his family gave a damn about because they weren't his kind of people (yes, I just referenced Kanye West in a Digital Humanities context. I am ashamed).

Detroit is another victim of narrative, this one of racial disharmony and corruption on all levels that has left the ordinary citizens as lone survivors of an apocalyptic terror that really took hold in the era of Bush and bailouts, but whose seeds of destruction were sown long, long ago. I read Charlie LeDuff's searing look at his hometown over the summer (definitely not "light, fun beach reading"), and I can grasp a little of what makes Detroit unique in the annals of "failed cities." Unlike the ghost towns of the American West, Detroit was tied to something a little more labor-intensive than gold-mining (manufacturing cars for the world), yet it fell prey to the same forces that doomed the silver and gold towns that once dotted the landscape between here and San Francisco: people found a better way to do what Detroit did best, and the demand went with it. Where the ghost towns ran out of precious minerals, Detroit ran out of interest from the outside world in what it was selling. Henry Ford would be turning over in his grave right now if he could see Detroit.

Which isn't to say that Detroit didn't deserve it, in some sense: the concept of manufacturing on an assembly line, with little or no allowances made for the workers, is horrifying because of its automated nature (and attractive to the very same capitalists we see celebrated in glamorous profiles in business magazines, all because a lot of them manage to outsource that kind of labor far from prying eyes). When a city, when a factory town, is tied to a mode of production that is ultimately doomed, with no allowances for change, it's hard to muster much in the way of sympathy. This isn't to say that I want to see Detroit razed to the ground, left to crumble like an ancient Mayan civilization, the decals on the factory doors serving as hieroglyphs for future scholars to puzzle over. But there's a good chance it'll happen.

Rice mentions Italo Calvino's Invisible Cities, a work that I'm familiar with thanks to my World Lit class when I was an undergrad. In the book, you have a series of cities described (some briefly, some in detail) over the course of the work (it's hard to classify: a work of fiction but not necessarily a novel, except maybe in a modernist sense). When I went to New Orleans, I had some experience with cities real and imagined (I was an avid reader of works that took place in big cities, because as a small-town boy I had to believe there was a bigger, more exciting world out there than the one offered by my humble home town), but this was one of the few times I'd been in a real city, and it was overwhelming. I remember beginning to walk down Bourbon Street on a Sunday afternoon, all the sex shops and racy souvenirs (and all the people walking down the street), and I had the very "small town" response that I would've thought I was too sophisticated for: I felt out of place. I made a joke to my sister later, about how all those people should've "been in church," but the truth is a lot of them probably *did* come straight from church, at least the well-dressed ones.

Growing up in a small town is frustrating when you have dreams of greater, grander things, but I guess it helps you appreciate the majesty of big cities more than you might if you grew up right in the heart of Manhattan, New Orleans, or Atlanta. Walhalla, my home town, will never win any claim to being a sophisticated city; you could drop the main drag of town, the street where most of the businesses are located, right in the center of Manhattan and never touch Harlem to the north nor Wall Street to the south (though you'd have an invasion of hipsters from Brooklyn because of all the antique shops that Walhalla has. They'd say they're being ironic but I think they'd be genuinely thrilled at our stock of vinyl records and crappy, broke-down toys). I've always had a problem envisioning cities bigger than Walhalla, more contained, more spread-out but not in a country way (i.e., you have parts of Walhalla that you get to only by driving down lonely-looking back streets full of grass, trees, and other non-urban trappings). I've been to big cities that stretched on forever, that included neighborhoods I might be best advised to steer clear of (either because of my ethnicity or because of my gullibility when dealing with someone looking to separate me from whatever cash I have on hand).

When I went to the University of South Carolina, I was right in the heart of an urban environment. I lived in a dorm that was in the middle of campus, yet not far removed from the downtown section, right around the State House; there wasn't much beyond a news stand, a CD shop, and some boarded-up buildings, but I remember taking epic walks around the area surrounding the State House (usually in daytime, though I did like a night stroll from time to time). I didn't drive yet, but there was no need; most everything was within a short (or long) walk from my dorm. There were parts of Columbia that I couldn't get to by walking, of course, but that's what I had friends with cars for. I'm sure a lot of the time, people who worked downtown or knew some of the dangers posed by a big city (even one like Columbia) wondered who the crazy-ass white boy wandering around was, but truthfully except for a few times I never really felt any danger or unease. To this day, my narrative of Columbia is based on those jaunts I took, especially when I should've been studying for class instead, but I don't regret it, really.

So if a city is a database, its various inhabitants can create a narrative to suit their purposes (there's a little toy sketch of that idea after the list below). For me, my narratives are thus:

New York: imposing, overwhelming, exciting, not too terrifying to be up in a tall skyscraper (this was pre-9/11).

Washington DC: spread-out, old-fashioned architecture, good touristy sights (Air and Space Museum), powerful but folksy.

Atlanta: murder to drive through, no idea where anything is, a nice place to visit but you couldn't pay me to live there.

New Orleans: reminded me of what a small-town kid I really am, exhilarating, bewildering, magical, scary, fantastic, I'd like to go back (but not in August; way too hot that time of year).
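Since I can't leave the "city as a database" metaphor alone, here's a toy sketch in Python of how I picture it. This is entirely my own invention, not anything from Rice's book, and the entries are placeholders: the city is just a pile of facts, and a narrative is one inhabitant's way of filtering and ordering them.

# Toy sketch: a city as a database of entries, a narrative as one
# person's selection and ordering of those entries. All data invented.
city_database = {
    "New Orleans": [
        (1718, "founded by the French"),
        (1895, "the birthplace-of-jazz story starts to take hold"),
        (2005, "Hurricane Katrina floods most of the city"),
        (2012, "a small-town visitor walks Bourbon Street, feels out of place"),
    ],
}

def narrative(city, keep):
    """One possible story: filter the entries, then put them in order."""
    entries = sorted(city_database[city])
    return "; ".join(f"{year}: {event}" for year, event in entries if keep(event))

# My narrative keeps the storm and the personal stuff; a tourism
# brochure's filter would look very different.
print(narrative("New Orleans", lambda e: "Katrina" in e or "visitor" in e))

The database doesn't change; only the filter does, which I suppose is the whole point.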

Tuesday, October 28, 2014

Life is a game/Game is a life

I was all ready for more unnecessary italicizing of ideas that seemed important when we moved on from Manovich to Ian Bogost's "Unit Operations"...excuse me, I apologize.

Anyway, Unit Operations: An Approach to Videogame Criticism has proven to be much less hateful than Manovich, precisely because it's about something that I think we can all relate to and something that was made possible by software...sorry, won't happen again. Bogost explores the idea of videogames, and how they relate to other media, in a pretty interesting way. And the attention is well-deserved.

We as a society are slow to embrace the idea that something we grew up with (and something that is so seemingly "current" that we're at a loss to consider that it has a history beyond our chronological introduction to it) could be worthy of scholarly discussion. Well, maybe it's just me; I never found myself thinking (in the midst of screwing up yet again in trying to get beyond the basic beginner level in Super Mario) "hey, I wonder what this says about society, and about our interaction with the game versus our interaction (or lack thereof) with other media." Cut me some slack, I was a pre-teen.

But I did grow up with videogames; we had the old-school Atari, and I recall fondly the badly pixelated thrills of games that required a joystick and which featured one button besides the one on top of the joystick, and if there was such a thing as "cheat codes" back then, I didn't know it (I've always felt like cheat codes were, ahem, cheating: you cheat the game, and the code cheats you by reducing your enjoyment of the game to figuring out ways to beat it that go off the beaten path. I was a bit more law-and-order then, I guess). There was still a filter, of sorts, between the videogame onscreen and your real life, the one going on around you (and the one in which cheat codes were probably used against you, to be honest. It was the Reagan/Bush era, and the nostalgia/homoerotic love-fest Republicans have for that time bewilders me). The concept of an "immersive gaming experience" consisted of Tron, which is confusing as hell when you're a little kid and what you're watching is basically Jeff Bridges in a suit made of Nite-Lites. But nowadays, of course, the game interacts with your real life, in ways that would've seemed impossible to artists back then. I have never played a Wii (there's a fantastic Key & Peele sketch about that, it veers into NSFW territory towards the end so I didn't post it to the group's Facebook page), but I have played Rock Band: it's reducing the musicianship of people I admire (and Gene Simmons) to controls on a panel, albeit a guitar-shaped one. The experience of playing live music is turned into a game in which you collect points based on how "well" you "played," and the quotation marks are appropriate. However, the italicizing could be considered excessive on my part.

I thought the discussion of non-game games (i.e., simulations like The Sims or Star Wars: Galaxies) was interesting because those games seem to re-define the purpose of videogames (i.e., the escape from reality that is such a driving force for much of the stereotypical gaming set, the ones that aren't good with basic social interactions). Games have gone from fanciful journeys (hero-quests, to borrow some Joseph Campbell because I too have seen Star Wars and will get around to The Hero With a Thousand Faces at some point) to almost blah recreations of the real world (or in the case of Galaxies, a mundane rendering of what was originally a more cosmic idea). At what point does the idea of "life as a game" cross over from "wow, this is exciting, I get to collect points and do things in real life that I could only do in games" to IRS Audit: The Game, in which you have to navigate the legal and fiduciary responsibilities that come with real-life situations?

Scott Pilgrim vs. the World, to my mind, is the best of the "videogame brought to cinema" movies because it's not actually based on a game; the source material is a graphic novel (which, like videogame movies, is a hybrid of two things: the comic book and the novel-like narrative structure, because a lot of comic books are one-and-done affairs while a graphic novel has the potential to grow over many issues. This is a gross simplification of both comic books and novels, of course, but it works for the example). In the film, Scott has to "battle" the ex-lovers of his current flame, Ramona, in videogame-style contests that recall for me the battles one would encounter in Mortal Kombat (all they needed to complete the illusion was the final "Finish him!" that confirmed MK's bloodlust in the eyes of concerned parents who, as usual, overreacted to something they didn't understand, much the same as with violent rap lyrics or over-the-top slasher films). The rules of real life (you can't go around fighting people, when they die they don't increase your own chances of living nor turn into coins) are broken throughout the film, because otherwise the film is just a typical romantic comedy with a pretty good soundtrack. In a game world, Scott can defeat the evil exes and inch closer to becoming the kind of guy Ramona can live with. Complications arise, of course, as in games. But the overall feel of the movie, hyper as it is, suggests a videogame with only one outcome: Scott gets the girl. In videogames, there are multiple ways the narrative can end, and even points where it can end before you reach the supposed conclusion (as I've learned when trying to tackle Tetris, you can't really win, you can only hope to keep going).

I've never really looked at videogames as being "worthy" of such critical approaches, but that's not because I overwhelmingly think they don't deserve it. I'd just never considered it, and while I don't buy into the premise that they are always worthy of such discussion (c'mon, Donkey Kong could probably be read as a Marxist text on the fetishization of empty barrels used to crush Italian plumbers, but that's a really awkward stretch), I do think it opens up a new world for serious discussion. I think, in true High Fidelity fashion, that we can be defined by our tastes in pop culture (though in HF it's more about individuals, not groups), and as a group we can be defined by the games we embrace as much as we can the cinema, music, or (increasingly less likely) literature. Videogame studies also embrace a notion that I think we've been ignoring throughout the course: that the humanities aren't just literature. There's a philosophy behind even the simplest games, and I think we can try to discuss it (ahem, sorry), discuss it as seriously as we take the philosophy behind Moby-Dick or Star Wars. I just hope Manovich isn't there to italicize everything...

Tuesday, October 21, 2014

Manovich, Manovich, Manovich!

I started Software Takes Command thinking, "I can handle a book that has a fifty-page introduction, important or perceived-to-be-important ideas in italics, and asks the questions that not too many people ask anymore (like 'how does Word work?')." I'm not so sure now, but I have some definite ideas about what this book tries to say.

First off, let me say this: Apple (helmed by the evil Steve Jobs, even after death) and Microsoft have made their living off keeping us away from the actual viewing of hardware and software, i.e., "how the sausage is made." Apart from that one Apple computer in the early part of the 2000s with the inside portion visible through different-colored bodies, both companies have made it a priority to keep the user at a safe distance. This is understandable from a business sense (unless you want to take apart the computer or product to figure out how it works and how to steal said design, you're not likely to succeed in said intellectual theft), but it also seems to be the real-life Revenge of the Nerds that the films only hinted at. The IT guy is the most important figure in any company, because he (or she, to be politically correct) knows what to do when everyone's computers start misbehavin'. Sleek designs and "wowee zowee!" graphics on our phones (well, not mine, I'm the one guy in America who still has a flip phone) keep us from asking the pertinent question "how does this work?" And that's usually how progress rolls.

Think back to all the various "new things" technology has given us just in the last half of the twentieth century, and how "new" and "exciting" they once were compared to how they are viewed now, seeing as they're more commonplace today. I think Kubrick's 2001: A Space Odyssey plays differently today than it did in 1968, just because we're immune through media saturation to the wonders of outer space presented in the film. If today, a filmmaker tried to get away with a space shuttle docking with the International Space Station that takes up a good chunk of screen time (not to mention being set to the "Blue Danube Waltz"), he'd be laughed out of Hollywood. Trains were the space shuttles and rockets of the nineteenth century, as Manovich alludes to; gradually through constant use, the novelty wore off and we stopped asking "how does steam cause the train to run?" Luckily for us (well, some of us), Manovich is here to ask the questions in regards to software.

Some of his insights are worth considering, but I feel like we have to slog through an awful lot of "yes, I know how that works, but thank you for going into exhausting detail for me." I don't want to bash Manovich (I just love that name, it sounds like some crazy-eyed inventor of the nineteenth century), so I'll restrain myself and move on to the next thing that the book got me thinking about.

In the last chapter (which, full disclosure, I haven't finished as of this posting), Manovich describes the incorporation of software advances into film-making, and here's where I get to show off my Film Studies minor (money well spent, state of South Carolina!). What was interesting to me was how Manovich highlighted the use of computer-generated images (CGI) in the 1990s, when the idea was both to leave the audience stunned at how clever and amazing said effects were and also not to overwhelm them with questions of "how did they do that," i.e., break the audience's willing suspension of disbelief.

Fiction, whether in film or any other medium, relies on suspension of disbelief: yes, we know inherently that we're simply seeing still images speeded up to suggest movement on the part of human (or cartoon) characters, just as we know the words on a book's page don't really mean that this person we're reading about (be it Captain Ahab, Beowulf, or Kim Kardashian) has ever actually existed. There have been movements to call attention to such artifice, of course, and each time this is done those practitioners of whatever "shocking revelation about the nature of fiction/cinema/art/whatever" pat themselves on the back and think "gee, weren't we clever?" But the truth is, art needs that disbelief to both be present and also to be suspended, at least until the audience is lured in and can't turn away. And movie effects have been a large part of that.

In the olden days, for instance, a monster in a horror film was just some poor schmuck (probably the second cousin or brother-in-law of the director) stuffed into a suit and told to walk around with a chainsaw or axe in hand to threaten the idiotic teenagers who thought it'd be a good idea to spend a night in the haunted house/abandoned campground, etc. But effects that seem tame today could sometimes be revolutionary at the time, pointing to new avenues for artistic expression (2001 helped George Lucas realize his vision for Star Wars). The xenomorph in the original Alien (1979) was just a guy in a suit, but because of the audience's willing suspension of disbelief, we could believe that this creature was real (and that we wanted nothing to do with it). With the advent of CGI, it was believed that more realistic monsters and creatures could be imagined, without the artificial nature of the creature in question distracting the audience. Of course it meant that actors were usually reacting to something on a green screen, but the poor brother-in-law of the director got to sit down and relax until someone needed to make a coffee-and-cocaine run for the crew.

But as Manovich points out, there's been a shift in the thinking: movies like Sin City thrive not on the realistic integration of CGI effects but in the very highlighting of that artifice for dramatic effect (see, the book had an effect on my writing!). By calling the audience's attention to the very artificiality of what's onscreen, they play with the notion that disbelief needs to be suspended by realistic action, in a sense.

Not to exhaust the point, but consider a film that's been made and remade a couple of times: The Thing (1951, 1982, 2011). In the original version, directed by Howard Hawks (yeah, I know the credits say Christian Nyby, but it's a Hawks movie through and through), the alien creature that threatens the camp of intrepid Arctic scientists is a giant carrot, basically, and played by James Arness as a walking, grunting "Other" that can be defeated using good old American know-how (and electricity). In John Carpenter's version, and the "prequel" that came out almost thirty years later, the alien is able to assume the identities of the guys in the camp, one at a time, and form a perfect copy that is convincing up until it's revealed as such. In this case, special effects of the real-world kind play the bad guy or guys: characters who are revealed as "Things" stretch and twist into grotesque manifestations of your worst nightmare when it comes to having your body torn apart. The most recent version (which I haven't seen much of, beyond the trailer) does this as well, but through the "magic" of CGI. We have the classic attempt to integrate CGI effects so that we both notice them and aren't so distracted by them that we forget what's going on onscreen (at least that's the filmmaker's hope). In that sense, the 2011 version is then not only a return to the premise of Carpenter's version, it's also a return to the "antiquated" idea of CGI both being integrated into the film and thus noticeable. Once again, state of South Carolina, your money was well spent on my Film Studies minor.

I think, as someone who's interested in art, it matters to me whether CGI effects dominate a film (like Sin City), calling attention to themselves, or try to blend in (the most recent batch of Star Wars films) without necessarily doing so. No one is saying that having an actual person (again, the poor brother-in-law of the director) in a monster costume is infinitely better than having that same monster rendered by CGI (well, some people are; I think it's a case-by-case basis, myself), but software continues to redefine the logics and physics of film-making, and it will be an interesting time to view what sticks and what falls by the wayside in terms of computer effects.

Monday, October 13, 2014

How We Think

I have never done illegal narcotics, nor too many legal narcotics, in my lifetime. There was that one time I passed a car where the occupants inside were obviously smoking a joint (my work buddy that I was walking by the car with helpfully pointed out that that's what pot smells like) and got a little contact high, but beyond that and the occasional alcoholic experience, I haven't much time for the drugs, as the kids might say. It's not that I have a moral stand necessarily against an individual person's right to enjoy a hit of reefer every now and then, and I honestly think the drug "war" would be a lot less wasteful if we legalized some stuff that isn't legal now (I'm sure the drug cartels will find some other way to fund their operations, perhaps by branching off into highly addictive coffee beans). I just know that my mind is weird anyways, without any outside help.

How We Think by Katherine Hayles is a bit like my explanation of why I don't do drugs, in that it chronicles in the latter stages (heh-heh, "chronic"-les...sorry) the rise of more computer-friendly fiction. There's a huge argument going on in the book about narrative versus databases (i.e., the stuff we study as English and humanities majors versus the stuff we use to store the info we've gathered), and I have to say that I was intrigued by the discussion of the two works that Hayles cites (The Raw Shark Texts and Only Revolutions) because I tend to gravitate towards the odder end of the literary spectrum, if only to dip my toe in with some authors while embracing some of the more fantastical writers (Pynchon, some DeLillo, William S. Burroughs, Vonnegut). I don't always understand what I'm reading (at least I'm honest), but I find the journey enjoyable in and of itself.

I couldn't help but think of Burroughs' work with the "Cut-Up Trilogy," three books that he fashioned together out of cut-up words and phrases from other publications, when I started reading about Raw Shark. I haven't read any of the trilogy, nor the Raw Shark novel, but I think that sort of experimentation, playing with narrative expectations, can be exciting (well, occasionally frustrating, but exciting too). I read Naked Lunch over the summer, and straight through; when I read on Wikipedia that Burroughs had meant for people to be able to start wherever they wanted and skip around as they chose (a sort of junkie Choose Your Own Adventure) I wondered if I'd read the book wrong, or if there was *any* right way to read it (this was Wikipedia that I was looking at, of course, and someone could have added that detail as a goof or an inaccuracy). There's a certain sense of playfulness in the descriptions of both Raw Shark and Only Revolutions, as if, while both works have their seriousness, they have an anarchic side too, something that deviates from the path. Something that makes the reader less passive than he would normally be.
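And since the cut-up method is basically an algorithm, here's a quick and dirty sketch of it in Python. This is my own toy version with invented source lines; it has nothing to do with how Burroughs and Brion Gysin actually went about it with scissors and paste.

import random

# Toy cut-up generator: slice some source lines into phrase-sized chunks,
# shuffle them, and paste the result back together. Source lines invented.
sources = [
    "the avatar is me but not me, a promise from the early internet",
    "an insurance salesman films every Augusta he can find on the map",
    "the database stores everything but still cannot tell a story",
]

def cut_up(texts, chunk_size=3, seed=None):
    rng = random.Random(seed)
    chunks = []
    for text in texts:
        words = text.split()
        chunks.extend(" ".join(words[i:i + chunk_size])
                      for i in range(0, len(words), chunk_size))
    rng.shuffle(chunks)
    return " / ".join(chunks)

print(cut_up(sources, seed=23))

Run it twice with different seeds and you get two different "texts" out of the same raw material, which is about as close as I'll ever get to the anarchic side of Naked Lunch.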

All that said, I might be hesitant to actually try and *read* either of the books mentioned. I remember loving the movie Trainspotting when I saw it (still the best depiction of Scottish heroin junkies I've ever seen, by the way), and I was excited when I found the novel that the film was based on at a local bookstore. I got it home, turned to the first page, and was gobsmacked by the heavy Scottish dialect of the first few pages. I literally got a headache (I'm not exaggerating for comic effect). I stuck with the book, however, because (thankfully) it was a multiple-narrator novel (really a collection of short stories that fit together, from differing points of view) so that the heavy Scottish dialect parts weren't the whole of the story. Several years later, I turned to the opening page of Finnegans Wake and decided after reading a couple of lines that James Joyce was batshit crazy, so I stopped.

I think, in the clash of narrative v. database, we'll see a happy (or unhappy) marriage of the two as time and technology progress (at least until Skynet wipes out humanity). As Hayles argues (and I agree with her), we are a narrative-based species, always searching for the story of how we came to be, or how we came to live in the places that we live, or why it is that we die, what happens when we die, etc. The ghost in the machine may be our need for a narrative, after all; databases store information, and they do a damn good job of it, but so far they can't tell a story. But narratives let us down too, in need of constant revision as more facts become known (just look at the narrative that the NFL was trying to sell us on the whole Ray Rice incident, before the second video came out, to cite a real-world example). You constantly hear "narrative" used with cynical connotations (such as "what is our narrative for the way events unfolded in Iraq"), but it's one of our defining characteristics. That being said, a database can provide information that wasn't known when we crafted our original narrative. It's a brave new world of narrative-database hybrids, as represented by the two works cited in Hayles. It may be a bit over my head, but I'm on board with at least trying.
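To convince myself I actually follow the distinction, here's a tiny sketch of my own (not anything from Hayles): the same made-up facts treated first as a database, which answers questions without caring about order, and then as a narrative, which insists on sequence and connective tissue.

# Toy contrast between database and narrative, using invented facts.
events = [
    {"year": 1999, "fact": "the building is demolished"},
    {"year": 1954, "fact": "the factory opens"},
    {"year": 1982, "fact": "the factory closes"},
]

def query(rows, predicate):
    """Database mode: return whatever matches; order is irrelevant."""
    return [row["fact"] for row in rows if predicate(row)]

def narrate(rows):
    """Narrative mode: impose chronology and connective tissue."""
    ordered = sorted(rows, key=lambda r: r["year"])
    return ", and then ".join(row["fact"] for row in ordered) + "."

print(query(events, lambda r: "factory" in r["fact"]))
print(narrate(events))

The hybrid works Hayles describes are obviously doing something far stranger than this, but the basic tension (retrieval versus storytelling) is the same.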

Tuesday, October 7, 2014

The Ecstasy of Influence

In Matthew Jockers's book Macroanalysis: Digital Methods & Literary History, we get an expansion on the premise of the previous book (Moretti's Graphs, Maps, Trees), specifically in the idea of "mapping out" or charting the ways in which books relate to one another (and differ) due to issues of gender, nationality, and chronology. I found this book frustratingly interesting, if that makes any sense. I am not mathematically inclined, so all the citations of various equations used to arrive at algorithms and what not may have gone over my head or led me to jump ahead to the next paragraph. But I liked the idea of trying to chart books more than just via close reading.

Now seems like a bad time to admit this, but I've never been that good of a close reader (or that close of a close reader, if you prefer). In my years at Tri-County Tech and further onwards, speed was the name of the game; I had to have sections 1-7 of The Odyssey or whatever read by Tuesday, and be ready to talk about them in some depth. Oftentimes, speed trumped paying attention, though as a lifelong (it seems like) English major I could bullshit with the best of them. Sometimes I got more from the class discussions than I would've gotten just reading the text alone, without the context of in-class discussion. So while I go to the Church of Close Reading, I often spend more time trying to get to an arbitrarily-chosen page number to round out my day of reading.

That being said, I can see why close reading is essential to literature studies; if you didn't pay attention to Moby-Dick, you might think it was a happy adventure tale (or if you watched the Demi Moore film of The Scarlet Letter, that Indians attacked the Puritan town, and Hester and Dimmesdale were able to flee to safety together). But as my Literary Theory class is good at pointing out, there are many ways to read a text (to even close-read a text); you could be all about the text minus any contextual grounding or even authorial biography (i.e., Formalism/Structuralism/Post-Structuralism) or you could be all about all that (Historicism). Word definitions can change over time (gay means something totally different to an older generation, no matter how many times my friends and I used to snicker about the former Myrtle Beach landmark "The Gay Dolphin"). Works can be neglected in their time, only becoming relevant when a new generation discovers them (think of all the cult films from the Seventies, box-office flops or cheap cash-in genre pieces that are now elevated to the ranks of Citizen Kane in some critics' eyes) or misinterpreted then and even now ("Civil Disobedience" is far more inflammatory when you see it through Thoreau's eyes than through those of Gandhi or MLK).

I found it interesting that Jockers tries to advocate for both close and "distant reading," as he points out that any truly scholarly attempt to document everything from any given time period (like English novels in the 19th century) is ultimately doomed because of the sheer bulk of materials at hand. Also, we can get a distorted view based on our limited resources or perceptions (the discussion of Irish-American literature was fascinating, because in all honesty I never thought that Irish-American literature existed in the West, not in the 19th century). I laughed a little at the word cloud for topics or phrases that came up in English novels from the 1800s (because "hounds and shooting sport" was the largest such phrase to crop up), but there are valuable insights to be gained from such "unconventional" searches. Any attempt at a close reading of primary sources from just about any era would be a daunting task and (as Jockers suggests) impossible to achieve during the course of a normal human lifespan.
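Just so I'm sure I grasp the basic move, here's what "distant reading" looks like to me at its crudest, as a little Python sketch. This is not Jockers's actual method (his involves real topic modeling over thousands of texts); it's just theme-word counting over an invented three-novel "corpus."

from collections import Counter
import re

# Toy distant reading: count theme words across a corpus without closely
# reading any single text. The corpus and the theme list are made up.
corpus = {
    "novel_a": "The hounds ran across the moor while the shooting party waited.",
    "novel_b": "She wrote a letter about the ball, the carriage, and the estate.",
    "novel_c": "Hounds and shooting and sport filled every letter he sent home.",
}

themes = {"hounds", "shooting", "sport", "letter", "ball", "estate", "carriage"}

counts = Counter()
for title, text in corpus.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in themes:
            counts[word] += 1

# Scaled up to a few thousand novels, counts like these are what end up
# feeding a topic word cloud along the lines of "hounds and shooting sport."
for word, n in counts.most_common():
    print(word, n)

No single novel gets read closely, but the aggregate still tells you something about what the whole shelf was preoccupied with.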

I'm drawn more to late-20th century literature as a reader, because on some level I've always wanted a class that could include Salinger and Pynchon as well as the more established leading lights of literature (and I did, as an undergrad; I had a World Lit class that included Murakami, and a 20th-century lit class that covered everything from Camus to Toni Morrison), so it's disheartening to realize that a lot of the tools used on older, public-domain literature might not be applicable to more recent work because of copyright restrictions. But given enough time, more and more work should be available for downloading and disassembling (perhaps more likely in the lifetime of any hypothetical children I have in the next twenty years or so, but still). For now, we have to make do with Jane Austen (whom I enjoy) and other dead authors who can't sue us for using their work in ways that they couldn't have imagined.

I worked at a public library for about a year (late 2009 to mid-2010), and this was at what seemed like the height of Twilight-mania, as you had the movies starting to appear (in all their moody, under-acted glory) and copycat books appearing by authors who may or may not have been "influenced" by Twilight (but who were certainly influenced by the dollar signs that appeared for any book that rode that wave). I'd constantly see patrons (mostly teenage girls) come in and request various variations on the supernatural themes of Twilight ("I'm in love with a boy who, at midnight, turns into a 1973 Ford Pinto" and so forth), and you literally couldn't tell them that these books were garbage (mostly because I didn't read them, and also you weren't supposed to make fun of patrons' reading habits. Tom Clancy still had a huge audience amongst middle-aged white guys who disliked then-brand-new President Obama). When Jockers started to chart some of the works that had those kinds of similarities over a number of years, it made me think of all the vampire-esque books we had at that point in time, the seemingly endless variations on the particular theme that cropped up almost overnight. Obviously, books take a long time to come together, but the sheer confluence of Twilight-Lite works made me wonder if this wasn't planned years in advance (granted, the Twilight books must have pre-dated the movies by a certain number of years, though I'm too lazy to look that up on Wikipedia). It's humbling to realize that "literature with a capital L" could often be as trendy as hashtags on Twitter about various celebrities or what not, but also humanizing. For every Stephenie Meyer or Bram Stoker, there are a million what-the-hell-is-their-name(s) out there jumping on the bandwagon. But bandwagons get awful crowded, and people have a tendency to fall off.

Monday, September 29, 2014

Graphs, Maps, and Trees (Oh My)

I liked this book, even if I'm not sure I understood it (a common occurrence when I encounter something weird in literature: sometimes I think I want to like something more because I don't understand it but want to). The idea of graphing out literary genres in terms of their rise and fall is appealing for the simple fact that it puts the lie to the notion that "we've always had [insert literary genre here] around." There's no doubt in my mind that, with certain literary genres, movements, and forms, history has a way of informing both their rise and fall. Formalism begat Structuralism, which begat Post-Structuralism (seems like you can create whole new schools of thought simply by adding "post-" to the front of existing realms), and so on. Literature doesn't happen in a vacuum, or at least I'd like to think not. Sometimes the world isn't ready for a particular genre, sometimes the genre isn't ready for the world.

The idea of mapping out literary locations, at least in terms of how they correspond to the text, was illustrated intriguingly enough with the "village novels" of the nineteenth century. We like to think of the past as somehow less complicated than our present, with self-sufficient villages dotting the landscape as safe havens from the problems of the outside world. But as the maps demonstrate, a lot of the outside world was encroaching on the villages at the very time that PBS would have you think they were still universes unto themselves (not Downton Abbey perhaps, but more like the Austen adaptations and other nineteenth-century-set projects, where the outside world only comes into play through letters or news from relatives about what a scoundrel this Napoleon Bonaparte fellow is being on the continent). Interconnectivity isn't an innovation of the twenty-first century; the world first began to shrink thanks to the telegraph, then the telephone, and other forms of communication brought into the same world that hosted Little Women or Moby-Dick.
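(And the mapping idea, sketched with the folium library: drop a pin for each place a village novel mentions. The place names and coordinates below are rough stand-ins I typed in by hand, not anything pulled from Moretti; a real project would geocode place names extracted from the text itself.)

```python
# Sketch of literary mapping: pins for places mentioned in a novel.
# Coordinates are hand-typed approximations, purely illustrative.
import folium

places = {
    "village somewhere in Surrey (approx.)": (51.23, -0.33),
    "London": (51.51, -0.13),
    "Bath": (51.38, -2.36),
}

m = folium.Map(location=[51.4, -0.8], zoom_start=8)
for name, (lat, lon) in places.items():
    folium.Marker([lat, lon], popup=name).add_to(m)

m.save("village_world.html")  # open this file in a browser to see the map
```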

I'm a sucker for graphs charting the rise and fall of things (I have never, ever liked math, but graphs give me something to envision to make sense of the numbers), and I keep going back to the graphs charting the rise of the modern novel through its various incarnations. It's surprising to me that the first wave of gothic fiction seemed to peter out around 1815, seeing as the novel I most associate with gothic fiction (Frankenstein) was published three years later. The tastes of the public have as much to do with the rise or fall of a genre as anything, it seems: artistic merits be damned, if the public ain't buying it, production gets cut back. I'm a bit of a free-market capitalist when it comes to culture, because as stupid as a lot of popular things seem now, I know from past experience (MC Hammer, Beverly Hills 90210, mullets) that eventually something will come along to replace them (that replacement might be equally stupid, but at least it's new and stupid, as opposed to old and stupid). Like I said, gothic fiction seemed to nosedive into irrelevance for most of the nineteenth century, but made a comeback via Dracula in the 1890s.

I'm guilty of thinking that Sherlock Holmes has been around for a long time (and technically he has), but I always find it surprising to see how recent his creation really was; in terms of overall history, the 1890s aren't that far away from our own time. The evolution of the clue (whether it was revealed in detective fiction or kept just off the page) was interesting to consider, as were the names of Conan Doyle's contemporaries whose inability to adapt as well as he did meant their literary doom. Genres live or die by the best work of the masters of the craft, and it's hard to think of a bigger name in the evolution of the detective genre than Arthur Conan Doyle (cameos in Shanghai Knights notwithstanding). We always tend, in looking back on an author's career, to think that he or she was always that person, always that creative force who fashioned fantasies out of pure air, with no one able to compete. It helps to know that Conan Doyle was, at one point, simply one of many authors applying himself to a genre that might or might not have led anywhere if he'd not worked hard at it.

There are some works that need maps (Lord of the Rings, treasure-hunting or travelogue works) and there are some that don't. Graphs and trees are fantastic and weird ways to bring literature to life. I think I understood this book, but I'm not sure.

Monday, September 22, 2014

Hacking the Academy

This week's reading was like a mini-"Debates in the Digital Humanities," only better organized (I think we gave last week's reading hell for seeming like an afterthought after the first two readings we did in the Debates book). There were a lot of interesting things to dissect, which I'm sure we'll get to in class, but I want to address two things.

First, not to bore you with another "when I was a freelance writer" story, but when I was starting out in the freelance writing world, the target was always more print-based than digital publishing (or, as we called it back then, "getting on a website"). I was still of the mindset that real writing was done for magazines that put out editions in print (it took me forever to adjust to the idea of a magazine being a solely online venture, but I caught on to that notion quicker than did the editors of print media, I will say). I can't tell you how many times I sent something to the New Yorker in the vague hope that it would somehow leap off the (electronic) page and make me famous (because I honestly thought that getting published in The New Yorker was how you got famous).

In Hacking the Academy, this ties into the old-school, peer-reviewed route to publication versus the new-fangled open-source online publishing that a lot of scholars in DH advocate. In practice, then, I've been open-sourcing my work for years: like I said last week, the bulk of my published "work" is online, and much of it lost to the cosmos of websites crashing (I liked the article that cautioned scholars against assuming that, just because they publish something online, it's preserved forever; "forever" in online terms could be as fleeting as that very same week, if someone on the website side of things fucks up, blows all their money in Vegas, or accepts a friend request from a spambot intent on infecting the world with ads for boner pills). I think it's telling that some scholars think in these terms, for while books may rot, be burned, get water-logged, or be otherwise compromised, we like to think of books as a permanent way of preserving knowledge. In this sense, the notion among DHers that online means always online (that what you publish online can be kept safe from the changing whims of departmental heads, because it's online) is rooted more in the past than the present: we're projecting the myth of knowledge preservation in book form onto the online variation, and it doesn't hold water. Does this immediately invalidate research published online, as somehow not "worthy" of being kept in books (on their face more physically preservable than websites)? That is the assumption DHers are working against, and it comes from the side of the academy that doesn't understand (or want to understand) the possibilities of digital preservation.
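(The practical upshot, at least for someone like me who has already lost work to dead websites: don't trust "online means preserved," keep your own dated copies. A minimal Python sketch, with placeholder URLs standing in for wherever my old pieces actually lived.)

```python
# Minimal sketch of not trusting "online means preserved": fetch your own
# published pages and keep dated local copies. The URLs are placeholders.
import pathlib
from datetime import date

import requests

MY_ARTICLES = [
    "https://example.com/my-old-humor-piece",    # placeholder URL
    "https://example.com/another-lost-article",  # placeholder URL
]

backup_dir = pathlib.Path("article_backups") / str(date.today())
backup_dir.mkdir(parents=True, exist_ok=True)

for url in MY_ARTICLES:
    name = url.rstrip("/").split("/")[-1] + ".html"
    resp = requests.get(url, timeout=30)
    (backup_dir / name).write_text(resp.text, encoding="utf-8")
    print(f"saved {url} -> {backup_dir / name}")
```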

Secondly, one of the sources of heated debate amongst DHers and more traditional academics is the notion of "open-source" online publishing, that is, posting work online without the hurdles of peer review that can take months or even years and eventually result in a book that, for all its merits, would likely have a small audience even within its stated target audience of academics devoted to the subject at hand. Blogs are cited as a way to get the word out, and even though I dispute the notion of Twitter as being anything other than something celebrities get on to annoy the rest of us, I agree that in the right hands it could be a tool for promoting more helpful things than just another picture of Kim Kardashian looking constipated in yet another selfie. I would caution against citing Wikipedia as an ideal form of this open-source model, however; even with all the controls in place, there are still the obvious mistakes or outright manipulations that give Wikipedia a bad name in academic circles (I cite the example I ran across when, looking up SS chief and architect of the Holocaust Heinrich Himmler, I read that, according to whoever wrote up the bio, Himmler was "a pretty cool dude." I hope the editors of the site caught that in time to correct it). Wikipedia is a great resource, of course: by its very nature it highlights both the pros and cons of open-source, user-controlled data outlets.
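(One thing worth remembering about Wikipedia, though: every edit is logged, which is how "a pretty cool dude" eventually gets caught and reverted. A rough sketch using the public MediaWiki API; I'm assuming the response comes back in the usual pages/revisions shape, so treat the exact field names as an assumption.)

```python
# Sketch of the open-source safety valve: pull the recent revision history
# of a Wikipedia article via the MediaWiki API, so vandalism leaves a trail.
import requests

params = {
    "action": "query",
    "prop": "revisions",
    "titles": "Heinrich Himmler",
    "rvlimit": 5,
    "rvprop": "timestamp|user|comment",
    "format": "json",
}

data = requests.get("https://en.wikipedia.org/w/api.php",
                    params=params, timeout=30).json()

# The API keys pages by page ID; grab the one page we asked for.
page = next(iter(data["query"]["pages"].values()))
for rev in page.get("revisions", []):
    print(rev["timestamp"], rev["user"], "-", rev.get("comment", ""))
```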

I took away from the book a sense that traditional scholarship needs to change, to keep up with the demands of an increasingly digital world. It also needs to be vigilant not to lose the aspects that make it valuable (some sort of peer review is available online, even if it happens in real time rather than being kept from the public eye like traditional peer review). I believe that education shouldn't be elitist in how it's applied. I've known people whose lack of formal education doesn't reflect their natural desire to learn and grow, people who possess more of that desire than some for whom educational outlets are limitless. I think DH could be a conduit for a more democratic distribution of education, and some sort of open forum where DHers can debate each other's work, not hidden from the public but openly engaging it, is called for. The academy could do with some hacking.

Monday, September 15, 2014

Why DH Matters (Unless It Doesn't)

In reading the last two sections of Debates in the Digital Humanities, I've had a hard time coming up with possible post topics for the Facebook page or indeed for this blog entry. There's a real sense of "a discipline in search of itself" that runs through the entire book (hence the debates), but the last two sections come across as even more unsettled. We see, in the section on teaching DH, a real awareness that the lab work DH engages in to some extent takes away from the pedagogy of it. You can show better than you can teach, seems to be the message of some essays, and with the fluid, amorphous definition of DH itself, a lot can be done by those outside DH (say, in college administration) to undercut the idea of DH being taught on campus. In that sense, it's just as much a part of the humanities as the English department and other "traditional" disciplines.

It all seems to go back to justification, not self-justification so much as justification to the outside world, and in that I can relate. As someone who's pursued degrees in English for well over a decade (my undergrad career, first at South Carolina in 1997-1998, then Tri-County Tech in 2001-2004, and finally at Clemson in 2006-2008), my choices have left a lot of people in my family scratching their heads. I'm not sold on the idea of being a teacher, per se; I'm closer to acknowledging that it's probably where I'm headed more today than ever before, but at heart I'm a writer, or a wannabe writer. Problem is, my fiction is lacking (to me, anyway), and while I could tell you what makes a book good or bad, I'm not sure I have the ability to render a book myself (which bodes well for the "publish or perish" mentality of faculty, I'm sure). I've had a hard time justifying to others and to myself just why I am pursuing such avenues of education, if I can't "do" anything with it, at least not in the eyes of a lot of people.

A humanities degree in some circles seems like a waste; compared to pre-med or pre-law, it's easy to see why some might think that it's a waste of time. A digital humanities degree, or a class in DH, might be even harder to justify to someone unfamiliar with the concept. It all ties back to the idea of functionality in education: pursue a degree in something that gives back to the common good, the idea seems to be.

But art is important, if you'll forgive a lifelong English major saying it. Art informs life, makes it worth living, makes it bearable when science and other more rational, more fact-based ideas fail. I've gotten through more tough times with the help of good books (and even some bad ones) than with any other coping mechanism save humor (which is also an art). The divide in DH seems to be between emphasizing the cold mechanics of making a text available online and making the text come alive through interpretations, or doing something totally unthought of before. It reminds me of the Robin Williams speech in Dead Poets Society: all the sciences are noble pursuits, but poetry gives meaning to life. I believe that, anyway.

Perhaps in the texts to come, we can get away from debates about the merits of DH, because in a sense it doesn't matter. DH has the ability to make art, and to make it accessible. That should be enough, I think, to justify any amount of expenditure from the higher-ups. Yes, the law schools and science labs might be flashier, their output more easily identified. But art has the ability to burrow into your soul and change your outlook on life. It's not easily quantifiable, and it shouldn't be. Perhaps if DH embraced the less results-oriented approach of its sister humanities, we wouldn't have to have debates about its merits.

Monday, September 8, 2014

Don DeLillo and Covert Racism

The readings for this week covered a lot of ground, but the essays that stuck with me were about Don DeLillo and racism, respectively. I can relate to the idea of an author's work either not being digitized (and thus lost to the ages) or being published on websites that no longer operate (and thus lost to the ages). Because it's happened to me.


Back in 2003, while attending Tri-County Tech, I started sending off short "witty" humor pieces (at least I thought they were witty at the time, but I was twenty-four and probably thought everything I did was "witty") to smaller humor websites that thrived on reader submissions. Your National Lampoons or Cracked.coms of the world were too good for my sophisticated humor (again, I thought it was sophisticated, odds are it wasn't), so I sent off pieces dashed off on my mom's computer or at the computer lab on campus to various websites until I hit paydirt. The first site to publish my work was the Neurotic Eclectic (I remember the name, even if no one else does). It was a website run by a guy living in Arizona (for some strange reason, most of the early success I had with freelance humor writing came via websites based in Arizona), and he accepted one of my pieces based on the premise of has-been celebrities doing "books on tape" of Cliff's Notes-versions of American classics. Like I said, I thought I was witty at twenty-four.


At any rate, I had a good run with that website, and with other sites that accepted my work, and over the years you could say I built up a pretty good archive of original material. You'd be wrong, however; within a year of publishing me, a lot of the websites that welcomed my work went kaput. It was and remains a fact that websites live or die by traffic, and sometimes they get infected by viruses sent by people who have nothing better to do. A lot of the sites where I was most prolific seemed to suffer this fate: there's a chance that archives of my work, alongside that of others who wrote for those sites back then, exist somewhere, but I wouldn't bet on them ever seeing the light of day. Like I said, most of my writing at this time was done on my mom's PC (which bit the dust sometime around 2006 or so) or at the computer labs, and I didn't think to save anything from either of those. I got my own PC in 2004, with an internet connection (via old-school dial-up), but once my Norton Anti-Virus ran out (it had been free for a year, but I was too cheap to pay for it after that), that computer became a victim of random viruses, often spread via the very same websites I was submitting to.


I learned an important lesson then: don't submit material that you don't already have saved somewhere else, multiple times over if you can (not just to your PC or laptop, but also on discs or flash drives or Nanos or what have you). I also learned that internet commenters can be assholes: one site that published me fairly regularly had comment sections for its articles, and I got reamed cyber-wise by people whom at first I took seriously and later realized were just jealous that their work wasn't considered good enough for the website (we writers got a regular log-in ID and password, instead of having to send everything via the "we might take a look at this" email address for freelance submitters, many of whom, I gathered, were commenters and frustrated humorists themselves). At least that's what I tell myself.
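(If I were giving my twenty-four-year-old self a script instead of a lecture, it would look something like this: copy everything to more than one place and verify the copies. The folder names are placeholders for wherever your actual drives live.)

```python
# Sketch of the lesson learned: back drafts up to more than one place before
# you submit anything, and verify each copy. Paths are placeholders.
import hashlib
import shutil
from pathlib import Path

SOURCE = Path("humor_pieces")  # folder holding your drafts
DESTINATIONS = [Path("backup_flash_drive"), Path("backup_external_disk")]

def checksum(path: Path) -> str:
    """SHA-256 of a file's bytes, used to confirm a copy matches its source."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

for dest in DESTINATIONS:
    dest.mkdir(parents=True, exist_ok=True)
    for f in SOURCE.glob("*.txt"):
        copy = dest / f.name
        shutil.copy2(f, copy)
        # Verify the copy actually matches the original before trusting it.
        assert checksum(f) == checksum(copy), f"copy of {f.name} is corrupted"
        print(f"backed up {f.name} -> {copy}")
```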


At any rate, all the discussion of DeLillo's lost work (stories that vanished in the pre-internet ether of magazine submission bins or were published, but in magazines now lost to time) hit a nerve. I would never presume to say that the work I did during those years when I self-identified as a freelance humor writer would be up to the level of DeLillo (I've read White Noise and Great Jones Street, and I once started Underworld without getting too far into it, so I'd probably substitute some other writer whose work I'm more familiar with, like Pynchon or Lethem), but I do mourn the fact that, supposing I ever did become a famous author and someone wanted to do research on my early work, they'd be hard-pressed to find it (then again, that might be a good thing: "juvenile" wouldn't begin to cover the tone of much of that early internet work). Archiving an author's work (not just his published books or stories, but also essays, correspondence, and so on) is likely to be harder in this digital age, where the idea is "nothing is lost" but the reality is "websites crash, magazines lose their online presence, and shit just happens." One of my favorite books was the first volume of Hunter S. Thompson's collected letters, which got me into the Great Gonzo Writer; I wonder if a collected volume of "the emails of Trevor Seigler" would have the same resonance.


The discussion of the apparent "whiteness" of the digital humanities, at least as it relates to the civil-rights era, was interesting too, and I hope I remember to bring up in class the idea that digital technology sought to streamline itself so as to avoid much of the upset and tumult of the Sixties. Also interesting was the discussion of the move from overt racism (whites-only signs) to covert racism over the years, as a response to the work of MLK and other civil-rights leaders. This past NBA season, we had an example of covert racism being made public (Donald Sterling's comments to his girlfriend), resulting in the public shaming of a horrible person and the forced sale of his team (the Clippers). Such instances suggest that, contrary to the idea of covert racism being codified by digital design, the online world can expose such thoughts as much as hide them. Again, I hope we bring this up in class, because I think it merits some discussion of how the internet can be both a conduit for covert racism and an exposing agent for thoughts that we (and we are all prejudiced in some way or another) might think of as "private conversation." It also gets at the debate about a possible "chilling effect" on free speech versus the idea of accountability for the things you say.

Thursday, September 4, 2014

Is DH the Hip-Hop of the Humanities?

A random thought popped into my head after class yesterday, and I'm posting it here because I wonder if it can come across effectively on the class's Facebook page (and also I'm a bit afraid that it may be ridiculous once I type it up and look at it, so I figured it was safe here): what if digital humanities is like hip-hop/rap once was, at least in the early days of the genre?

Consider: when rap came into being in the late Seventies, it was a process of taking established tools for listening to music (turntables, records, etc.) and turning them into music-making devices. A lot of this was due to chronic neglect of the inner city, where buying instruments traditionally associated with music-making might be out of the question, and it was also a reaction against the fact that rock and roll (once a melding of black blues and white country) was by then pretty much a whites-only genre, given the prevalence of white rock stars (Hendrix was more the exception to the rule in 1967, after Chuck Berry and other black artists associated with rock either fell on hard times, died, or were simply neglected by the record-buying public). Similarly, it could be said that the early practitioners of DH were using computers not for the total immersion in "cyberspace" the public might have imagined, but to connect more to the world rather than escape it, subverting the expectation that "the internet is apart from the world" in much the same way that hip-hop took the turntable and subverted expectations by scratching records, creating new beats for rappers to then overlay with their own work.

Bit of a tortured analogy, I know, but there was a mention of rap in class last night, and it set me off on this path of intellectual wondering about whether, by virtue of its youth and relative newness on the scene, DH might be comparable in some ways to hip-hop pre-"Walk This Way" (the Aerosmith/Run DMC collaboration that is cited as the crossover moment for rap and hip-hop into the mainstream). The debates about what DH is and isn't, and how inclusive or exclusive it should be, are in some ways reflected in the fact that rap has become the dominant medium in music, yet there are many who borrow from rap without really being of it (I doubt Katy Perry has tons of street cred).

Rap has benefited immensely from advances in technology, sampling and digital recording being but two of the more obvious examples. Humanities purists can sound like the same people who dismissed rap as "crap" because they either didn't understand it or feared its impact. A progressive force meets a conservative object, and chaos tends to ensue. In much the same way that technology is chided by some as leading to a "coarsening" of our culture, rap was from its inception attacked for various reasons, but mostly because it upset the assumption that musicians had to play instruments (but turntables *were* instruments).

Like I said, this might be total crap on my part so I figured it was safe here, for Dr. Morey's eyes only.

Tuesday, September 2, 2014

Why We Fight: Debates In Digital Humanities, Parts 1 & 2

One of the essays in the first section was titled "Why We Fight," which led to the Decemberists song of the same name being stuck in my head for the rest of the day I read it. But I think it's an apt description of the "debate" going on in digital humanities about just what DH is.


All through the first section, we get an overview of the evolution of digital humanities, from its emergence as a topic for consideration in the Nineties (with the rise of the internet and more access to information than ever before) all through the various labor pains of its gestation and eventual naming. DH is a young 'un in the world of scholarly disciplines, coming of age just as I was, to an extent. It would be fair to say that I'd never even heard of the concept until I signed up for this class.


The definition of DH is fluid right now, because in a lot of ways we're still trying to figure out just what the "digital humanities" are and what they should (or shouldn't) include. It's easy (if a little arrogant) to assume that History is "the story of the past and how it relates to us today," if you will, but History covers quite a bit of territory and can be divided into various strains (European history alone would require quite a bit of leeway in terms of what can be covered within the parameters of a lecture course and what has to be left by the wayside for lack of time or relevance to the overall idea of the course). English is similarly both easy to define and slippery: we can all agree it's the study of writing and how writers write, but what can you cover (and what can't you cover in the time allotted)? It can encompass the study of literature, and that in itself is another huge chunk of territory, because every country and even every region of a country has its own brand; you could take enough courses on "Southern literature of the United States" to cover any requirement for a major, though you run the risk of Faulkner overload. DH is facing many of the same concerns that disciplines far, far older than it have faced and (arguably) overcome, or shrunk in the face of.


The idea of the digital humanities eventually "becoming" the humanities (as one of the respondents to a questionnaire about DH predicted) is interesting, because the humanities could be said to cover not just the obvious (literature, grammar, etc.) but also the defining characteristics of shared culture. I thought it was interesting how one article cited the seemingly random inclusion of film and music as falling under the future banner of DH at some point, though such citations weren't as well defined as the ones listed under perhaps more "typical" humanities concerns. Film, like DH, is relatively new to the scene compared to other artistic mediums; literature's been around at least as long as Chaucer, if not far earlier, and music (in the sense of being performed, not recorded) is probably one of mankind's oldest means of expression. Film came along through photography and the desire to tell stories through images (film, as magnetic as it is, is basically still photographs taken one after another and run through a projector to create the illusion of movement, or at least it is when it's literally "on film"; digital filmmaking doesn't use film, of course). I know when I originally signed up for a film class during my undergrad term at Clemson, I didn't consider it an "art," if only because I'd taken it for granted that movies were just there. But an intro to film class caused me to reconsider, to see the inherent beauty and flaws of the medium, and to embrace the notion that, while not all films are art, there is art in film.


Like DH, film studies can encompass a wide swath: for example, I took an entire course devoted to Jean-Luc Godard, with pit stops into the territory of Francois Truffaut, Agnes Varda, and other French filmmakers. Like DH, it's a newer discipline, but one that has long since established its criteria (it's hard for me to imagine anyone taking a course in "the films of Michael Bay" except as a joke, or a tongue-in-cheek "celebration" of camp cinema, but it could easily happen in time). Film would be an ideal medium through which to examine the digital humanities, because it leads inevitably to the plethora of images we can see on the internet, which is where the concept of DH laid its roots.


I think that the debate over just what is or isn't DH is healthy and necessary, a discipline in search of itself and experiencing growing pains but finding the strength to assert itself. I have no doubt that one day the digital humanities will be defined conclusively. It may even encompass the whole of humanities, like the Blob eating up entire towns. But right now it's trying to find itself, and there may be some stumbles along the way or missteps. DH is at an interesting stage of its development, and I wonder how it will proceed from here.

Monday, August 25, 2014

Eversion, "The Emergence of Digital Humanities," and the Oregon Trail

I have a love/hate relationship with computers and the internet, it would be fair to say. When I was in elementary school back in the Eighties (leg warmers and Mr. T's gold chains, oh my!), we got our first computer lab, which, of course, was supposed to be an educational "tool." It helped that there was no such thing as the "Internet" or "world wide web," or if there was, it wasn't connected to our bulky, size-of-a-small-country PCs. My most distinct memory of the computer lab is the time spent playing educational games, like the Oregon Trail. My lack of success at this pioneers-against-the-elements game (my family was usually racked by measles and Indian attacks before we got too far into the wilds of Nebraska) might have colored my view of the brave new world promoted by cyber-technology in the ensuing decades.

The Emergence of Digital Humanities suggests that I was right to be skeptical of any such claims made by early proponents of the Internet, because the "other world" of cyberspace is, as it turns out, all around us now, and the greatest science fiction writers couldn't envision that the "virtual reality" we were sold as "the next big thing" would suffer the same fate as Pokemon cards, Cosby sweaters, and the stand-up career of Andrew Dice Clay. Virtual reality, while fuel for some of the more paranoid fantasies of sci-fi writers since the coining of the term "cyberspace" in the Eighties, has been the proverbial gold-town-gone-bust, but it hasn't all been for naught. The world we have today, of smartphones and cell phone reception in even the remotest corners of the earth, means that the computer, into which we were once supposed to be relegated, is now laid open for us to dip into at will (or, in some cases, all the time).

Eversion is the term for this phenomenon, and it is the main thrust of Professor Jones' book that everything around us is now fodder for the digital humanities. This can be as simple as the digitization of printed books for the internet, or as involved as action figures that, through a code, can be dropped into a virtual world (in this case, a video game) and "played" with onscreen.

I have to wonder if I'm the ideal student for the digital humanities at this juncture: while I am no Luddite, I have reservations about the connected world we live in and the lengths to which some go to maintain a profile online (I'm thinking of all those celebrities who get reality shows out of a sex tape, not naming any names). The implication seems to be that if you're not online, you might as well be a non-entity. It's a notion I have trouble with. I have a Facebook account but not a Twitter or Instagram account (or whatever the hip, young "with it" site is this week). I have a flip phone, and while I concede that at some point I will have to get a smartphone, it won't be because I finally folded so much as because the store I go to will eventually stop stocking flip phones. I feel a bit like the grumpy old man that Jones seems to suggest Roland Barthes would be if he were still around, grumbling about "these damn kids on their cellphones with their Tweets and such." It's obviously not the easiest position to take in a class about using tools like the Internet to make an object of study more accessible.

To get back to the reading, I thought it interesting that Jones brought up the anti-digital or "back to analog" movement, which seems focused on reclaiming for the non-digital objects of the past the relevance they had once upon a time (say, in the Eighties, while I was getting slaughtered along the Oregon Trail). I haven't bought into the notion that "vinyl is better" (a hipster credo that has gained traction, ironically enough, on Facebook and other social media sites that wouldn't exist without the computer). I remember how much "fun" it was to place a record on a record player and try to cue up the needle just right (and the inevitable scratching one encountered no matter how carefully one placed that needle). There might be something "pure" in other people's minds (or in their ears) about the hiss of a vinyl record, but I'm not buying it. CDs, and later iTunes and other music services, have been part of my existence since I bought my first CD (R.E.M.'s Out of Time) and had to borrow a relative's clunky portable CD player in order to listen to it. Vinyl is just a hassle, and it seems of a kind with the warped futurism of the movie Brazil (set in a dystopian future where people have all kinds of wiring obviously showing in their homes, computers are removed from their shells so that the contents and wiring are exposed, and people literally drown in paperwork that a fully automated society wouldn't even think of).

Every time I walk down the street, I see people with their phones out, checking their status or the statuses of their friends, or playing games (there's a recent Esurance commercial with a grandmotherly woman telling her young grandson how much she enjoys "Candy Crush" while she literally crushes candies with a hammer at the kitchen table). It's not going away anytime soon, and progress can be a good thing, for sure. I know for certain that I've benefited from the more social nature of the post-Nineties internet, the availability of like-minded people online to talk to about various things or issues (or just to talk about nothing at all, just so I feel the connection), and I will not be taking myself "off the grid" anytime soon. But I wonder if all this connectivity, all this liking of one another's statuses, all this illusory sense of community online that Jones argues is translatable to the real world, is really as good as it has been touted to be. Every time Apple comes out with a new iPhone, my inner cynic reacts with the notion that the ghost of Steve Jobs won't rest until I buy into the hype and jump on the iPhone bandwagon. The study of digital humanities, and the practice itself, should raise questions not just about its good side but also its potential bad side. Because otherwise, we're all just on the Oregon Trail, a few kilometers from either surviving the combination smallpox-outbreak-and-Comanche-attack or not.

Thursday, August 21, 2014

This is not my first post

This is a test to make sure this is up and running (setting up a blog for my Digital Humanities class). So...yay!