First off, a warning: I gave blood yesterday (Monday) and, while I'm pretty sure I'm almost full-on back to my normal self, I could conceivably go on many a tangent in relation to this entire endeavor, this last blog entry of the class. Also, to come full circle, I had to order the first book for this class online, and so it was with this one. Circle of life, I guess.
Anyway, something that stood out to me in considering this blog entry with regard to Ulmer's Avatar Emergency was that he pointed out how "brand," in the sense that it's used in social media and indeed in everyday conversation these days, is different from "avatar." I'm not sure that I believe that, or that the distinction can be so easily made. For one thing, when I first began surfing the 'Net, going to message boards about favorite obscure bands from the Eighties (my particular drug of choice in this was Joy Division, which after Ian Curtis' suicide became New Order, the group responsible for "Blue Monday" and most of the techno music of the Eighties), I had to create an "avatar," which I understood as being "me but not me." That is, it was something promised in the early days of the internet: to "lose yourself" in the creation of an online identity that shared some tertiary traits with you but which could be enhanced, downgraded (in case you didn't like yourself but the world saw you as confident or something), or just tolerated by others. Said toleration could be exhausted if you were an asshole online (as, regrettably, I sometimes...okay, often was). We still see it today, with Internet "trolls" who, far from ever offering anything constructive, simply feast upon the insecurities of whomever they're pursuing, posting comments and other such ephemera to take down their intended target. I think that's why I posted the link to the Gamergate story from Deadspin (and also, to help increase my grade in class via Facebook group postings), a way of making amends for my own troll-like behavior in the past by pointing out more recent instances of it by people (usually men) who really should know better at this juncture. The internet is about twenty to twenty-five years old at this point. It's time it started acting like an adult.
Another thing: the terms "brand" and "narrative" have been in use a lot lately (I watch ESPN a lot, and both come up whether discussing a particular player's image or how a story plays out), and what I think merits discussion is whether this is a good thing or not. Branding yourself (figuratively, at least) as a concept has been around for decades, but it's only recently, it seems, that we're more open about discussing it. Branding is a term from marketing; I once had a job interview with a marketing firm in Greenville, and when the question of "what is marketing" came up on the application I should've just written "branding" (instead I wrote something nonsensical, in retrospect. No wonder I never heard back from them). "Narrative" is borrowed, to my mind anyway, from literary studies, that is, the basis of fiction as an art. If the narrative doesn't work, the fiction falls apart. This is a lesson many a screenwriter of various late-night Cinemax movies never learned (not that narrative was ever the motivating factor behind such films, or indeed action films, which leads me to think that the beats both porn and action strive to hit are so similar as to merit a discussion of what each owes the other and what each gets from the other. Told ya I'd ramble).
I think we hear a lot about brand and narrative today because, in a sense, these are analog responses to a digital future. These are concepts that have been around forever (or at least seem to) and the internet, while more than old enough to drink, vote, and die in a war, is still really young compared to more ancient and established media. In terms of both terms, allow me to mention two people who, really, should never be mentioned in any academic setting: Kim Kardashian and Jameis Winston. God help me, I know, but hear me out.
Kim Kardashian is branded as a sexy woman whose main attribute, whose sole contribution to Western society (apart from her sex tape or her reality show), is her buttocks. They are ample, indeed, and she has literally made a career of showing them off (most recently for a magazine desperate for the publicity). We know (or "know") that there's nothing of substance going on behind that face of hers, that her brain is simply a repository of Kanye West lyrics and "how can I show off my ass this week" queries. That is her brand, and we (even those of us who are sick of her) buy into it. But the narrative, such as it is, is flawed: clearly this woman has the intelligence and self-awareness to know that what sells (this image of her as a sexualized woman not afraid to flaunt it in various states of undress) is what keeps her business (the business of being Kim Kardashian) going. She is much, much smarter than we publicly give her credit for. Her skills of manipulation when it comes to the media (even when they're talking about how sick they are of her, and what kind of message she's sending to the kids who idolize her) are worthy of any discussion of media strategies. In ten years, her celebrity will most likely fade (then again, people said that back when she got famous, and she's still in the news). At any rate, someday she'll be sitting on a huge pile of cash because we all bought into the narrative that her presence merits discussion.
Jameis Winston comes to mind because, well, the book we're studying is written by a professor who teaches at the University of Florida, but the color scheme on the book cover (orange and green) suggests associations with the University of Miami, so the quarterback at Florida State seems a fair topic. In terms of brand, it's this: he's never lost a game as a starter (and the way this season is going, he never will), he's got talent on loan from God (to borrow a phrase the ever-so-humble Rush Limbaugh applies to himself), and he's got some off-the-field issues. The narrative, however, is this: He's a villain because of his off-the-field issues, and every win his team experiences is a slap in the face of decent society, and oh boy what about his arm (but too bad it's connected to such a "thug"). Now, taking into account the fact that the media (sports media in particular) needs someone like Winston to galvanize discussion (in case the games themselves don't live up to the hype that ESPN and other networks invest in such displays of brutality), is any of this fair to Winston as a human being? It may or may not be, depending on the truth surrounding the allegations against him when it comes to women (recently, beloved entertainer Bill Cosby has himself been the target of rape allegations; whether these allegations are true or not will affect the narrative and brand of Cosby as "America's family man," and as a child of the Eighties, I hope it's not true, but as a child of post-OJ celebrity exposés, I fully expect there to be some truth to the allegations). Each competing narrative, each competing brand, separates us further from the truth of the individual, the truth of the actual person. Online bullying exists because some people can't separate themselves from their online selves, can't pass it off as just people picking on their avatars, brands, narratives, and not themselves. Maybe in time, such building up of narratives and brands can extinguish the flame of online bullies and trolls, help mediate between those being bullied and those doing the bullying.
Roger Goodell could certainly do with some re-branding or fashioning of a new narrative, because he's damned if he does (suspends Adrian Peterson for the year) and damned if he doesn't (suspends Ray Rice for two games, initially). All of this affects the real Roger Goodell (or, if you believe South Park, the Goodell-bot), but what we see is the brand of "protector of the shield" in awkward, embarrassing press conferences. We live in an age of "no publicity is bad publicity," but I wonder if that's such a good thing. I feel outrage at Winston when I see him on TV, decrying what he's alleged to have done...all while being a fan of Woody Allen (alleged child molester), John Lennon (beat his first wife, neglected his first son, may have been under Yoko Ono's control), Hunter S. Thompson (decades of drug abuse), and so on. Morality might not have a place in the brands or narratives we construct today.
Also, narratives depend on the dominant group in charge, so maybe it's a good thing to have multiple narratives of events. I was joking around on a friend's Facebook page about how, in a certain light, Luke Skywalker is a mass murderer when he blows up the Death Star. It sounds ludicrous, of course, but that's just because we've bought into the narrative that Lucasfilm promoted. Always interesting to think about it in different terms, I think.
At any rate, I have enjoyed this class (and even come away understanding more about the subject than I initially thought I would), so if this is the end, it has been a trip.
Tuesday, November 11, 2014
Finding Augusta...Oh, There It Is
I want to start this post with a story from my undergrad days: I was a Film Studies minor, and as such I was encouraged to attend a series of screenings by documentarians as a way to both broaden my mind and get some extra credit by writing up said screenings for one of my film classes. One such screening was a film called "All Rendered Truth," which is what one of the subjects of the film (the subjects being a bunch of people who made art out of everyday and neglected objects; I guess they fall under the rubric "folk art," though I'm not sure if that term was in popular usage back then) said that "art" stood for, and I think it's a fantastic definition that holds up. Anyway, the two documentarians were there and took questions after the screening. One guy, notebook in hand (I'm guessing he wasn't there out of interest in the subject, any more than I might have been), asked "how did you find these artists," and the two guys described how they usually drove around, fielding phone calls or inquiring at local spots about artists in the area, and so on. Then the same guy raised his hand and repeated his question. A buddy of mine whispered under his breath, "they already answered your question, dumbass."
I bring this up not just because it's a funny story, but because I think it has a relation to the book Finding Augusta, in that the person at the center of the tale (Scott Nixon) is an enigma wrapped inside a riddle wrapped up inside another enigma (or something like that). We get some basic biographical detail (he was an insurance salesman who lived in Augusta, Georgia, and he liked to make films about the various Augustas or variations on "Augusta" that he encountered on his travels; he assembled that footage into a brief film called The Augustas but never really documented why or to what ends he did so). I keep wanting to ask not just "how did he find these Augustas," because clearly he looked at maps for the most obvious ones (like Augusta, Maine, for instance, or other Augustas throughout the country), but also "why." It's an answer that's elusive, for sure (no sled emblazoned with the word "Augusta" to be found here, the way Charles Foster Kane's was heaped upon the funeral pyre of his legacy). It's frustrating.
The book itself deals with issues both related to that frustration and separate from it: Cooley spends some time talking about the construction of cellphones as something that "fits easily" in your hand, thus never apart from you. As anyone who's seen people bump into a street sign while checking their Twitter feed can attest (and I did see that sometimes, when I was working downtown and we'd get a break around ten in the morning and at three in the afternoon), the concept of a phone as being *not* an extension of yourself is becoming more alien now. I don't own a smartphone myself, so the QR codes for the Augusta App were useless to me (and I thought it interesting that, in a book dealing with the idea of surveillance as a tool of governance, we were being asked to add something to our phones that facilitated surveillance by the author of this book. I'm not sure if that was the case; I defer to anyone who did add the Augusta App to their phone). But I think it's interesting how Steve Jobs (whom I've beaten up repeatedly in class and on this blog, but only because deep down I respect the fact that he changed our lives with his products, however ambiguous my feelings about those products may be) wanted the iPhone and other Apple products to avoid the "sleek, cold" designs of his competitors, when I think the case can be made that Apple products are by definition sleek and cold now. You can personalize the iPhone with a colorful case, but the basic design is sleek and white (or silver) and futuristic in the way old-timey sci-fi movies imagined the future to be (aerodynamic spaceships with no "flaws" evident). I could be in the minority here, but I think Apple has gotten away from being "warm and fuzzy" in their designs. We have met the enemy, Apple might say, and they are us.
Also, the concept of the phone fitting the ideal hand, without taking into account variations or mutations (or the simple fact that not everyone's hands are the same size), is interesting to me. Does an iPhone really fit your hand well, or does it feel too small or too big? I think of the Seinfeld episode where Jerry's girlfriend has man-hands; she looks perfectly normal except for the fact that, when she tries to feed him at the restaurant or stroke his face, we cut to Bigfoot-sized paws imposing their will on his face. Also, and this may just be me, but the concept of "fitting in your hand" brought to mind the M&M's slogan (hey, last week we talked about Eminem, so it's only fitting) "melts in your mouth, not in your hand." Perhaps in some way that only demented grad students might consider (raises hand, calls attention to self), the phone-in-hand concept is the inverse of that: it melts to your hand, becomes a part of you, so much so that you can't imagine living without it. Sounds bizarro, I know, but as I'm typing this I'm considering checking my phone to see if anyone's texted me (or if, more likely, my cameraphone has been operating all this time without my knowledge or initiative, taking pictures from the inside of my pants pocket that show up as black spaces), because I've got it on silent, so as to avoid disturbing anyone. However much I might have once been snarky about such ideas, the fact is that my phone is a part of me, even if I don't want it to be.
I was kinda hoping that (as with Digital Detroit and the author's references to Bob Dylan, Lester Bangs, and other tangentially-related Detroit-pop-culture ephemera) we'd get a chance to discuss James Brown, the most famous son of Augusta, Georgia, but he doesn't come up, because really the book isn't actually about Augusta, or Augustas. It's about this network that we've all bought into, one that has become corporatized and overrun with services and apps that "offer" freedom but really exist to track us: our habits and search histories and buying trends and fetishes that we don't tell anyone about, and so on. In a university where the email is through Google, the software is provided by Adobe, and the drinks by Coca-Cola, it's not just the web that is a corporate wasteland, beholden to signs (literal and figurative) that we do not have agency over our own actions. Governance might very well be best when it governs least, but try telling that to either side of the aisle (for all their talk of "smaller government," the Republicans under Bush probably caused the most significant growth of bureaucracy since the Second World War. Now most of the top administration officials are yukking it up on Fox News, talking about Obama being the second coming of Hitler or something. Go figure). In the future, we're gonna have to guard our shit a little better, I guess. But don't worry, there's an app for that....
Saturday, November 8, 2014
Lipstick Traces (re: Digital Detroit)
It really didn't occur to me until after class Wednesday (and I was dissatisfied with the entry on Wikipedia about it, which is why I'm not posting this to the class's Facebook page), but in a lot of ways Digital Detroit reminded me of Greil Marcus' book Lipstick Traces: A Secret History of the Twentieth Century, which came out in 1989/1990-ish and which I discovered on my local library's shelf a few years ago (it has since been deleted from the collection because I was the only person who ever checked it out, apparently).
The book traces the various cultural and artistic movements throughout the century that, in Marcus' view, left very little evidence of their having existed. A lot of the work was "in the moment," and the moment was fleeting (as in the case of the French Situationists or the May 1968 revolts) or ignored in the face of more pressing concerns (the Dada art movement in the midst of the First World War). All of these things he explored as a way of talking about the punk rock movement of the mid-Seventies, specifically the Sex Pistols and their brief life (from about the end of 1975 until the American tour in January 1978, after which Johnny Rotten left the group and manager Malcolm McLaren tried to continue to cash in on the uproar over punk, but then Sid Vicious died in 1979).
Digital Detroit has an abundance of pop-culture references, ways in which the author thinks about the city of Detroit through the artifacts he uncovers (a Bob Dylan concert in town in 1965, the author's reading of Creem before even thinking about coming to Detroit, etc.). In much the same way that Jeff Rice tries to connect this cultural ephemera to his conception of Detroit, so Marcus tries to connect or suggest connections between the various art movements he cites and the brief flicker of punk rock in its initial stages (back when the movement wasn't yet codified by leather jackets, breakneck rhythms, and odd hairstyles). As John Lydon (the former Johnny Rotten) said in his memoir, they put safety pins in their clothes because the clothes were falling apart and they couldn't afford new ones, not as a fashion statement.
I think the argument Marcus was making (and which is echoed by Rice) is that these movements, however brief or "insubstantial" or unimportant in the grand scheme of things, did leave their traces in the way we relate to certain things (like how Rice relates to the Maccabees building, once the site of a secret society whose exact purpose might not be evident anymore). It goes back to the idea of connectivity, that nothing is ever really "lost" on the internet. Marcus recently came out with a new book, The History of Rock and Roll in Ten Songs, which talks about songs and artists who might not be obvious contenders for discussion in some people's minds, but which show aspects of the history of popular music in the last century that we should pay attention to. I haven't got the time right now to read that book, unfortunately (I did read his entry on Joy Division's song "Transmission," at least, before realizing I needed to put more time into readings for my classes, and so returned the book to my local library), but I like what Marcus does in all his work (highlighting things that we might have missed on the first listen, the first encounter with a piece of art or literature or film). Like in the case of Rice, I might not always buy that such a connection exists in the works Marcus cites, but it's never a dull read.
Monday, November 3, 2014
Invisible Cities
In 2012, I got the chance to travel to New Orleans, Louisiana, for a Jeopardy tryout that was being held there for folks who did well enough on the online test back in February of that year to be considered for the show. It was August, so New Orleans was particularly muggy even early in the morning (I remember going outside one day at about nine in the morning and being alarmed at how hot it already was), and from our hotel room in East New Orleans we had a bit of a journey to make to get to Canal Street, the main street of the city. Now, I'd been to cities before (New York once in 1997, Washington DC a couple of times, Atlanta enough to know that I didn't even want to think about living there, and Greenville if you want to count it as a "city" compared to the other ones), but New Orleans was different. For one thing, it took a hella long time to drive there (duties for driving split between my sister and me, my future brother-in-law along for the ride as it was a month before the planned wedding and this was the closest they could afford to a honeymoon). For another, we were coming to a city that, between the three of us, we knew almost nothing about. I had read The Moviegoer years back (wasn't yet in my Walker Percy appreciation phase; the trip to New Orleans helped), and also A Confederacy of Dunces, but apart from jazz music and Hurricane Katrina, I was woefully ignorant about the Crescent City.
It's interesting how, in Digital Detroit, Jeff Rice addresses the idea of cities having narratives, and how those narratives affect our perceptions of place, because when I got to New Orleans there was still evidence of Katrina's destruction, either manifested in ruins and abandoned buildings, or in the psyche of people we met, the people whose livelihoods depended on bringing people in to see their city not as a casualty but as a phoenix, rising again from lowdown no-good times. In order to get to Canal Street from our hotel, we had to drive past a crumbling mess of what had once been a building, lorded over by construction workers in hard hats working through the rubble with power tools and vehicles; it didn't matter whether the building had been demolished for some non-Katrina-related reason (after all, we visited right before the seventh anniversary of the storm; it would seem an awfully long time for something to remain a heaping pile of rubble only now being sorted over); either way, it came to symbolize in my mind the work that still needed to be done, to rebuild not just the city but its people. The narrative of New Orleans had once been as the birthplace of jazz, as the home of Mardi Gras, as a destination for sin and depravity deep in the conservative Bible Belt; now it was the city whose destruction at the hands of an unimaginable force uprooted more than a third of its residents, turned its sporting arena into a ghetto for the unwashed and unloved, and merited little more than a cursory glance from our then-President, who proceeded to keep on flying and ignoring the problems of a city that neither he nor his family gave a damn about because its people weren't his kind of people (yes, I just referenced Kanye West in a Digital Humanities context. I am ashamed).
Detroit is another victim of narrative, this one of racial disharmony and corruption on all levels that has left the ordinary citizens as lone survivors of an apocalyptic terror that really took hold in the era of Bush and bailouts, but whose seeds of destruction were sown long, long ago. I read Charlie LeDuff's searing look at his hometown over the summer (definitely not "light, fun beach reading"), and I can grasp a little of what makes Detroit unique in the annals of "failed cities." Unlike the ghost towns of the American West, Detroit was tied to something a little more labor-intensive than gold-mining (manufacturing cars for the world), yet it fell prey to the same forces that doomed the silver and gold towns that once dotted the landscape between here and San Francisco: people found a better way to do what Detroit did best, and the demand went with it. Where the ghost towns ran out of precious minerals, Detroit ran out of interest from the outside world in what it was selling. Henry Ford would be turning over in his grave right now if he could see Detroit.
Which isn't to say that Detroit didn't deserve it, in some sense: the concept of manufacturing on an assembly line, with little or no allowances made for the workers, is horrifying because of its automated nature (and attractive to the very same capitalists we see celebrated in glamorous profiles in business magazines, all because a lot of them manage to outsource that kind of labor far from prying eyes). When a city, when a factory town, is tied to a mode of production that is ultimately doomed, with no allowances for change, it's hard to muster much in the way of sympathy. This isn't to say that I want to see Detroit razed to the ground, left to crumble like an ancient Mayan civilization, the decals on the factory doors serving as hieroglyphs for future scholars to puzzle over. But there's a good chance it'll happen.
Rice mentions Italo Calvino's Invisible Cities, a work that I'm familiar with thanks to my World Lit class when I was an undergrad. In the book, you have a series of cities described (some briefly, some in detail) over the course of the work (it's hard to classify; it's a work of fiction but not necessarily a novel, except maybe in a modernist sense). When I went to New Orleans, I had some experience with cities real and imagined (I was an avid reader of works that took place in big cities, because as a small-town boy I had to believe there was a bigger, more exciting world out there than the one offered by my humble home town), but this was one of the few times I'd been in a real city, and it was overwhelming. I remember beginning to walk down Bourbon Street on a Sunday afternoon, past all the sex shops and racy souvenirs (and all the people walking down the street), and I had the very "small town" response that I would've thought I was too sophisticated for: I felt out of place. I made a joke to my sister later, about how all those people should've "been in church," but the truth is a lot of them probably *did* come straight from church, at least the well-dressed ones.
Growing up in a small town is frustrating when you have dreams of greater, grander things, but I guess it helps you appreciate the majesty of big cities more than you might if you grew up right in the heart of Manhattan, New Orleans, or Atlanta. Walhalla, my home town, will never lay claim to being a sophisticated city; you could drop the main drag of town, the street where most of the businesses are located, right in the center of Manhattan and never touch Harlem to the north or Wall Street to the south (though you'd have an invasion of hipsters from Brooklyn because of all the antique shops that Walhalla has. They'd say they're being ironic, but I think they'd be genuinely thrilled at our stock of vinyl records and crappy, broke-down toys). I've always had a problem envisioning cities bigger than Walhalla, more contained, more spread-out but not in a country way (i.e., you have parts of Walhalla that you get to only by driving down lonely-looking back streets full of grass, trees, and other non-urban trappings). I've been to big cities that stretched on forever, that included neighborhoods I might be best advised to steer clear of (either because of my ethnicity or because of my gullibility when dealing with someone looking to separate me from whatever cash I have on hand).
When I went to the University of South Carolina, I was right in the heart of an urban environment. I lived in a dorm that was in the middle of campus, yet not far removed from the downtown section, right around the State House; there wasn't much beyond a news stand, a CD shop, and some boarded-up buildings, but I remember taking epic walks around the area surrounding the State House (usually in daytime, though I did like a night stroll from time to time). I didn't drive yet, but there was no need; most everything was within a short (or long) walk from my dorm. There were parts of Columbia that I couldn't get to by walking, of course, but that's what I had friends with cars for. I'm sure a lot of the time, people who worked downtown or knew some of the dangers posed by a big city (even one like Columbia) wondered who the crazy-ass white boy wandering around was, but truthfully, except for a few times, I never really felt any danger or unease. To this day, my narrative of Columbia is based on those jaunts I took, especially when I should've been studying for class instead, but I don't regret it, really.
So if a city is a database, its various inhabitants can create a narrative to suit their purposes. For me, my narratives are thus:
New York: imposing, overwhelming, exciting, not too terrifying to be in a tall skyscraper (this was pre-9/11).
Washington DC: spread-out, old-fashioned architecture, good touristy sights (Air and Space Museum), powerful but folksy.
Atlanta: murder to drive through, no idea where anything is, a nice place to visit but you couldn't pay me to live there.
New Orleans: reminded me of what a small-town kid I really am, exhilarating, bewildering, magical, scary, fantastic, I'd like to go back (but not in August; way too hot that time of year).
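Since I just called the city a database, here's a toy sketch in Python of what I mean. I'm no programmer, and the field names and values below are my own shorthand (nothing from Rice or Manovich), but it shows how the same set of records can yield two different "narratives" depending on what you select and how you arrange it:

    # Toy "city database" of my own impressions (values are made up for illustration).
    cities = [
        {"name": "New York",      "scale": "overwhelming", "verdict": "exciting"},
        {"name": "Washington DC", "scale": "spread-out",   "verdict": "powerful but folksy"},
        {"name": "Atlanta",       "scale": "sprawling",    "verdict": "wouldn't live there"},
        {"name": "New Orleans",   "scale": "overwhelming", "verdict": "magical and scary"},
    ]

    # Narrative #1: the travel-brochure version -- every city gets its one-line verdict.
    brochure = [f"{c['name']}: {c['verdict']}" for c in cities]

    # Narrative #2: the small-town-kid version -- only the cities that swallowed me whole.
    small_town_story = [c["name"] for c in cities if c["scale"] == "overwhelming"]

    print("\n".join(brochure))
    print("Cities that overwhelmed me:", ", ".join(small_town_story))

Same records either way; the "narrative" is just which ones you pull out and how you line them up, which is kind of the point.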
Tuesday, October 28, 2014
Life is a game/Game is a life
I was all ready for more unnecessary italicizing of ideas that seemed important to Manovich whenever we moved on to Ian Bogost's "Unit Operations"... excuse me, I apologize.
Anyway, Unit Operations: An Approach to Videogame Criticism has proven to be much less hateful than Manovich, precisely because it's about something that I think we can all relate to and something that was made possible by software...sorry, won't happen again. Bogost explores the idea of videogames, and how they relate to other media, in a pretty interesting way. And the attention is well-deserved.
We as a society are slow to embrace the idea that something we grew up with (and something that is so seemingly "current" that we're at a loss to consider that it has a history beyond our chronological introduction to it) could be worthy of scholarly discussion. Well, maybe it's just me; I never found myself thinking (in the midst of screwing up yet again in my attempt to get beyond the beginner level in Super Mario) "hey, I wonder what this says about society, and about our interaction with the game versus our interaction (or lack thereof) with other media." Cut me some slack, I was a pre-teen.
But I did grow up with videogames; we had the old-school Atari, and I recall fondly the badly pixelated thrills of games that required a joystick and which featured one button besides the one on top of the joystick, and if there was such a thing as "cheat codes" back then, I didn't know it (I've always felt like cheat codes were, ahem, cheating: you cheat the game, and the code cheats you by reducing your enjoyment of the game to figuring out off-the-beaten-path ways to beat it. I was a bit more law-and-order then, I guess). There was still a filter, of sorts, between the videogame onscreen and your real life, the one going on around you (and the one in which cheat codes were probably used against you, to be honest. It was the Reagan/Bush era, and the nostalgia/homoerotic love-fest Republicans have for that time bewilders me). The concept of an "immersive gaming experience" consisted of Tron, which is confusing as hell when you're a little kid and what you're watching is basically Jeff Bridges in a suit made of Nite-Lites. But nowadays, of course, the game interacts with your real life, in ways that would've seemed impossible to artists back then. I have never played a Wii (there's a fantastic Key & Peele sketch about that; it veers into NSFW territory towards the end, so I didn't post it to the group's Facebook page), but I have played Rock Band: it reduces the musicianship of people I admire (and Gene Simmons) to controls on a panel, albeit a guitar-shaped one. The experience of playing live music is turned into a game in which you collect points based on how "well" you "played," and the quotation marks are appropriate. However, the italicizing could be considered excessive on my part.
I thought the discussion of non-game games (i.e., simulations like The Sims or Star Wars Galaxies) was interesting because those games seem to re-define the purpose of videogames (i.e., the escape from reality that is such a draw for much of the stereotypical gaming set, the ones that aren't good with basic social interactions). Games have gone from fanciful journeys (hero-quests, to borrow some Joseph Campbell, because I too have seen Star Wars and will get around to The Hero With a Thousand Faces at some point) to almost blah recreations of the real world (or in the case of Galaxies, a mundane rendering of what was originally a more cosmic idea). At what point does the idea of "life as a game" cross over from "wow, this is exciting, I get to collect points and do things in real life that I could only do in games" to IRS Audit: The Game, in which you have to navigate the legal and fiduciary responsibilities that come with real-life situations?
Scott Pilgrim vs. the World, to my mind, is the best of the "videogame brought to cinema" movies because it's not actually based on a game; the source material is a graphic novel (which, like videogame movies, is a hybrid of two things: the comic book and the novel-like narrative structure, because a lot of comic books are one-and-done affairs while a graphic novel has the potential to grow over many issues. This is a gross simplification of both comic books and novels, of course, but it works for the example). In the film, Scott has to "battle" the ex-lovers of his current flame, Ramona, in videogame-style contests that recall for me the battles one would encounter in Mortal Kombat (all they needed to complete the illusion was the final "Finish him!" that confirmed MK's bloodlust in the eyes of concerned parents who, as usual, overreacted to something they didn't understand, much the same as with violent rap lyrics or over-the-top slasher films). The rules of real life (you can't go around fighting people, and when they die they don't increase your own chances of living or turn into coins) are broken throughout the film, because otherwise the film is just a typical romantic comedy with a pretty good soundtrack. In a game world, Scott can defeat the evil exes and inch closer to becoming the kind of guy Ramona can live with. Complications arise, of course, as in games. But the overall feel of the movie, hyper as it is, suggests a videogame with only one outcome: Scott gets the girl. In videogames, there are multiple ways the narrative can end, and even points where it can end before you reach the supposed conclusion (as I've learned when trying to tackle Tetris, you can't really win, you can only hope to keep going).
I've never really looked at videogames as being "worthy" of such critical approaches, but that's not because I overwhelmingly think they don't deserve it. I'd just never considered it, and while I don't buy into the premise that they are always worthy of such discussion (c'mon, Donkey Kong could probably be read as a Marxist text on the fetishization of empty barrels used to crush Italian plumbers, but that's a really awkward stretch), I do think it opens up a new world for serious discussion. I think, in true High Fidelity fashion, that we can be defined by our tastes in pop culture (though in HF it's more about individuals, not groups), and as a group we can be defined by the games we embrace as much as we can by the cinema, music, or (increasingly less likely) literature. Videogame studies also embraces a notion that I think we've been ignoring throughout the course: that the humanities aren't just literature. There's a philosophy behind even the simplest games, and I think we can try to discuss it (ahem, sorry), try to discuss it as seriously as we take the philosophy behind Moby-Dick or Star Wars. I just hope Manovich isn't there to italicize everything...
Anyway, Unit Operations: An Approach to Videogame Criticism has proven to be much less hateful than Manovich, precisely because it's about something that I think we can all relate to and something that was made possible by software...sorry, won't happen again. Bogost explores the idea of videogames, and how they relate to other media, in a pretty interesting way. And it's well-deserved.
We as a society are slow to embrace the idea that something we grew up with (and something that is so seemingly "current" that we're at a loss to consider that it has a history beyond our chronological introduction to it) could be worthy of scholarly discussion. Well, maybe it's just me; I never found myself thinking (in the midst of screwing up yet again to get beyond the basic beginner level in Super Mario) "hey, I wonder what this says about society, and about our interaction with the game versus our interaction (or lack thereof) with other mediums." Cut me some slack, I was a pre-teen.
But I did grow up with videogames, we had the old-school Atari and I recall fondly the badly pixilated thrills of games that required a joystick and which featured one button besides the one on top of the joystick, and if there was such a thing as "cheat codes" back then, I didn't know it (I've always felt like cheat codes were, ahem, cheating, both by you of the game and of the code by you by reducing your enjoyement of the game to figuring out ways to beat it that went off the beaten path. I was a bit more law-and-order then, I guess). There was still a filter, of sorts, between the videogame onscreen and your real life, the one going on around you (and the one in which cheat codes were probably used against you, to be honest. It was the Reagan/Bush era, and the nostalgia/homoerotic love-fest Republicans have for that time bewilders me). The concept of an "immersive gaming experience" consisted of Tron, which is confusing as hell when you're a little kid and what you're watching is basically Jeff Bridges in a suit made of Nite-Lites. But nowadays, of course, the game interacts with your real life, in ways that would've seemed impossible to artists back then. I have never played a Wii (there's a fantastic Key & Peele sketch about that, it veers into NSFW territory towards the end so I didn't post it to the group's Facebook page), but I have played Rock Band: it's reducing the musicianship of people I admire (and Gene Simmons) to controls on a panel, albeit a guitar-shaped one. The experience of playing live music is turned into a game in which you collect points based on how "well" you "played," and the quotation marks are appropriate. However, the italicizing could be considered excessive on my part.
I thought the discussion of non-game games (i.e., simulations like The Sims or Star Wars: Galaxies) was interesting because those games seem to re-define the purpose of videogames (i.e., the escape from reality that is such a conducive force for much of the stereotypical gaming set, the ones that aren't good with basic social interactions). Games have gone from fanciful journeys (hero-quests, to borrow some Joseph Campbell because I too have seen Star Wars and will get around to The Hero With a Thousand Faces at some point) to almost blah recreations of the real world (or in the case of Galaxies, a mundane rendering of what was originally a more cosmic idea). At what point does the idea of "life as a game" cross over from "wow, this is exciting, I get to collect points and do things in real life that I could only do in games" to IRS Audit:The Game, in which you have to navigate the legal and fiduciary respobsibilities that come with real life situations.
Scott Pilgrim Versus The World, to my mind, is the best of the "videogame brought to cinema" movies because it's not actually based on a game; the source material is a graphic novel (which, like videogame movies, is a hybrid of two things: the comic book and the novel-like narrative structure, because a lot of comic books are one-and-done affairs while a graphic novel has the potential to grow over many issues. This is a gross simplification of both comic books and novels, of course, but it works for the example). In the film, Scott has to "battle" the ex-lovers of his current flame, Ramona, in videogame-style contests that recall for me the battles one would encounter in Mortal Combat (all they needed to complete the illusion was the final "Finish him!" that confirmed MC's bloodlust in the eyes of concerned parents who, as usual, overreacted to something they didn't understand, much the same with violent rap lyrics or over-the-top slasher films). The rules of real life (you can't go around fighting people, when they die they don't increase your own chances of living nor turn into coins) are broken throughout the film, because otherwise the film is just a typical romantic comedy with a pretty good soundtrack. In a game world, Scott can defeat the evil exes and inch closer to becoming the kind of guy Ramona can live with. Complications arise, of course, as in games. But the overall feel of the movie, hyper as it is, suggests a videogame with only one outcome: Scott gets the girl. In videogames, there are multiple ways the narrative can end, and even points where it can end before you reach the supposed conclusion (as I've learned when trying to tackle Tetris, you can't really win, you can only hope to keep going).
I've never really looked at videogames as being "worthy" of such critical approaches, but that's not because I overwhelmingly think they don't deserve it. I'd just never considered it, and while I don't buy the premise that they are always worthy of such discussion (c'mon, Donkey Kong could probably be read as a Marxist text on the fetishization of empty barrels used to crush Italian plumbers, but that's a really awkward stretch), I do think it opens up a new world for serious discussion. I think, in true High Fidelity fashion, that we can be defined by our tastes in pop culture (though in HF it's more about individuals, not groups), and as a group we can be defined by the games we embrace as much as by the cinema, music, or (increasingly less likely) literature. Videogame studies also embraces a notion that I think we've been ignoring throughout the course: that the humanities isn't just literature. There's a philosophy behind even the simplest games, and I think we can try to discuss it (ahem, sorry) *try to discuss it* as seriously as we take the philosophy behind Moby-Dick or Star Wars. I just hope Manovich isn't there to italicize everything...
Tuesday, October 21, 2014
Manovich, Manovich, Manovich!
I started Software Takes Command thinking, "I can handle a book that has a fifty-page introduction, puts important or perceived-to-be-important ideas in italics, and asks the questions that not too many people ask anymore (like 'how does Word work?')." I'm not so sure now, but I have some definite ideas about what this book tries to say.
First off, let me say this: Apple (helmed by the evil Steve Jobs, even after death) and Microsoft have made their living off keeping us away from the actual viewing of hardware and software, i.e., "how the sausage is made." Apart from that candy-colored iMac from around the turn of the millennium, the one with its insides visible through a translucent body, both companies have made it a priority to keep the user at a safe distance. This is understandable from a business standpoint (if a would-be copycat can't easily take the product apart to figure out how it works, said intellectual theft is a lot less likely to succeed), but it also seems to be the real-life Revenge of the Nerds that the films only hinted at. The IT guy is the most important figure in any company, because he (or she, to be politically correct) knows what to do when everyone's computers start misbehavin'. Sleek designs and "wowee zowee!" graphics on our phones (well, not mine, I'm the one guy in America who still has a flip phone) keep us from asking the pertinent question "how does this work?" And that's usually how progress rolls.
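A minimal sketch of what that distance looks like in practice (the code is mine, and the layer list in the comments is a simplification, not a map of any particular operating system): one friendly line of high-level code, and everything underneath it kept politely out of view.

def save_document(path, text):
    # What the user "sees": a document gets saved. One line, no sausage.
    with open(path, "w", encoding="utf-8") as f:   # the language runtime hands back a tidy file object
        f.write(text)                              # the bytes disappear into a buffer
    # Hidden below that call: the file-system code, the disk driver, the hardware
    # controller -- none of which the interface ever invites you to look at.

save_document("demo.txt", "How does Word work?")

That's the whole trick: the question "how does this work?" never comes up, because nothing on the surface suggests there's anything to ask.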
Think back to all the various "new things" technology has given us just in the last half of the twentieth century, and how "new" and "exciting" they once were compared to how they are viewed now that they're commonplace. I think Kubrick's 2001: A Space Odyssey plays differently today than it did in 1968, just because we're immune, through media saturation, to the wonders of outer space presented in the film. If a filmmaker today tried to get away with a space shuttle docking with the International Space Station for a good chunk of screen time (not to mention setting it to the "Blue Danube Waltz"), he'd be laughed out of Hollywood. Trains were the space shuttles and rockets of the nineteenth century, as Manovich alludes to; gradually, through constant use, the novelty wore off and we stopped asking "how does steam cause the train to run?" Luckily for us (well, some of us), Manovich is here to ask those questions about software.
Some of his insights are worth considering, but I feel like we have to slog through an awful lot of "yes, I know how that works, but thank you for going into exhausting detail for me." I don't want to bash Manovich (I just love that name, it sounds like some crazy-eyed inventor of the nineteenth century), so I'll restrain myself and move on to the next thing that the book got me thinking about.
In the last chapter (which, full disclosure, I haven't finished as of this posting), Manovich describes the incorporation of software advances into film-making, and here's where I get to show off my Film Studies minor (money well spent, state of South Carolina!). What was interesting to me was how Manovich highlighted the use of computer-generated imagery (CGI) in the 1990s, when the idea was both to leave the audience stunned at how clever and amazing said effects were and not to overwhelm them with questions of "how did they do that," i.e., not to break the audience's willing suspension of disbelief.
Fiction, whether in film or any other medium, relies on suspension of disbelief: yes, we know inherently that we're simply seeing still images sped up to suggest movement on the part of human (or cartoon) characters, just as we know the words on a book's page don't really mean that the person we're reading about (be it Captain Ahab, Beowulf, or Kim Kardashian) has ever actually existed. There have been movements to call attention to such artifice, of course, and each time it's done the practitioners of whatever "shocking revelation about the nature of fiction/cinema/art/whatever" pat themselves on the back and think "gee, weren't we clever?" But the truth is, art needs that disbelief to be present and also to be suspended, at least until the audience is lured in and can't turn away. And movie effects have been a large part of that.
In the olden days, for instance, a monster in a horror film was just some poor schmuck (probably the second cousin or brother-in-law of the director) stuffed into a suit and told to walk around with a chainsaw or axe in hand to threaten the idiotic teenagers who thought it'd be a good idea to spend a night in the haunted house, abandoned campground, etc. But effects that seem tame today could be revolutionary at the time, pointing to new avenues for artistic expression (2001 helped George Lucas realize his vision for Star Wars). The xenomorph in the original Alien (1979) was just a guy in a suit, but because of the audience's willing suspension of disbelief, we could believe that this creature was real (and that we wanted nothing to do with it). With the advent of CGI, it was believed that more realistic monsters and creatures could be imagined, without their artificial nature distracting the audience. Of course it meant that actors were usually reacting to something on a green screen, but the poor brother-in-law of the director got to sit down and relax until someone needed to make a coffee-and-cocaine run for the crew.
But as Manovich points out, there's been a shift in the thinking: movies like Sin City thrive not on the seamless integration of CGI effects but on the very highlighting of that artifice for dramatic effect (see, the book had an effect on my writing!). By calling the audience's attention to the artificiality of what's onscreen, they play with the notion that disbelief needs realism in order to be suspended.
Not to exhaust the point, but consider a film that's been made and remade a couple of times: The Thing (1951, 1982, 2011). In the original version, directed by Howard Hawks (yeah, I know the credits say Christian Nyby, but it's a Hawks movie through and through), the alien creature that threatens the camp of intrepid Arctic scientists is basically a giant carrot, played by James Arness as a walking, grunting "Other" that can be defeated using good old American know-how (and electricity). In John Carpenter's version, and the "prequel" that came out almost thirty years after it, the alien is able to assume the identities of the guys in the camp, one at a time, forming a perfect copy that is convincing right up until it's revealed as such. In this case, special effects of the real-world, practical kind play the bad guy or guys: characters who are revealed as "Things" stretch and twist into grotesque manifestations of your worst nightmare about having your body torn apart. The most recent version (which I haven't seen much of, beyond the trailer) does this as well, but through the "magic" of CGI. We have the classic attempt to integrate CGI effects so that we notice them but aren't so distracted by them that we forget what's going on onscreen (at least that's the filmmaker's hope). In that sense, the 2011 version is not only a return to the premise of Carpenter's version, it's also a return to the "antiquated" idea of CGI being integrated into the film and yet still noticeable. Once again, state of South Carolina, your money was well spent on my Film Studies minor.
As someone who's interested in art, it matters to me whether CGI effects dominate a film (like Sin City), calling attention to themselves, or try to blend in (the most recent batch of Star Wars films) without always succeeding. No one is saying that having an actual person (again, the poor brother-in-law of the director) in a monster costume is infinitely better than having that same monster rendered by CGI (well, some people are; I think it's a case-by-case basis, myself), but software continues to redefine the logic and physics of film-making, and it will be interesting to see what sticks and what falls by the wayside in terms of computer effects.
Monday, October 13, 2014
How We Think
I have never done illegal narcotics, nor too many legal ones, in my lifetime. There was that one time I walked past a car whose occupants were obviously smoking a joint (the work buddy I was with helpfully pointed out that that's what pot smells like) and got a little contact high, but beyond that and the occasional alcoholic experience, I haven't much time for the drugs, as the kids might say. It's not that I necessarily have a moral stand against an individual's right to enjoy a hit of reefer every now and then, and I honestly think the drug "war" would be a lot less wasteful if we legalized some stuff that isn't legal now (I'm sure the drug cartels will find some other way to fund their operations, perhaps by branching off into highly addictive coffee beans). I just know that my mind is weird anyways, without any outside help.
How We Think by N. Katherine Hayles is a bit like my explanation of why I don't do drugs, in that it chronicles in its latter stages (heh-heh, "chronic"-les...sorry) the rise of more computer-friendly fiction. There's a huge argument going on in the book about narrative versus database (i.e., the stuff we study as English and humanities majors versus the stuff we use to store the info we've gathered), and I have to say that I was intrigued by the discussion of the two works Hayles cites (The Raw Shark Texts and Only Revolutions), because I tend to gravitate towards the odder end of the literary spectrum, dipping a toe in with some authors while embracing some of the more fantastical writers outright (Pynchon, some DeLillo, William S. Burroughs, Vonnegut). I don't always understand what I'm reading (at least I'm honest), but I find the journey enjoyable in and of itself.
I couldn't help but think of Burroughs' "Cut-Up Trilogy," three books that he fashioned out of cut-up words and phrases from other publications, when I started reading about Raw Shark. I haven't read any of the trilogy, nor the Raw Shark novel, but I think that sort of experimentation, playing with narrative expectations, can be exciting (well, occasionally frustrating, but exciting too). I read Naked Lunch over the summer, and straight through; when I read on Wikipedia that Burroughs had meant for people to be able to start wherever they wanted and skip around as they chose (a sort of junkie Choose Your Own Adventure), I wondered if I'd read the book wrong, or if there was *any* right way to read it (this was Wikipedia, of course, and someone could have added that detail as a goof or an inaccuracy). There's a certain sense of playfulness in the descriptions of both Raw Shark and Only Revolutions, as if, while both works have their seriousness, they have an anarchic side too, something that deviates from the path. Something that makes the reader less passive than he would normally be.
All that said, I might be hesitant to actually try and *read* either of the books mentioned. I remember loving the movie Trainspotting when I saw it (still the best depiction of Scottish heroin junkies I've ever seen, by the way), and I was excited when I found the novel that the film was based on at a local bookstore. I got it home, turned to the first page, and was gobsmacked by the heavy Scottish dialect of the first few pages. I literally got a headache (I'm not exaggerating for comic effect). I stuck with the book, however, because (thankfully) it was a multiple-narrator novel (really a collection of short stories that fit together, from differing points of view) so that the heavy Scottish dialect parts weren't the sole part of the story. Several years later, I turned to the opening page of Finnegans Wake and decided after reading a couple of lines that James Joyce was batshit crazy, so I stopped.
I think, in the clash of narrative v. database, we'll see a happy (or unhappy) marriage of the two as time and technology progress (at least until Skynet wipes out humanity). As Hayles argues (and I agree with her), we are a narrative-based species, always searching for the story of how we came to be, or how we came to live in the places that we live, or why it is that we die and what happens when we do. The ghost in the machine may be our need for a narrative, after all; databases store information, and they do a damn good job of it, but so far they can't tell a story. But narratives let us down too, in need of constant revision as more facts become known (just look at the narrative the NFL was trying to sell us on the whole Ray Rice incident, before the second video came out, to cite a real-world example). You constantly hear "narrative" used with cynical connotations (such as "what is our narrative for the way events unfolded in Iraq"), but it's one of our defining characteristics. That being said, a database can provide information that wasn't known when we crafted our original narrative. It's a brave new world of narrative-database hybrids, as represented by the two works Hayles cites. It may be a bit over my head, but I'm on board with at least trying.
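To put the distinction in the bluntest possible terms, here's a tiny sketch (Python, with the facts and the phrasing invented by me, not drawn from Hayles): the database half just holds rows and answers queries in whatever order you ask; the narrative half is a shape (first, then, finally) that we bolt on from the outside, because the table never stored a plot.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (year INTEGER, what TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)", [
    (1968, "2001: A Space Odyssey hits theaters"),
    (1977, "Star Wars hits theaters"),
    (1982, "Carpenter remakes The Thing"),
])

# The database will answer any query, in any order, with no plot whatsoever:
rows = conn.execute("SELECT year, what FROM events ORDER BY year").fetchall()

# The narrative is the part we impose: a beginning, a middle, and an end.
story = "First {}, then {}, and finally {}.".format(rows[0][1], rows[1][1], rows[2][1])
print(story)

The hybrids Hayles describes are interesting precisely because they try to keep both halves in the same object, instead of leaving all the story-making to us.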