Repercussions of Amassed Data

I had the pleasure of meeting Mél Hogan while she was doing her postdoctoral work at CU Boulder. I think her research area is vital, though it’s difficult to summarize. But that won’t stop me, so here goes: investigating how one can “account for the ways in which the perceived immateriality and weightlessness of our data is in fact with immense humanistic, environmental, political, and ethical repercussions” (The Archive as Dumpster).

Her article “Data flows and water woes: The Utah Data Center” is a good entry point for this line of inquiry. It explores the above-quoted concerns (humanistic, environmental, political, and ethical) at the NSA’s Utah Data Center, near Bluffdale, which has suffered outages and other operational setbacks since construction. These initial failures are themselves illuminating, but even assuming such disruptions are minimized in the future, the following excerpt clarifies a few of the material constraints of the effort:

Once restored, the expected yearly maintenance bill, including water, is to be $20 million (Berkes, 2013). According to The Salt Lake Tribune, Bluffdale struck a deal with the NSA, which remains in effect until 2021; the city sold water at rates below the state average in exchange for the promise of economic growth that the new waterlines paid for by the NSA would purportedly bring to the area (Carlisle, 2014; McMillan, 2014). The volume of water required to propel the surveillance machine also invariably points to the center’s infrastructural precarity. Not only is this kind of water consumption unsustainable, but the NSA’s dependence on it renders its facilities vulnerable at a juncture at which the digital, ephemeral, and cloud-like qualities are literally brought back down to earth. Because the Utah Data Center plans to draw on water provided by the Jordan Valley River Conservancy District, activists hope that a state law can be passed banning this partnership (Wolverton, 2014), thus disabling the center’s activities.

As hinted at in a previous post on Lanier, I often encounter a sort of breathlessness in descriptions of cloud-based reserves of data and computational prowess. Reflecting on the material conditions of these operations, as well as their inevitable failures and inefficiencies (e.g. the apparently beleaguered Twitter archive at the Library of Congress, though I would be more interested in learning about the constraints and stratagems of private operations), is a wise counterbalance that can help refocus discussions on the humanistic repercussions of such operations. And to be sure, I would not exclude archives from that scrutiny.

Hannah Sullivan, The Work of Revision

[Photographs of two excerpts from The Work of Revision]

I’ve been reading Hannah Sullivan’s The Work of Revision, and really enjoying it. Here are a couple of excerpts from her chapter on T.S. Eliot’s The Waste Land, centering on Ezra Pound’s editorial input on the poem.

She makes a good case that Eliot’s style of revision reflected a profoundly different aesthetic from the excisive revisions that Pound (apparently vigorously) put forward. It’s a bit of a counter-narrative to the story of a team-up; rather, Pound’s revisions cut against Eliot’s original vision, creating a poem somewhat apart from both of them, though perhaps more in Pound’s camp.

Book Review: Racing the Beam [re-post]

A re-post from the Preserving Games blog, February 12, 2010.

Montfort, N., & Bogost, I. (2009). Racing the Beam: The Atari Video Computer System. Platform Studies. Cambridge, Massachusetts: MIT Press.


Just want to give a brief rundown on a really great read I’ve come across. MIT Press has started a “Platform Studies” book series, the idea being to examine a platform and its technologies to understand how they inform creative work done on that platform. A platform could be a gaming console, a programming language, an operating system, or even the Web itself, so long as it is the platform upon which creative work is being made. The platform in this case is the Atari Video Computer System, the first Atari home system, later referred to as the Atari 2600 in the wake of the newer Atari 5200.

The authors examine the Atari VCS as a computing system and take care to elaborate the unique (really, exceptionally odd) constraints found there. Six games are investigated in chronological order, giving the reader a sense of the programming community’s advancing skill and knowledge of the system: Combat (1977), Adventure (1980), Yars’ Revenge (1981), Pac-Man (1982), Pitfall! (1982), and Star Wars: The Empire Strikes Back (1982).

The most prominent technical details are explained in the first few chapters, and they illuminate each game’s construction as an exceptional act of engineering and ingenuity. Just to give an idea of the unique affordances of the Atari VCS, here are a few of the most characteristic details:

  • The custom sound and graphics chip, the Television Interface Adapter (TIA), is designed specifically to work with a TV’s CRT electron beam. The beam sprays electrons onto the inside of the TV screen, left to right, one horizontal scan line at a time, taking a brief break at the end of each line (the “horizontal blank”) and a longer break after the bottom line, before resetting to the top and starting over again (the “vertical blank”). A programmer has only those tiny breaks to send instructions to the TIA, and really only the vertical break provides enough time to run any game logic.
  • It was imperative that game logic run during these breaks because the Atari VCS had no room for a video buffer. There was no way to store an image of the next frame of the game; all graphics instructions had to be written in real time (and sound instructions dropped in during one of the breaks). A designer or programmer could choose to restrict the visual field of the game in exchange for more time to run game logic; Pitfall! is an example of this.
  • This means there are no pixels on the Atari VCS. Pixels require horizontal and vertical planes, but the Atari VCS knows only horizontal scan lines; there is no logical vertical division at all for the computational system. As the beam moves across the screen, a programmer can send a signal to one of the TIA’s registers to change the color. Thus the “pixels” are really a measure of time (the clock counts of the processor) and not of space (see the sketch after this list).
  • Sprites, such as they existed on the Atari VCS, were built into the system’s hardware. Programmers had five: two player sprites, two missiles, and one ball. Reworking that setup (clearly designed for Pong and the like) into something like Adventure, Pitfall!, or even the Pac-Man port is an amazing feat.
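
To make that timing concrete, here is a minimal conceptual sketch in Python (not something the book provides, and not real VCS code, which is 6502 assembly writing TIA registers such as WSYNC and COLUBK): it only models the idea that the display is generated one scan line at a time and that a “pixel” boundary is a moment in time, a CPU cycle count, rather than an address in a frame buffer. The cycle and line counts are approximate NTSC values.

```python
# Conceptual model only: the display is produced one scan line at a time,
# and horizontal position is a function of *when* (in CPU cycles) a register
# write happens, not of any pixel address.

CPU_CYCLES_PER_LINE = 76   # ~228 color clocks per line / 3 color clocks per CPU cycle
VISIBLE_LINES = 192        # visible NTSC scan lines (of 262 total per frame)

def kernel(line, background_color):
    """Pretend 'kernel' for one scan line: change the background color
    register partway through the line. On real hardware this would be a
    store to COLUBK timed against the beam."""
    row = []
    color = background_color
    for cycle in range(CPU_CYCLES_PER_LINE):
        if cycle == 30:        # a register write at cycle 30 ...
            color = "GOLD"     # ... changes the color from here to the right
        row.append(color)
    return row

def draw_frame():
    # There is no frame buffer: each line is recomputed as the beam draws it.
    frame = []
    for line in range(VISIBLE_LINES):
        # Game logic would normally run only during the vertical blank,
        # before this loop; here we just race the beam line by line.
        frame.append(kernel(line, "BLUE"))
    return frame

if __name__ == "__main__":
    frame = draw_frame()
    # The "pixel" boundary at cycle 30 exists only as a moment in time.
    print(frame[0][29], "->", frame[0][30])   # BLUE -> GOLD
```

Real kernels have to juggle sprite positioning, playfield updates, and WSYNC waits within those same 76 cycles per line, which is where much of the ingenuity the book documents comes in.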

The book doesn’t refrain from the technical. I could have used even more elaboration than what is presented, but after a certain point the book would turn into an academic or technical tome (not that there’s anything wrong with that), so I appreciate the fine line walked here. The authors succeed in illuminating the technical constraints well enough for the general reader to understand the quality of the engineering solutions being described. They also leave room to discuss the cultural significance of the platform and to reflect on how the mechanics and aesthetics of these Atari titles have informed the genres and gameplay of today.

Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom [re-post]

A re-post from the Preserving Games blog, January 24, 2010.

Hutchison, A. (2008). Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom. Game Studies, 8(1). Retrieved from http://gamestudies.org/0801/articles/hutch

This 2008 Game Studies article examines the effect that technology (the “techno-historic” context of a game work) has on game aesthetics. The author defines “game aesthetics” as “the combination of the audio-visual rendering aspects and gameplay and narrative/fictional aspects of a game experience.” It is important to note that audio-visual aspects are included in this definition alongside the narrative/fictional components, because the author later argues that advancing audio-visual technology will play an important role in advancing the narrative aspect of games.

The article begins with a comparison of two iconic computer games of the mid-1990s: Myst and Doom. Specifically, it examines each game’s design response to the technological limitations of the personal computers of the time. Very briefly, we see that Myst takes the “slow and high road” to rendering and first-person immersion, while Doom adopts the “fast and low road.” As the author explains, each response was prompted by the limits of the rendering a personal computer could perform at the time. For its part, Myst simply skips real-time 3D rendering and uses only pre-rendered, impeccably crafted (for the time) images to move the player through the world. Minor exceptions occur when QuickTime video is cleverly overlaid onto these images to animate a butterfly, a bug, a moving wheel, and so on. This overall effect very much informs the game’s aesthetic, as anyone who played the original can recall. Myst is a quiet, still, contemplative, and mysterious world. Continuous and looping sound is crucial to the identity of the world and the player’s immersion. Nearly every visual element is important and serves a purpose; the designers could not afford to draw scenes extraneous to the gameplay. The player’s observation of the available scenes is key, and the player can generally be assured that every element in the Myst world warrants some kind of attention. Hardware limitations of the time, such as the slow read times of most CD-ROM drives, reinforce this slow, methodical gameplay and visual aesthetic.

Doom, by contrast, uses real-time rendering at the expense of visual nuance and detail. Doom achieves immersion through visceral and immediate responsiveness, and its aesthetic is one of quick action and relentless urgency. The low resolution of the art and characters is compensated for by the quick passing of those textures and objects, and by the near-constant survival crisis at hand. Redundancy of visual elements and spaces is not an issue: the player can face down hordes of identical opponents in similar spaces (sometimes the exact same space) and not mind at all, because the dynamism of the gameplay is engaging enough to allow such repetition. Pac-Man had the same strength.

From this comparison the author goes on to speculate about how techno-historic limitations inform aesthetics in general, and whether the increasing capacity of personal computers to render audio-visual components in extreme, real-time detail will inform the narrative/fictional aspects of games as well. One needs only a passing familiarity with games to know that this aspect of games has been widely disparaged in the media and in some academic writing. Here are some quotes the author uses to characterize the degenerative trend of popular media and the game industry’s complicity in the coming intellectual apocalypse:

Perhaps lending strength to this phenomenon is a current popular culture stylistic trend which emphasises “spectacle” over narrative and gameplay. Peter Lunenfeld has identified this broad movement in popular culture generally:

Our culture has evacuated narrative from large swaths of mass media. Pornography, video games, and the dominant effects-driven, high concept Hollywood spectaculars are all essentially narrative-free: a succession of money shots, twitch reflex action, and visceral thrills strung together in time without ever being unified by classic story structure (Lunenfeld, 2000, p.141).

And more specifically dealing with games:

“It is a paradox that, despite the lavish and quite expensive graphics of these productions, the player’s creative options are still as primitive as they were in 1976” (Aarseth, 1997, p.103).

Most interesting is the observation that richer media capabilities do not necessarily translate into glossier, more superficial renderings. Richer media can mean a more meaningful experience for the player: nuance and subtlety can be introduced, and more information-rich media can mean more powerfully conveyed characters and a more fully realized narrative.

On top of this, one can expand the definition of “story” and “narrative,” as id Software developer Tim Willits argues in this Gamasutra report:

“If you wrote about your feelings, about your excitement, the excitement you felt when new areas were uncovered [in Doom] — if you wrote it well, it would be a great story,” Willits says. “People call it a ‘bad story,’ because the paper story is only one part of the game narrative — and people focus on the paper story too much when they talk about the story of a game.”

Information, he maintains, is learned through experiences, and the experience of playing a game is what forms a narrative, by its nature. Delivering a story through the game experience is the “cornerstone” of id Software’s game design, and the key when developing new technology.

Whatever your opinion on what constitutes story and narrative in media, the author of this piece has made a compelling argument that advancing technical capabilities can directly inform the narrative/fictional aspect of a game’s aesthetics, and certainly have done so in the past.

Hardware gimmick or cultural innovation? [re-post]

A re-post from the Preserving Games blog, October 22, 2009.

Y. Aoyama and H. Izushi, “Hardware gimmick or cultural innovation? Technological, cultural, and social foundations of the Japanese video game industry,” Research Policy 32, no. 3 (2003): 423–444.

This article looks at the success of the Japanese video game industry and attempts to illuminate the unique factors behind it. Japan’s video game industry is especially remarkable given the dominance of “English language-based exportable cultural products” and the origin of the video game industry itself, which began in the US with Steve Russell’s programming of Spacewar! for the PDP-1 and Nolan Bushnell’s subsequent creation of Atari to market and sell such arcade games.

Mega Man, hero of the early NES platformers. His design has characteristics of the manga style.

The authors give a history of the industry and observe Nintendo’s very early interest and involvement in electronic toy games, beginning as early as the 1960s with the emerging popularity of shooting games that used optical sensors. Nintendo was able to recruit technical expertise from consumer electronics firms, which provided it with early successes like Game & Watch and the Color TV-Game (totally cool old ad at gamepressure). But Nintendo’s historic rise in the console market with both the Famicom and the NES was due in no small part to its attention to quality software; the company made sure to foster in-house work (Donkey Kong, Super Mario Bros.) and to maintain alliances with outside game developers.

After the mid-1990s, Nintendo faltered by retaining cartridges for its games rather than adopting the CD-ROM; this, among other factors, allowed Sony to rise in the market. The authors continue the brief history up to the approximate time of the article, but one main point can be drawn from the narrative: hardware and software are intricately linked, and success frequently hinges on a deep synchronicity between the two engineering pursuits. The authors go on to elaborate this point, emphasizing Nintendo’s early collaboration with domestic consumer electronics firms.

The article describes three types of software publishers:

  • in-house publishers of platform developers (e.g. Nintendo)
  • comprehensive software publishers with in-house capability for most development (e.g. Square)
  • publishers that act as producer/coordinator and outsource most functions (e.g. Enix)

What Went Wrong? A Survey of Problems in Game Development [re-post]

A re-post from the Preserving Games blog, October 19, 2009.

Fábio Petrillo et al., “What went wrong? A survey of problems in game development,” Computers in Entertainment 7, no. 1 (February 2009): 1-22.

This February 2009 article from ACM’s Computers in Entertainment magazine takes a look at the game industry and compares its difficulties to those of the larger software industry. Specifically, the authors analyze twenty postmortems from the archives of Gamasutra.com to characterize the problems that plague game development. I believe Gamasutra has discontinued this series, but postmortems are still published by its sister publication Game Developer.

A postmortem “designates a document that summarizes the project development experience, with a strong emphasis on the positive and negative aspects of the development cycle.” After reviewing the literature discussing problems present in the software industry, the authors analyze the problems described in the postmortems. The games covered and the problems identified and quantified appear in a table giving the number of occurrences and overall frequency of each problem. Note: sometime in the future (the Web 3.0 future?) I would provide a link to the actual dataset rather than a .PNG showing you a picture of it.

What Went Wrong Interview Table

The authors’ categories provide a helpful map of the issues that arise in a game development project. As they note, the most cited problems in this study are unrealistic or ambitious scope and feature creep, each appearing in 75% of the postmortems. Notable for game archivists is the 40% frequency of the lack-of-documentation problem as well. The authors note low occurrences for crunch time and going over budget (25%), both “said to be ‘universal.’” It’s difficult, however, to draw expansive conclusions from such a small dataset. Moreover, the postmortems were not team projects or collaboratively written; a single participant was responsible for each one. The authors usefully note other limitations to put the data in context.
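
As a side note on how the table’s numbers relate to each other, here is a tiny sketch of the tally; the problem labels and postmortem data are invented placeholders, not the authors’ actual dataset, and “frequency” here means the share of postmortems that mention a given problem.

```python
# Tally invented placeholder data in the same shape as the survey's table:
# each postmortem lists the problems it reported.
from collections import Counter

postmortems = {
    "Game A": {"unrealistic scope", "feature creep", "lack of documentation"},
    "Game B": {"unrealistic scope", "crunch time"},
    "Game C": {"feature creep", "tool problems"},
    "Game D": {"unrealistic scope", "feature creep", "over budget"},
}

counts = Counter(problem for problems in postmortems.values() for problem in problems)
total = len(postmortems)

for problem, occurrences in counts.most_common():
    # Frequency = share of postmortems that mention the problem.
    print(f"{problem}: {occurrences} occurrences, {occurrences / total:.0%} frequency")
```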

The authors conclude that the electronic games industry does indeed suffer from the problems of the larger software industry (overly ambitious plans and poor requirements analysis) as well as from woes peculiar to itself: being first to experiment with new technologies, tooling problems, and collaboration between disparate professionals, among others.

On a final note, the postmortems are still available at Gamasutra, and they are really fascinating reads. It becomes clear just how young an engineering and creative discipline digital game-making is, and how much fluctuation there is in how a game turns out. There are some great examples and stories there; the authors of this article cite quite a few of them.

PAWN and Producers

The Producer-Archive Workflow Network (PAWN) is a platform for handling the ingestion of artifacts into a long-term digital repository such as Fedora or DSpace. As such, it focuses on the producer-archive interaction portion of the Open Archival Information System (OAIS) reference model. It strives for flexibility in accommodating different producer-archive relationships, most likely found in a distributed system. For example, an archivist or repository manager may use PAWN to handle disparate types of data or package producers (manufacturers, individual scholars, students, etc.), each of whom will have different metadata to fill out before submitting a package for ingestion, and for whom the processing of individual metadata elements may differ. PAWN is part of a larger tool set being developed by ADAPT (An Approach to Digital Archiving and Preservation Technology).
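
To illustrate the general idea (a hypothetical sketch, not PAWN’s actual interface or data model; the producer types and field names are invented), one can imagine each class of producer being held to its own metadata template before a package is accepted for ingest:

```python
# Hypothetical producer-specific metadata templates, invented for illustration.
REQUIRED_FIELDS = {
    "manufacturer":       {"title", "manufacturer_name", "product_line", "date_donated"},
    "individual_scholar": {"title", "donor_name", "provenance_notes", "rights_statement"},
    "student":            {"title", "donor_name", "course", "date_donated"},
}

def validate_submission(producer_type: str, metadata: dict) -> list[str]:
    """Return the metadata fields still missing for this producer type."""
    required = REQUIRED_FIELDS.get(producer_type, set())
    return sorted(required - set(metadata))

if __name__ == "__main__":
    package = {"title": "Shrink-wrapped copy of Myst", "donor_name": "J. Doe"}
    missing = validate_submission("individual_scholar", package)
    print("Missing before ingest:", missing)
    # -> Missing before ingest: ['provenance_notes', 'rights_statement']
```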

PAWN seems most applicable in a distributed repository that sees submissions from a variety of different producers. It’s unlikely the Goodwill Computer Museum would need such flexibility for its own repository. That repository will be fairly centralized, maintaining multiple clients only within the building (and perhaps, in time, a few remote ones). Our producers will always be staff or trained volunteers. But the project does highlight that the museum will be seeing submissions from at least two different ‘producers’: the recycling department and individual donations, and perhaps institutional donations as well.

The recycling department ‘producer’ effectively makes the real producer anonymous. The only exception would be provenance information found on or in the artifact itself (stickers, names in books, disk storage, etc.). Even when such information is present, I can’t imagine the museum would be able to use it, as it is very likely outside the recycling department’s right to disclose it.

Mechanisms: An Annotation

Kirschenbaum, M. (2008). Mechanisms: New media and the forensic imagination. Cambridge: MIT Press.

Matthew Kirschenbaum, Associate Professor of English and Associate Director at the Maryland Institute for Technology in the Humanities (MITH), here examines digital media in the context of traditional textual studies and bibliography. Kirschenbaum presents forensic techniques for data recovery and investigation that reveal how digital media, typically assigned attributes like ephemerality, repeatability, and variability (what he terms a traditional “screen essentialism” attitude toward digital media), actually fulfill traditional bibliographic requirements of individualization, provenance, and inscription.

Central to understanding these qualities of digital media is an understanding of the affordances and technical mechanics of the dominant storage device of the last twenty or so years: the magnetic hard disk drive. Kirschenbaum reveals how data inscription on these devices (the magnetic fluxes recorded on the drive’s platters) can identify past events and previous inscriptions in a discrete spatial territory, much like the clues traditionally sought by textual scholars. The author distinguishes this forensic materiality from the more familiar formal materiality of digital media: the carefully controlled, highly engineered behavior we see on the screen. He elaborates on how software engineering and extensive error checking at every level of the computer work to turn magnetic fluxes into human-readable documents on the screen. Even at the level of formal materiality, many bibliographic and textual details are overlooked for lack of close inspection: multiple versions, multiple operating environments, actual textual differences between works, and so on.
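
To give a flavor of the kind of bit-level inspection this implies (my own sketch, not an example from the book, and the disk-image filenames are hypothetical), hashing a disk image sector by sector makes even a single differing bit between two nominally identical copies visible, which is part of what gives a disk image its individuality for bibliographic purposes:

```python
# Compare two raw disk images sector by sector via per-sector hashes.
import hashlib

SECTOR_SIZE = 256  # Apple II DOS 3.3 disks, like Mystery House's, use 256-byte sectors

def sector_hashes(path, sector_size=SECTOR_SIZE):
    """Return a SHA-256 hash for each sector of a raw disk image."""
    hashes = []
    with open(path, "rb") as image:
        while True:
            sector = image.read(sector_size)
            if not sector:
                break
            hashes.append(hashlib.sha256(sector).hexdigest())
    return hashes

if __name__ == "__main__":
    # Hypothetical filenames for two copies of the same disk.
    a = sector_hashes("mystery_house_copy_a.dsk")
    b = sector_hashes("mystery_house_copy_b.dsk")
    # Report the sectors where the two "identical" copies actually differ.
    diffs = [i for i, (ha, hb) in enumerate(zip(a, b)) if ha != hb]
    print("Differing sectors:", diffs)
```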

Three case studies illuminate these topics: a forensic and textual analysis of a Mystery House disk image, a bibliographic and historic look at the multiple versions of Afternoon: A Story by Michael Joyce, and a look at the social and textual transmissions of William Gibson’s “Agrippa.”

Kirschenbaum’s central argument is that the traditional characterization of electronic texts and media (fluid, repeatable, identical, ephemeral) is insufficient for bibliographic, preservationist, and textual purposes, and that the media themselves, upon closer examination, support none of these characterizations.