Dwarf Fortress Interviews

Dwarf Fortress screen

Before the week is out I wanted to post a link to the NYT interview with the Adams brothers, who design and build the incredible labor of love that is Dwarf Fortress.

I had the opportunity to interview Tarn Adams (audio and transcript available), who programs the game, for the game preservation project I worked on in school (all interviews are here at the Center for American History). Tarn is a standout guy, who is awfully generous with his time considering the colossal task ahead of him and his brother. He gave a great interview that illuminated important parts of their game-making, in keeping with the idiosyncratic and singular quality of Dwarf Fortress.

Check out the NYT interview — Tarn has thoughtful, provocative comments on playing games these days.

And, if you haven’t tried Dwarf Fortress, give it a go sometime. I played it for a year on and off – one day I’d like to make a return to it. It’s not as hard as all that, really – although you should have the wiki open as you play.

Book Review: Racing the Beam [re-post]

A re-post from the Preserving Games blog, February 12, 2010.

Montfort, N., & Bogost, I. (2009). Racing the Beam: The Atari Video Computer System. Platform Studies. Cambridge, Massachusetts: MIT Press.

Racing the Beam: The Atari Video Computer System

Just want to give a brief rundown on a really great read I’ve come across. MIT Press has started a “Platform Studies” series of books, where the idea is to examine a platform and its technologies to understand how they inform creative work done on the platform. Platforms can range from a gaming console, to a programming language, to an operating system, or even the Web itself, if that is the platform upon which creative work is being made. The platform in this case is the Atari Video Computer System, the first Atari home system, later referred to as the Atari 2600 in the wake of the newer Atari 5200.

The authors examine the Atari VCS as a computing system, and take care to elaborate on the unique (really exceptionally odd) constraints found there. Six games are investigated in chronological order, giving the reader a sense of the programming community’s advancing skill and knowledge of the system: Combat (1977), Adventure (1980), Yars’ Revenge (1981), Pac-Man (1982), Pitfall! (1982), and Star Wars: The Empire Strikes Back (1982).

The most prominent technical details are explained in the first few chapters, and they illuminate each game’s construction as an exceptional act of engineering and ingenuity. Just to give an idea of the unique affordances of the Atari VCS, here are a few of the most characteristic details:

  • The custom sound and graphics chip, the Television Interface Adapter (TIA), is designed specifically to work with a TV’s CRT electron beam. The beam sprays electrons onto the inside of the TV screen, left to right, one horizontal scan line at a time, taking a brief break at the end of each line (a “horizontal blank”) and a longer break after the bottom line, before resetting to the top and starting over again (a “vertical blank”). A programmer only has those tiny breaks to send instructions to the TIA, and really only the vertical blank provides enough time to run any game logic.
  • It was imperative that game logic run during these breaks because the Atari VCS had no room for a video buffer. There was no way to store an image of the next frame of the game; all graphics instructions had to be written in real time (sound instructions had to be dropped in during one of the breaks). A designer or programmer could choose to restrict the visual field of the game in exchange for more time to run game-logic instructions. Pitfall! is an example of this.
  • This means there are no pixels on the Atari VCS. Pixels require horizontal and vertical planes, but for the Atari VCS there are only horizontal scan lines; the computational system has no logical vertical division at all. As the beam travels across the screen, a programmer can send a signal to one of the TIA’s registers to change the color. Thus, the “pixels” are really a measure of time (the clock counts of the processor) and not of space.
  • Sprites, such as they existed on the Atari VCS, were hard-wired into the system. Programmers had five: two player sprites, two missiles, and one ball. Reworking that setup (clearly designed for Pong and the like) into something like Adventure, Pitfall!, or even the Pac-Man port is an amazing feat.
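To put rough numbers on those breaks, here’s a quick back-of-the-envelope sketch in Python. The figures (228 color clocks per scan line, 160 of them visible, a CPU running at one third the color clock, 37 lines of vertical blank) are the commonly cited NTSC numbers for the system, not values taken from the book:

```python
# Back-of-the-envelope timing for the Atari VCS (NTSC), illustrating why
# "pixels" are really clock counts and why game logic had to fit in the blanks.
# Figures are the commonly cited NTSC numbers, assumed here for illustration.

COLOR_CLOCKS_PER_LINE = 228   # total color clocks per scan line
VISIBLE_CLOCKS = 160          # clocks spent drawing the visible picture
CPU_DIVIDER = 3               # the 6507 CPU runs at 1/3 the color clock
VBLANK_LINES = 37             # scan lines of vertical blank per frame

cpu_cycles_per_line = COLOR_CLOCKS_PER_LINE // CPU_DIVIDER               # 76
hblank_cycles = (COLOR_CLOCKS_PER_LINE - VISIBLE_CLOCKS) // CPU_DIVIDER  # 22
vblank_cycles = cpu_cycles_per_line * VBLANK_LINES                       # 2812

print(f"CPU cycles per scan line:        {cpu_cycles_per_line}")
print(f"CPU cycles per horizontal blank: {hblank_cycles}")
print(f"CPU cycles per vertical blank:   {vblank_cycles}")
```

Under those assumptions, the programmer gets about 76 CPU cycles per scan line, roughly 22 of them in the horizontal blank, and under 3,000 cycles of vertical blank per frame — the entire budget for a frame’s worth of game logic.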

The book doesn’t refrain from the technical. I could have used even more elaboration than what is presented, but after a certain point the book would turn into an academic or technical tome (not that there’s anything wrong with that), so I appreciate the fine line walked here. The authors succeed in illuminating the technical constraints enough for the general reader to understand the quality of the engineering solutions being described. Moreover, the authors leave room to discuss the cultural significance of the platform, and to reflect on how the mechanics and aesthetics of these Atari titles have informed genres and gameplay ever since.

Games that Made Me: Microsurgeon

Microsurgeon Banner

I’ve rediscovered an Intellivision game I played as a kid: Microsurgeon (1982). This was one of the great cooperative console games of my youth, along with General Chaos (Sega Genesis, 1994) and Contra (NES, 1987).

The Intellivision must have been my friend’s father’s — we had both grown up with the NES as the big prize. The console’s controllers each had an analogue directional disc, which struck us as impossibly weird and archaic (but still interesting after too many failed rounds of Sonic the Hedgehog).

The real-world weightiness of this game made a mark on me. You control a micro-ship inside a human body, where you battle cancer. It was very hard. You could target certain parts of the patient’s body for healing: eyes, brain, lungs, etc. The tumor spread relentlessly, and you would find yourself urgently manipulating your directional disc in an effort to hold back the advancing grey blocks of cancer cells.

Microsurgeon Medical Chart

The challenge was always compelling: you wanted to save this patient. My most vivid memory, though, is of the tumor overwhelming whatever organ I was engaged in and the patient dying. Despite the morbid conclusion, the idea of a triumphant heal kept us returning.

Microsurgeon took the mathematical progression of difficulty found in many early arcade games (Space Invaders, Centipede, etc.) and applied it to the body’s battle with disease. I would say it was tragic but my unfamiliarity with Aristotelian tragedy would advise against it. I will just say that it was really sad and a little bit scary to lose. Microsurgeon is still how I visualize cancer doing away with me.

My search phrase (intellivision health game) also turned up an excerpt from the book Lucky Wander Boy by D.B. Weiss. It’s a good read; I look forward to reading more.

The second thing I remember so well about this game is the visuals, which were gorgeous and appealingly abstract. The banner graphic above and the medical chart display are taken from user Servo’s contributions to the stock of images at MobyGames. The banner graphic reminds me of Basquiat’s well-known painting Untitled (Skull) (1981):

Basquiat, Untitled (Skull)

There’s some resemblance, isn’t there? Sure, the Intellivision’s representation of the skull is medical and diagrammatic, while Basquiat’s is expressive and descriptive. But both skulls are essentially tackled in pieces.

Attending IDCC ’10

6th International Digital Curation Conference banner

I’m happy to break the months-long silence here just to say I’ll be heading to the 6th International Digital Curation Conference in Chicago, Monday to Wednesday of this week. I’ll be manning the poster for Dr. Winget’s Preserving Games research project, explaining to all willing passersby our findings regarding record creation in video game development and some key implications for the curation of these records.

I’ll be able to catch a few talks on Tuesday and Wednesday before heading out. Just a few I’m interested in hearing:

  • “Idiosyncrasy at Scale: Data Curation in the Humanities.” John Unsworth, Dean & Professor, Graduate School of Library and Information Science & Director Illinois Informatics Institute, University of Illinois at Urbana-Champaign.
  • “Linking to Scientific Data: Identity Problems of Unruly and Poorly Bounded Digital Objects” Laura Wynholds, University of California, Los Angeles.
  • “DataStaR: Using the Semantic Web approach for Data Curation” Huda Khan, Brian Caruso, Brian Lowe, Jon Corson-Rikert, Diane Dietrich & Gail Steinhart, Cornell University.
  • “Dependency Analysis of Legacy Digital Materials to Support Emulation Based Preservation” Aaron Hsu & Geoffrey Brown, Indiana University.
  • “What constitutes successful format conversion? Towards a formalisation of ‘intellectual content’” C. M. Sperberg-McQueen, Black Mesa Technologies LLC.
  • “Assessing the preservation condition of large and heterogeneous electronic records collections with visualizations” Maria Esteva, Weijia Xu, Suyog Dutt Jain & Jennifer Lee, University of Texas at Austin.

DCC seems to be pretty serious about “amplifying” the conference to non-attendees and attendees alike. There’s a Twitter account (@idcc10) and a Netvibes dashboard, which will host all manner of media and feeds for the conference.

Here’s the ‘Minute Madness’ slide, which accompanies (appropriately enough) a one-minute rundown of the project:

Winget-Sampon IDCC '10 Minute Madness Slide

Here’s the PowerPoint slide.

Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom [re-post]

A re-post from the Preserving Games blog, January 24, 2010.

Hutchison, A. (2008). Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom. Game Studies, 8(1). Retrieved from http://gamestudies.org/0801/articles/hutch


This 2008 Game Studies article examines the effect technology (or the “techno-historic” context of a game work) has on game aesthetics. The author defines “game aesthetics” as “the combination of the audio-visual rendering aspects and gameplay and narrative/fictional aspects of a game experience.” It is important to note that audio-visual aspects are included in this definition along with the narrative/fictional components. This is because the author later argues that advancing audio-visual technology will play an important role in advancing the narrative aspect of games.

The article begins with a comparison of two iconic computer games of the mid-1990s: Myst and Doom. Specifically, the design response in each game to the technological limitations of PCs at the time is examined. Very briefly, we see that Myst takes the “slow and high road” to rendering and first-person immersion, while Doom adopts the “fast and low road.” As the author explains, each response was prompted by the limitations of the rendering that a personal computer could perform at the time. For its part, Myst simply skips realtime 3D rendering and uses only pre-rendered, impeccably crafted (at the time) images to move the player through the world. Minor exceptions exist when QuickTime video is cleverly overlaid onto these images to animate a butterfly, bug, moving wheel, etc. This overall effect very much informs the game’s aesthetic, as anyone who played the original can recall. Myst is a quiet, still, contemplative and mysterious world. Continuous and looping sound is crucial to the identity of the world and the player’s immersion. Nearly every visual element is important and serves a purpose. The designers could not afford to draw scenes extraneous to the gameplay. The player’s observation of the scenes available is key, and the player can generally be assured that all elements in the Myst world warrant some kind of attention. Hardware limitations of the time, such as the slow read time of most CD-ROM drives, serve to reinforce this slow, methodical gameplay and visual aesthetic.

Doom, by contrast, uses realtime rendering at the expense of visual nuance and detail. Doom achieves immersion through visceral and immediate responsiveness, and its aesthetic is one of quick action and relentless urgency. The low resolution of the art and characters is compensated for by the quick passing of those textures and objects, and by the near-constant survival crisis at hand. Redundancy of visual elements and spaces is not an issue: the player can face down hordes of identical opponents in similar spaces (sometimes the exact same space) and not mind at all, because the dynamism of the gameplay is engaging enough to allow such repetition. Pac-Man had the same strength.

From this comparison the author goes on to speculate how techno-historic limitations inform aesthetics in general, and whether the increasing capacity of personal computers to render audio-visual components in extreme, realtime detail will inform the narrative/fictional aspects of games as well. One only needs a passing familiarity with games to know that this aspect of games has been widely disparaged in the media and in some academic writing. Here are some of the quotes the author uses to characterize the supposedly degenerative trend of popular media and the game industry’s complicity in the coming intellectual apocalypse:

Perhaps lending strength to this phenomenon is a current popular culture stylistic trend which emphasises “spectacle” over narrative and gameplay. Peter Lunenfeld has identified this broad movement in popular culture generally:

Our culture has evacuated narrative from large swaths of mass media. Pornography, video games, and the dominant effects-driven, high concept Hollywood spectaculars are all essentially narrative-free: a succession of money shots, twitch reflex action, and visceral thrills strung together in time without ever being unified by classic story structure (Lunenfeld, 2000, p.141).

And more specifically dealing with games:

“It is a paradox that, despite the lavish and quite expensive graphics of these productions, the player’s creative options are still as primitive as they were in 1976” (Aarseth, 1997, p.103).

Most interesting is the observation that richer media capabilities do not necessarily translate to glossier, more superficial renderings. Richer media can mean a more meaningful experience for the player. Nuance and subtlety can be introduced; more information-rich media can mean more powerfully conveyed characters and a more fully realized narrative.

On top of this, one can expand the definition of “story” and “narrative,” as id developer Tim Willits argues in this Gamasutra report:

“If you wrote about your feelings, about your excitement, the excitement you felt when new areas were uncovered [in Doom] — if you wrote it well, it would be a great story,” Willits says. “People call it a ‘bad story,’ because the paper story is only one part of the game narrative — and people focus on the paper story too much when they talk about the story of a game.”

Information, he maintains, is learned through experiences, and the experience of playing a game is what forms a narrative, by its nature. Delivering a story through the game experience is the “cornerstone” of id Software’s game design, and the key when developing new technology.

Whatever your opinion on what constitutes story and narrative in media, the author of this piece has made a compelling argument that advancing technical capabilities could directly inform the narrative/fictional aspect of a game’s aesthetics, and certainly has done so in the past.

There’s A Symposium Going On

The place to be is UTA 1.208 (the large classroom) in the UTA building of the UT Austin campus, at 1616 Guadalupe, on Friday, October 8.

The presentations are open to all, so come have a seat if you can! There will be questions too. The agenda:

Disciplines Converge: Representing Videogames for Preservation and Cultural Access

1:00 PM Bonnie Nardi (University of California at Irvine)
The Many Lives of a Mod
The concept of “mod” (software modification) is deceptively simple. Starting with a thought in a player’s mind on how to improve a video game, the activity of modding ramifies to problems of power, law, culture, inequality, and technological evolution. Even an expansive concept such as participatory culture does not capture the lives of a mod which enter wider arenas of activity at corporate and national levels. As we write game history (the project of ethnography) and preserve digital artifacts (the project of preservationists) is there a way to move the two projects more closely together to provide future generations more theorized representations of video games?

2:00 PM Henry Lowood (Stanford University)
Video Capture: Machinima, Documentation, and the History of Virtual Worlds
The three primary methods for making machinima during its brief history—code, capture, and compositing—match up neatly with three ideas about how to document the history of virtual worlds. These linkages between machinima and documentation are provocative for thinking about what we can do to save and preserve the history of virtual worlds in their early days. As it turns out, they also suggest how we might begin to think about machinima as a documentary medium.

3:00 PM Jerome McDonough (University of Illinois at Urbana-Champaign)
Final Report of the Preserving Virtual Worlds Project
This presentation will provide a summary of the findings from the Preserving Virtual Worlds project, a collaborative investigation into the preservation of video games and interactive literature by the Rochester Institute of Technology, Stanford University Libraries, the University of Illinois and the University of Maryland.  This research was conducted as one of the Preserving Creative America projects sponsored by the Library of Congress’ NDIIPP program.  The summary will touch on issues of intellectual description and access of games, collection development, legal issues surrounding game preservation, the results of our evaluations of preservation strategies, and a discussion of possible further research agendas within this arena.

4:00 PM Megan Winget (University of Texas at Austin)
We Need A New Model: The Game Development Process and Traditional Archives
This presentation will relate findings from our IMLS project focused on the video game creation process. Data includes eleven qualitative interviews conducted with individuals involved in game development, spanning a number of different roles and institution types. The most pressing findings relate to the nature of documentation in the video game industry: project interviews indicate that game development produces significant documentation as traditionally conceived by collecting institutions. This documentation ranges from game design documents to email correspondence and business reports. However, traditional documentation does not adequately, or even, at times, truthfully represent the project or the game creation process.

In order to accurately represent the development process, collecting institutions also need to seek out and procure versions of games and game assets. The term version here refers to formally produced editions of the game (the Xbox 360, Wii, Playstation 2, and Nintendo DS versions of the same game, for example), as well as versions that are natural byproducts of the design process, such as alpha and beta builds, vertical slices, or multiple iterations of game assets. In addition to addressing the specifics of the game design process, this presentation will make the case for developing new archive models that accurately represent the real work of game creation.

Hardware gimmick or cultural innovation? [re-post]

A re-post from the Preserving Games blog, October 22, 2009.

Y. Aoyama and H. Izushi, “Hardware gimmick or cultural innovation? Technological, cultural, and social foundations of the Japanese video game industry,” Research Policy 32, no. 3 (2003): 423–444.

This 2003 article (written in 2001) looks at the success of the Japanese video game industry and attempts to illuminate the unique factors behind its success. Japan’s video game industry is especially remarkable given the dominance of “English language-based exportable cultural products” and the origin of the video game industry itself, which began in the US with Steve Russell’s programming of Spacewar! for the PDP-1 and Nolan Bushnell’s subsequent creation of Atari to market and sell such arcade games.

Mega Man, hero of the early NES platformers. His design has characteristics of the manga style.

The authors give a history of the industry and observe Nintendo’s very early interest and involvement in electronic toy games. This began as early as the 1960s with the emerging popularity of shooting games with optical sensors. Nintendo was able to recruit technical expertise from consumer electronics, which produced early successes like Game & Watch and Color TV-Game (totally cool old ad at gamepressure). But Nintendo’s historic rise in the console market with both the Famicom and NES was due in no small part to its attention to quality software; the company made sure to foster in-studio works (Donkey Kong, Super Mario Brothers) and hold alliances with outside game developers.

After the mid-90s, Nintendo faltered by retaining cartridges for its games rather than adopting the CD-ROM; this, among other factors, allowed Sony to rise in the market. The authors continue the brief history up to the approximate time of the article, but one main point can be drawn from the narrative: hardware and software are intricately linked, and success frequently hinges on a deep synchronicity between the two engineering pursuits. The authors go on to elaborate this point, emphasizing Nintendo’s early collaboration with domestic consumer electronics firms.

The article describes three types of software publishers:

  • in-house publishers of platform developers (e.g. Nintendo)
  • comprehensive software publishers with in-house capability for most development (e.g. Square)
  • publishers that act as producer/coordinator and outsource most functions (e.g. Enix)
