Thoughts On Electronic Records Training

The Age of Electronicus record cover, in the window of a Hollywood Boulevard vinyl store. Photo by flickr user Rich_Lem.

In October I ventured to three locations in Mississippi with a coworker to deliver records management training to municipal clerks. My portion of the training addressed electronic records in the state. Here I discuss strategies I used and share some thoughts on teaching what is frequently dry material for an (often reluctant) audience.

Background

A little context on government records in Mississippi: for local government, all electronic records are managed and maintained by the originating agency. If electronic records are scheduled as permanent, they’re kept with that agency forever — they don’t go to the state archives.

By contrast, state agencies have two primary supporting resources. The first is our agency: we offer a tape backup service, and we can take their permanent electronic records into the state archives. The second is the counsel, services, and guidelines of the state IT department. Local government can of course seek our counsel with any records concerns, but we don’t offer them any services.

Because few municipalities (if any) have the resources to employ a records manager, it’s not atypical for electronic records management to be distributed among all municipal employees in an ad hoc, uncoordinated manner. Professional document or records management software is out of reach for most: such packages are too expensive, and the volume of electronic records produced is typically too low to justify the purchase. The same is true of email archiving services. Open source would appear to be ideal, but those solutions really do require dedicated IT administration, which few municipalities have.

My portion of the workshop lasted an hour, and the goal was to give attendees the knowledge to manage their electronic records better than they do now. Beyond the constraint that all records management has to occur within the agency, there are a few other hurdles to teaching effectively in this hour:

  • Little foreknowledge of each municipality’s specific tech setup or electronic records management strategy.
  • Little foreknowledge of each attendee’s computer literacy.
  • No foreknowledge of attendees’ specific jobs or the records they regularly handle.

Unfortunately, these constraints were outside of my control. However, as I hope to show, this doesn’t mean the hour can’t be successful.

Continue reading “Thoughts On Electronic Records Training”

DPOE National Calendar

I want to give a brief shout out to the DPOE National Calendar, brand spanking new as of June 2011.

The idea is to have a single, general purpose calendar that covers digital preservation workshops, talks, etc., across the country. If you’re giving a talk or workshop, no matter how small the audience, consider submitting it here. And of course you can check the calendar to attend events, whether online or local to your area.

A longer post on DPOE is still forthcoming.

Next Week: Digital Preservation in D.C.

Next week I’ll be attending a train-the-trainer workshop hosted by the Library of Congress in D.C. I’m thrilled to be included and I’m really looking forward to meeting the other participants.

The Digital Preservation Outreach and Education (DPOE) program is a recent initiative by LOC to “foster national outreach and education to encourage individuals and organizations to actively preserve their digital content.”

Since attendees are coming from a variety of institutions, it’s going to be really interesting to discuss the different contexts in which digital preservation can be introduced. Audiences and clients can make a big difference in how you articulate a subject – and identifying the core issues within those variations is a (perhaps lofty) goal of mine for this workshop.

That, along with feedback on training and workshop execution (something my position requires a good deal of), could not be more welcome!

I hope to have a post or two on the workshop during or shortly after.

Old Site Exhumed, Mostly Gone

Lulu glares out from a sea of compression artifacts.

During my last two years of high school (c. 1998 to mid-2000), a friend and I began an “art website.” Our intention was to have a place to post our writings and visual work, and to solicit similar submissions from others around the Internet. Our mascot was the enraged chimp pictured above, Lulu. The site received some modest interest from various users around the Web, and from a few of our friends at school. All in all, not an unsuccessful project.

However, as high school came to a close, we grew tired of maintaining the site. We tossed around the idea of keeping it up while we went to our respective colleges, but eventually decided to shut it down. We would be too busy with better endeavors, and no one wanted to log into our hosting service to keep the old high school art site afloat. We posted the EOL announcement on the site and applied a bullet wound to old Lulu with MS Paint. It was most certainly over.

Continue reading “Old Site Exhumed, Mostly Gone”

Dwarf Fortress Interviews

Dwarf Fortress screen

Before the week is out I wanted to point to the NYT interview with the Adams brothers, who design and build the incredible labor of love that is Dwarf Fortress.

I had the opportunity to interview Tarn Adams (audio and transcript available), who programs the game, for the game preservation project I worked on in school (all interviews are here at the Center for American History). Tarn is a standout guy who is awfully generous with his time, considering the colossal task ahead of him and his brother. He gave a great interview that illuminated important parts of their game-making, in keeping with the idiosyncratic and singular quality of Dwarf Fortress.

Check out the NYT interview; Tarn has thoughtful, provocative comments on playing games these days.

And, if you haven’t tried Dwarf Fortress, give it a go sometime. I played it for a year on and off – one day I’d like to make a return to it. It’s not as hard as all that, really – although you should have the wiki open as you play.

From My Archives: Derrida’s Archive Fever

Green Fire

Below is a review of Derrida’s Archive Fever. The idea was to relate the lecture to practicing archivists and records managers. This was a really engaging read, and I think Derrida successfully articulates the archive impulse, with all its attendant richness and strangeness.

Archive Fever: A Freudian Impression. Jacques Derrida. Chicago: University of Chicago Press, 1998. Translated by Eric Prenowitz. 113 pages. ISBN 0-226-14367-8 paper. $14.98.

French philosopher Jacques Derrida (1930-2004) is most commonly known as the founder of deconstruction, a mode of investigation that identifies contradictions in a subject and demonstrates how essential those contradictions are to the subject’s meaning. For a thinker so adept at analyzing the valences of meaning in language, Derrida was unsurprisingly hesitant about the broad appeal and use of the term deconstruction, and no doubt would find fault with an overly mechanistic summation such as the one just given. In Archive Fever, Derrida applies his intensely critical thought to the notion of the archive as it is manifested in Sigmund Freud’s oeuvre.

Archive Fever: A Freudian Impression is a translation from the French of a published lecture Derrida delivered in 1994 to an international colloquium entitled “Memory: The Question of the Archives.” It is divided into six parts: an opening note, an exergue, a preamble, a foreword, theses, and a postscript. Two caveats for the interested reader. First, although blurbs on the paperback reference Derrida’s discussion of electronic media and, more broadly, the role of inscription technology in the psyche and in the archives, this is not the focus of his discussion; it is only part of a larger examination of the archive notion in Freud’s works. Second, this is a later work of Derrida, and as such it references ideas and investigations discussed in earlier works, particularly the essay Freud and the Scene of Writing (1972). This means some of Derrida’s passages can be disorienting if the reader is not familiar with the works of Derrida and Freud. Thankfully Derrida takes pains to convey his meaning through multiple expressions, so the reader has many opportunities to understand the ideas at play.

Continue reading “From My Archives: Derrida’s Archive Fever”

Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom [re-post]

A re-post from the Preserving Games blog, January 24, 2010.

Hutchison, A. (2008). Making the Water Move: Techno-Historic Limits in the Game Aesthetics of Myst and Doom. Game Studies, 8(1). Retrieved from http://gamestudies.org/0801/articles/hutch

This 2008 Game Studies article examines the effect technology (or the “techno-historic” context of a game work) has on game aesthetics. The author defines “game aesthetics” as “the combination of the audio-visual rendering aspects and gameplay and narrative/fictional aspects of a game experience.” It is important to note that audio-visual aspects are included in this definition along with the narrative/fictional components, because the author later argues that advancing audio-visual technology will play an important role in advancing the narrative aspect of games.

The article begins with a comparison of two iconic computer games of the mid-1990s: Myst and Doom. Specifically, it examines each game’s design response to the technological limitations of PCs at the time. Very briefly, we see that Myst takes the “slow and high road” to rendering and first-person immersion, while Doom adopts the “fast and low road.” As the author explains, each response was prompted by how little rendering a personal computer of the era could perform. For its part, Myst simply skips real-time 3D rendering altogether and uses only pre-rendered, impeccably crafted (for the time) images to move the player through the world. Minor exceptions exist where QuickTime video is cleverly overlaid onto these images to animate a butterfly, a bug, a moving wheel, etc.

This overall effect very much informs the game’s aesthetic, as anyone who played the original can recall. Myst is a quiet, still, contemplative and mysterious world. Continuous and looping sound is crucial to the identity of the world and the player’s immersion. Nearly every visual element is important and serves a purpose. The designers could not afford to draw scenes extraneous to the gameplay. The player’s observation of the scenes available is key, and the player can generally be assured that all elements in the Myst world warrant some kind of attention. Hardware limitations of the time, such as the slow read time of most CD-ROM drives, serve to reinforce this slow, methodical gameplay and visual aesthetic.

Doom, by contrast, uses realtime rendering at the expense of visual nuance and detail. Doom achieves immersion through visceral and immediate responsiveness, and its aesthetic is one of quick action and relentless urgency. The low resolution of the art and characters is compensated for by the quick passing of those textures and objects, and by the near-constant survival crisis at hand. Redundancy of visual elements and spaces is not an issue: the player can face down hordes of identical opponents in similar spaces (sometimes the exact same space) and not mind at all, because the dynamism of the gameplay is engaging enough to allow such repetition. Pac-Man had the same strength.
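As a crude illustration of the contrast, here is a toy Python sketch (entirely hypothetical; every name is invented, and this is not either engine’s actual code) showing what each approach does on every step of play:

    # Myst-style: navigation just swaps pre-rendered images between nodes.
    PRERENDERED = {"dock": "dock.png", "library": "library.png"}
    LINKS = {"dock": "library", "library": "dock"}

    def myst_step(current_node):
        # No 3D math at all: display a finished image read off the CD-ROM.
        print("displaying", PRERENDERED[current_node])
        return LINKS[current_node]  # a click moves the player to the next node

    # Doom-style: the world is re-rendered from scratch on every frame.
    def doom_frame(frame, player_x):
        player_x += 0.1  # the player is always moving
        print("frame %d: re-rendering the world from x=%.1f" % (frame, player_x))
        return player_x

    node = "dock"
    for _ in range(2):
        node = myst_step(node)  # discrete, contemplative steps

    x = 0.0
    for frame in range(3):
        x = doom_frame(frame, x)  # a relentless per-frame loop

The Myst loop spends its rendering budget once, offline; the Doom loop spends it on every tick. That is exactly the trade-off the author describes.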

From this comparison the author goes on to speculate how techno-historic limitations inform aesthetics in general, and whether the increasing capacity of personal computers to render audio-visual components in extreme, realtime detail will inform the narrative/fictional aspects of games as well. One only needs a passing familiarity with games to know that this aspect of the medium has been widely disparaged in the media and in some academic writing. Here are some quotes the author uses to characterize the degenerative trend of popular media and the game industry’s complicity in the coming intellectual apocalypse:

Perhaps lending strength to this phenomenon is a current popular culture stylistic trend which emphasises “spectacle” over narrative and gameplay. Peter Lunenfeld has identified this broad movement in popular culture generally:

Our culture has evacuated narrative from large swaths of mass media. Pornography, video games, and the dominant effects-driven, high concept Hollywood spectaculars are all essentially narrative-free: a succession of money shots, twitch reflex action, and visceral thrills strung together in time without ever being unified by classic story structure (Lunenfeld, 2000, p.141).

And more specifically dealing with games:

“It is a paradox that, despite the lavish and quite expensive graphics of these productions, the player’s creative options are still as primitive as they were in 1976” (Aarseth, 1997, p.103).

Most interesting is the observation that richer media capabilities do not necessarily translate to glossier, more superficial renderings. Richer media can mean a more meaningful experience for the player: nuance and subtlety can be introduced, and more information-rich media can mean more powerfully conveyed characters and a more fully realized narrative.

On top of this, one can expand the definition of “story” and “narrative,” as id developer Tim Willits argues in this Gamasutra report:

“If you wrote about your feelings, about your excitement, the excitement you felt when new areas were uncovered [in Doom] — if you wrote it well, it would be a great story,” Willits says. “People call it a ‘bad story,’ because the paper story is only one part of the game narrative — and people focus on the paper story too much when they talk about the story of a game.”

Information, he maintains, is learned through experiences, and the experience of playing a game is what forms a narrative, by its nature. Delivering a story through the game experience is the “cornerstone” of id Software’s game design, and the key when developing new technology.

Whatever your opinion on what constitutes story and narrative in media, the author of this piece has made a compelling argument that advancing technical capabilities could directly inform the narrative/fictional aspect of a game’s aesthetics, and certainly have done so in the past.

There’s A Symposium Going On

The place to be is UTA 1.208 (the large classroom) in the UTA building of the UT Austin campus, at 1616 Guadalupe, on Friday, October 8.

The presentations are open to all, so come have a seat if you can! There will be questions too. The agenda:

Disciplines Converge: Representing Videogames for Preservation and Cultural Access

1:00 PM Bonnie Nardi (University of California at Irvine)
The Many Lives of a Mod
The concept of “mod” (software modification) is deceptively simple. Starting with a thought in a player’s mind on how to improve a video game, the activity of modding ramifies to problems of power, law, culture, inequality, and technological evolution. Even an expansive concept such as participatory culture does not capture the lives of a mod, which enter wider arenas of activity at corporate and national levels. As we write game history (the project of ethnography) and preserve digital artifacts (the project of preservationists), is there a way to move the two projects more closely together to provide future generations more theorized representations of video games?

2:00 PM Henry Lowood (Stanford University)
Video Capture: Machinima, Documentation, and the History of Virtual Worlds
The three primary methods for making machinima during its brief history—code, capture, and compositing—match up neatly with three ideas about how to document the history of virtual worlds. These linkages between machinima and documentation are provocative for thinking about what we can do to save and preserve the history of virtual worlds in their early days. As it turns out, they also suggest how we might begin to think about machinima as a documentary medium.

3:00 PM Jerome McDonough (University of Illinois at Urbana-Champaign)
Final Report of the Preserving Virtual Worlds Project
This presentation will provide a summary of the findings from the Preserving Virtual Worlds project, a collaborative investigation into the preservation of video games and interactive literature by the Rochester Institute of Technology, Stanford University Libraries, the University of Illinois and the University of Maryland. This research was conducted as one of the Preserving Creative America projects sponsored by the Library of Congress’ NDIIPP program. The summary will touch on issues of intellectual description and access of games, collection development, legal issues surrounding game preservation, the results of our evaluations of preservation strategies, and a discussion of possible further research agendas within this arena.

4:00 PM Megan Winget (University of Texas at Austin)
We Need A New Model: The Game Development Process and Traditional Archives
This presentation will relate findings from our IMLS project focused on the video game creation process. Data includes eleven qualitative interviews conducted with individuals involved in game development, spanning a number of different roles and institution types. The most pressing findings relate to the nature of documentation in the video game industry: project interviews indicate that game development produces significant documentation as traditionally conceived by collecting institutions. This documentation ranges from game design documents to email correspondence and business reports. However, traditional documentation does not adequately, or even, at times, truthfully represent the project or the game creation process.

In order to accurately represent the development process, collecting institutions also need to seek out and procure versions of games and game assets. The term version here refers to formally produced editions of the game (the Xbox 360, Wii, Playstation 2, and Nintendo DS versions of the same game, for example), as well as versions that are natural byproducts of the design process, such as alpha and beta builds, vertical slices, or multiple iterations of game assets. In addition to addressing the specifics of the game design process, this presentation will make the case for developing new archive models that accurately represent the real work of game creation.

Puzzle Games for Software?

I just read Robert Patrick’s essay on eMuseums, hosted at Paul McJones’ excellent Dusty Decks blog. It’s a great read and addresses some of the problems of presenting computer history in an effective and extensible fashion.

I was specifically interested in Mr. Patrick’s thoughts on presenting software history. Hardware is a more intuitive museum subject in significant ways (its object-ness among them), but of course museums quite successfully convey subjects that have no direct corresponding object, nothing so tangible as touching the actual clothes of a Civil Rights victim. Still, software remains especially difficult to present in an interesting way.

Mr. Patrick states that software’s workings are opaque to users, and suggests a multithreaded approach to software history: documenting the different software types (applications, subroutines, operating systems, etc.) as separate threads as they emerge, ascend, or recede over time.

Along with this, I am specifically interested in conveying to the museum-goer the architecture, engineering, and writing of software. There is no better way to communicate the human labor, ingenuity, and yes, the toil, that goes into software making. Quoting industry numbers does not tell the museum-goer that software is frequently an epic engineering project with considerable drama, not just externally (between departments, coders, and investors) but internally as well (in engineering problem solving). How to convey this drama?

Software is both an engineering and a creative endeavor, and it exercises a rich figurative language that suggests physical play and work: variables are passed, objects are created, something is trimmed, cleaned, or scrubbed, a request is made, an exception is thrown, a thread stops and starts, etc.
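To make that figurative language concrete, here is a tiny Python sketch (the names and data are entirely hypothetical; it exists only to exhibit those figures of speech in working code):

    import threading

    def scrub(value):
        # Something is "cleaned" or "scrubbed": stray whitespace is trimmed away.
        return value.strip()

    def make_request(record):
        # A "request is made"; an "exception is thrown" if it can't be met.
        if not record:
            raise ValueError("empty record")
        return {"payload": record}  # an "object is created"

    def worker(records):
        # The variable `records` is "passed" into this function.
        for r in records:
            print(make_request(scrub(r)))

    # A "thread starts and stops".
    t = threading.Thread(target=worker, args=([" minutes.pdf ", "budget.xls"],))
    t.start()
    t.join()

Even in a toy like this, the vocabulary is bodily and mechanical, which is exactly the quality a museum exhibit could trade on.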

I think this language indicates a way to illustrate the software engineering problem space, abstracted away from specific commands and syntaxes. For example, museum visitors could manipulate some system (either a physical installation or a video game-type piece) with constraints emulating those of the coder. Come to think of it, puzzle games do a fine job of such demonstration already (perhaps more Portal than Braid). They could likely be much better demonstrations, of course, if they were directed toward this specific purpose.

I would love to see the day when some of the problems, solutions, tricks, etc. of software engineering are conveyed as well as those of medieval cathedrals or the Giza pyramids.