First set of NSIDC glacier photos up

Field, William Osgood. 1941. Muir Glacier: From the Glacier Photograph Collection. Boulder, Colorado USA: National Snow and Ice Data Center. Digital Media.

The first set of glacier photos from the Roger G. Barry Archives is now up at CU Boulder. There are 950 photos here, a fraction of the 30,000 in the collection; more will be added over the year. This is a great resource for those interested in glaciology and climate change – and many are stunning images regardless. Again, thank you to CLIR and to everyone at CU Boulder who have been so critical to this work.

Book Out: The No-nonsense Guide to Born-Digital Content

ISBN: 9781783301959

I have a new book out with my colleague Heather Ryan, The No-nonsense Guide to Born-Digital Content.

I started drafting chapters for this book in late 2016 when Heather, then the head of the Archives here and now director of the department, approached me about coauthoring the title. I had never written in chapter form before, nor for a more general audience. Approaching my usual stomping ground of born-digital collection material from this vantage was really intriguing, so I jumped at the chance.

To back up a little, our subject here is collecting, receiving, processing, describing and otherwise taking care of born-digital content for cultural heritage institutions. With that scope, we have oriented this book to students and instructors, as well as current practitioners who are aiming to begin or improve a born-digital strategy. We’ve included lots of real-world examples to demonstrate points, and the book as a whole is designed to cover all aspects of managing born-digital content. We discuss everything from collecting policy and forensic acquisition to grabbing social media content and designing workflows. In other words, I’m hoping this provides a thorough overview of the current field of practice.

Our title is part of Facet Publishing’s No-nonsense series, which provides an ongoing run of books on topics in information science. Facet in general is a great publisher in this space (if you haven’t checked out Adrian Brown’s Archiving Websites, I recommend it), and I’m happy to be a part of it. I thank them for their interest in the book and their immense help in getting it published!

Update: The book is now available stateside in the ALA store.

“Revealing our melting past: Rescuing historical snow and ice data”

For the last year I have served as Co-PI for a fantastic project, supported by CLIR’s Digitizing Hidden Special Collections and Archives grant program, which centers on the metadata gathering and digitization of the National Snow and Ice Data Center’s (NSIDC) expansive collection of glacier and polar exploration prints within the Roger G. Barry Archives here in Boulder. We have a stellar project archivist leading the work, and we expect to begin posting images on our own site over the course of the year. Stay tuned for that.

The linked article here, published in the last (ever, actually) issue of GeoResJ, is a good summary of the project’s scope and value from everyone on the team, including our initial PI, now at the University of Denver. We’re really excited to be contributing along with NSIDC to glaciology and earth history through this collection, and are planning further promotion as processing continues.

Revealing our melting past: Rescuing historical snow and ice data (ScienceDirect) (CU Scholar)

“Aggregating Temporal Forensic Data Across Archival Digital Media”

Last year I attended the Digital Heritage 2015 conference and presented a paper on digital forensics in the archive. The paper centers on collecting file timestamps across floppy disks into a single timeline to increase intellectual control over the material and to explore the utility of such a timeline for a researcher using the collection.

As I state in the paper, temporal forensic data likely constitutes the majority of forensic information acquired in archival settings, and in most cases this information is gathered inherently through the generation of a disk image. While we may expect further use of this data as disk images make their way to researchers as archival objects (and the community’s software, institutional policies and user expectations grow to support it), it is not too soon to explore how temporal forensic data can be used to support discovery and description, particularly in the case of collections with a significant number of digital media.
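To give a concrete sense of the aggregation step the paper describes, here is a minimal sketch, not the paper’s actual tooling: it assumes you have already produced a DFXML report for each disk image (for example with fiwalk) and placed them in a directory, then merges every recorded modification time into a single, chronologically sorted timeline. The directory layout, file names and CSV columns are illustrative assumptions.

```python
# Hypothetical sketch: merge modification times recorded in per-disk DFXML
# reports (e.g. produced by running fiwalk over each disk image) into one
# chronologically sorted timeline. Element names follow common DFXML output,
# but verify them against your own reports before relying on this.
import csv
import xml.etree.ElementTree as ET
from pathlib import Path


def _child_text(parent, local_name):
    """Return the text of the first direct child with this local tag name,
    ignoring XML namespaces."""
    for child in parent:
        if child.tag.split("}")[-1] == local_name:
            return child.text
    return None


def timestamps_from_dfxml(dfxml_path: Path):
    """Yield (disk_label, filename, mtime) for every fileobject that
    carries a modification time."""
    root = ET.parse(dfxml_path).getroot()
    for element in root.iter():
        if element.tag.split("}")[-1] != "fileobject":
            continue
        name = _child_text(element, "filename")
        mtime = _child_text(element, "mtime")
        if name and mtime:
            yield dfxml_path.stem, name, mtime


def build_timeline(dfxml_dir: Path, out_csv: Path) -> None:
    """Combine every DFXML report in a directory into a single timeline CSV."""
    rows = []
    for dfxml_path in sorted(dfxml_dir.glob("*.xml")):
        rows.extend(timestamps_from_dfxml(dfxml_path))
    rows.sort(key=lambda r: r[2])  # ISO 8601 timestamps sort chronologically as strings
    with out_csv.open("w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["disk", "file", "mtime"])
        writer.writerows(rows)


if __name__ == "__main__":
    build_timeline(Path("dfxml_reports"), Path("timeline.csv"))
```

The resulting CSV keeps the source disk label on every row, so the merged timeline can still be traced back to the individual media it came from.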

Many thanks to the organizers of Digital Heritage 2015 for the support and feedback; it was a wonderful and very wide-reaching conference.

Aggregating Temporal Forensic Data Across Archival Digital Media (IEEE Xplore) (CU Scholar)

KryoFlux Webinar Up

In February, I took part in the first Advanced Topics webinar for the BitCurator Consortium, centered on using the KryoFlux in an archival workflow. My co-participants, Farrell at Duke University and Dorothy Waugh at Emory University, both contributed wonderful insights into the how and why of using the floppy disk controller for investigation, capture and processing. Many thanks to Cal Lee and Kam Woods for their contributions, and to Sam Meister for his help in getting this all together.

If you are interested in using the KryoFlux (or do so already) I recommend checking the webinar out, if only to see how other folks are using the board and the software.

An addendum to the webinar for setting up in Linux

If you are trying to set up the KryoFlux in a Linux installation (e.g. BitCurator), take a close look at the instructions in the README.linux text file located in the top directory of the package downloaded from the KryoFlux site. It covers the dependencies needed and the process for allowing a non-root user (such as bcadmin) to access floppy devices through the KryoFlux. This setup will avoid many permissions problems down the line, since you will not be forced to use the device as root, and I have found it critical to correctly setting up the software in Linux.
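To show where that non-root setup pays off in practice, here is a hedged sketch of driving the KryoFlux dtc command-line tool from Python as an ordinary user (such as bcadmin). The output layout is hypothetical, and the -f/-i flag pattern is my assumption about a typical stream-plus-image capture; verify it against the KryoFlux manual and your own disk formats before use.

```python
# Hypothetical sketch: invoke the KryoFlux dtc command-line tool as a
# non-root user once the README.linux permissions setup is in place.
# Paths are illustrative, and the -f/-i flag pattern is an assumption
# about a stream-plus-image capture; check it against the KryoFlux manual.
import subprocess
from pathlib import Path


def capture_disk(label: str, output_dir: Path) -> None:
    """Capture one floppy as raw stream files plus a sector image,
    keeping dtc's console output alongside the capture for QC."""
    disk_dir = output_dir / label
    disk_dir.mkdir(parents=True, exist_ok=True)
    stream_prefix = disk_dir / f"{label}_stream"
    image_path = disk_dir / f"{label}.img"
    # Assumed pattern: each -f names an output and the following -i its type
    # (stream files and a sector image here); adjust to your media.
    cmd = ["dtc", f"-f{stream_prefix}", "-i0", f"-f{image_path}", "-i4"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    (disk_dir / f"{label}_dtc_log.txt").write_text(result.stdout + result.stderr)
    result.check_returncode()  # raise if dtc reported a failure


if __name__ == "__main__":
    capture_disk("disk_001", Path("captures"))
```

Keeping dtc’s console output next to each capture is a small design choice that makes later quality control and documentation of problem disks much easier.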

Repercussions of Amassed Data

I had the pleasure of meeting Mél Hogan while she was doing her postdoctoral work at CU Boulder. I think her research area is vital, though it’s difficult to summarize. But that won’t stop me, so here goes: investigating how one can “account for the ways in which the perceived immateriality and weightlessness of our data is in fact with immense humanistic, environmental, political, and ethical repercussions” (The Archive as Dumpster).

“Data flows and water woes: The Utah Data Center” is a good entry point for this line of inquiry. The article explores the above-quoted concerns (humanistic, environmental, political, and ethical) at the NSA’s Utah Data Center, near Bluffdale. It has suffered outages and other operational setbacks since construction. These initial failures are themselves illuminating, but even assuming such disruptions are minimized in the future, the following excerpt clarifies a few of the material constraints of the effort:

Once restored, the expected yearly maintenance bill, including water, is to be $20 million (Berkes, 2013). According to The Salt Lake Tribune, Bluffdale struck a deal with the NSA, which remains in effect until 2021; the city sold water at rates below the state average in exchange for the promise of economic growth that the new waterlines paid for by the NSA would purportedly bring to the area (Carlisle, 2014; McMillan, 2014). The volume of water required to propel the surveillance machine also invariably points to the center’s infrastructural precarity. Not only is this kind of water consumption unsustainable, but the NSA’s dependence on it renders its facilities vulnerable at a juncture at which the digital, ephemeral, and cloud-like qualities are literally brought back down to earth. Because the Utah Data Center plans to draw on water provided by the Jordan Valley River Conservancy District, activists hope that a state law can be passed banning this partnership (Wolverton, 2014), thus disabling the center’s activities.

As hinted at in a previous post on Lanier, I often encounter a sort of breathlessness when cloud-based reserves of data and computational prowess are described. Reflecting on the material conditions of these operations, as well as their inevitable failures and inefficiencies (e.g. the apparently beleaguered Twitter archive at the Library of Congress, though I would be more interested in learning about the constraints and stratagems of private operations), is a wise counterbalance that can help refocus discussions on the humanistic repercussions of such operations. And to be sure, I would not exclude archives from that scrutiny.

Report on American Psychological Association and CIA

NYT reports today:

The American Psychological Association secretly collaborated with the administration of President George W. Bush to bolster a legal and ethical justification for the torture of prisoners swept up in the post-Sept. 11 war on terror, according to a new report by a group of dissident health professionals and human rights activists.

NYT has helpfully provided the referenced report on their site.

The Archives at CU Boulder has been collecting information on the APA Psychological Ethics and National Security (PENS) debate since 2010. See the call for materials, as well as the report NYT has written up today, at the collection site.