Late last year a group of my colleagues and I – co-chaired by Angela Beking and Bradley Daigle – completed a curatorial guide to the NDSA’s new Levels of Preservation.
As I’m sure most practitioners know, there are many moving parts to the preservation effort – from storage and management to appraisal and processing strategy. I think the Levels of Preservation have proved a great assessment tool since 2013, and the revision last year continues that effort. With that said, the curatorial resources linked here are meant to expand the use case for the Levels a bit further.
With this guide (and accompanying decision tree), one can assess content before or after acquisition (appraisal or reappraisal), tracking both the need and the ability to deliver in areas like security, technical or intellectual access, and collection development. This can illuminate the case for reevaluating material or changing processing priority – or it can build the case for moving from one point on the Levels of Preservation chart to another. The guide is structured as a series of questions and follow-on decision points – a step-by-step approach. Broadly, I hope it provides a meaningful walkthrough of the Levels, illustrating a way to use them that leads to clear action items.
Of course, feedback on this is welcome through NDSA. I hope it will be a resource that sees multiple iterations as more is learned about how the Levels, and the material around them, are actually and preferably used.
Curatorial Guidance with Decision Guide and Decision Tree (NDSA)
I’ve been working on a paper with Keith Pendergrass, Tim Walsh and Laura Alagna that centers on the environmental impact of digital preservation, and it is now out in the Spring issue of The American Archivist.
This effort began at the BitCurator Users Forum 2017, where I heard Keith initially present on this subject. I’m very happy we’ve put together a longer work and I look forward to getting it in front of a larger audience. While other criteria for appraisal and retention of digital content have received significant consideration, environmental concerns have not yet factored into much of the discussion.
I’m very happy to announce that The No-Nonsense Guide to Born-Digital Content, which I coauthored with Heather Ryan last year, has received SAA’s Preservation Publication Award for 2019. This is a wonderful honor, and I want to thank my coauthor, along with Trevor Owens for his excellent foreword to the book. Many thanks to SAA as well for their support of the book. Thank you!
This collection comprises over 25,000 digitized images detailing glaciology, exploration and photography from the late 19th century to modern times. We believe this collection, which contains aspects of a massive dataset, a photographic history, and a human account of geographic exploration, will be of great value moving forward. The list of individuals and institutions who have cared for, stewarded and conducted extensive research on these materials over time is simply too long to share here, from NOAA to NSIDC, project archivists Allaina Wallace and Athea Merredyth, to the team here at CU Boulder, and to many, many others. Thank you all!
If you have, or expect to have, 8-inch floppies in your collections, you may be interested in a paper I have put together with Abby Adams at the Harry Ransom Center and Austin Roche, an independent collector of Datapoint hardware and media, also in Austin, Texas. This paper was presented at iPRES 2018.
Data Recovery and Investigation from 8-inch Floppy Disk Media: Three Use Cases (osf.io) (CU Scholar)
As the above diagram suggests, working with 8-inch disks requires a few more pieces than 3.5″ and 5.25″ disks do – though the KryoFlux is no less critical here. Namely, adapters need to be acquired to both power and connect an 8-inch drive (D Bit is a great resource here). Even then, translating a sector-level disk image into workable files for a researcher can be very tricky.
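To give a sense of one piece of that translation work, the sketch below shows how a single sector can be located in a raw sector-level image using CHS (cylinder/head/sector) addressing. The geometry values are illustrative only (a common 8-inch single-sided, single-density layout); real media vary, and interleave or skew on the original disk can complicate the picture further. None of this code is from the paper.

```python
# Sketch: pulling one sector out of a raw sector-level disk image.
# Geometry values are assumptions for illustration -- check your media.

SECTOR_SIZE = 128        # bytes per sector (assumed)
SECTORS_PER_TRACK = 26   # sectors per track (assumed)
HEADS = 1                # single-sided (assumed)

def sector_offset(track: int, head: int, sector: int) -> int:
    """Byte offset of a sector in a raw image (sectors numbered from 1)."""
    logical = (track * HEADS + head) * SECTORS_PER_TRACK + (sector - 1)
    return logical * SECTOR_SIZE

def read_sector(image_path: str, track: int, head: int, sector: int) -> bytes:
    """Read one sector's worth of bytes from a raw disk image file."""
    with open(image_path, "rb") as f:
        f.seek(sector_offset(track, head, sector))
        return f.read(SECTOR_SIZE)
```

Mapping sectors to actual files then requires knowing the filesystem layout the original machine used, which for early systems is often poorly documented – hence the trickiness.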
We go into more detail in the paper, and I hope this can be useful for any practitioners who have this media in their stacks. Though 8-inch floppies come from some of the earliest days of personal computing, there is extremely valuable content on them. As we show in the paper, major creative works are stored on those platters, from script drafts for blockbuster films to early experimental computer graphics work.
The first set of glacier photos from the Roger G. Barry Archives are up now at CU Boulder. There are 950 photos here, and that is a fraction of the approximately 25,000 in the collection. More will be added over the year. This is a great resource for those interested in glaciology and climate change – and many are stunning images regardless. Again, thank you to CLIR, and to everyone at CU Boulder, who have been so critical to the work.
I started drafting chapters for this book in late 2016 when Heather, then the head of the Archives here and now director of the department, approached me about coauthoring the title. I had never written in chapter form before, nor for a more general audience. Approaching my usual stomping ground of born-digital collection material from this vantage was really intriguing, so I jumped at the chance.
To back up a little, our subject here is collecting, receiving, processing, describing and otherwise taking care of born-digital content for cultural heritage institutions. With that scope, we have oriented this book to students and instructors, as well as current practitioners who are aiming to begin or improve their existing born-digital strategy. We’ve included lots of real-world examples to demonstrate points, and the whole of the book is designed to cover all aspects of managing born-digital content. We really discuss everything from collecting policy and forensic acquisition to grabbing social media content and designing workflows. In other words, I’m hoping this provides a fantastic overview of the current field of practice.
Our title is part of Facet Publishing’s No-nonsense series, which provides an ongoing run of books on topics in information science. Facet in general is a great publisher in this space (if you haven’t checked out Adrian Brown’s Archiving Websites, I recommend it), and I’m happy to be a part of it. I thank them for their interest in the book and their immense help in getting it published!
Update: The book is now available stateside in the ALA store.
For the last year I have served as Co-PI for a fantastic project, supported by CLIR’s Digitizing Hidden Special Collections and Archives grant program, which centers on the metadata gathering and digitization of the National Snow and Ice Data Center’s (NSIDC) expansive collection of glacier and polar exploration prints within the Roger G. Barry Archives here in Boulder. We have a stellar project archivist leading the work, and we expect to begin posting images on our own site over the course of the year. Stay tuned for that.
The linked article here, posted in the last (ever, actually) issue of GeoResJ, is a good summary of the project scope and value from everyone on the team, including our initial PI, now at the University of Denver. We’re really excited to be contributing along with NSIDC to glaciology and earth history through this collection, and are planning further promotion as processing continues.
Revealing our melting past: Rescuing historical snow and ice data
(ScienceDirect) (CU Scholar)
Last year I attended the Digital Heritage 2015 conference and presented a paper on digital forensics in the archive. The paper centers on collecting file timestamps across floppy disks into a single timeline to increase intellectual control over the material and to explore the utility of such a timeline for a researcher using the collection.
As I state in the paper, temporal forensic data likely constitutes the majority of forensic information acquired in archival settings, and in most cases this information is gathered inherently through the generation of a disk image. While we may expect further use of this data as disk images make their way to researchers as archival objects (and the community’s software, institutional policies and user expectations grow to support it), it is not too soon to explore how temporal forensic data can be used to support discovery and description, particularly in the case of collections with a significant number of digital media.
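The basic mechanics of building such a timeline are straightforward once disk contents have been extracted. The sketch below is one minimal way to do it – walking each disk’s extracted files, collecting modification times, and merging everything into one chronologically sorted CSV. The function names, CSV layout, and overall approach here are my illustration, not the tooling used in the paper.

```python
# Sketch: merging file timestamps from several disks' extracted
# contents into a single sorted timeline CSV. Layout is illustrative.
import csv
import os
from datetime import datetime, timezone

def collect_timestamps(root, disk_label):
    """Yield (iso_timestamp, disk_label, relative_path) for files under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            mtime = os.stat(full).st_mtime
            ts = datetime.fromtimestamp(mtime, tz=timezone.utc)
            yield ts.isoformat(), disk_label, os.path.relpath(full, root)

def write_timeline(disks, out_csv):
    """disks: iterable of (extracted_dir, disk_label) pairs."""
    rows = []
    for root, label in disks:
        rows.extend(collect_timestamps(root, label))
    rows.sort()  # ISO-8601 strings in one timezone sort chronologically
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["modified_utc", "disk", "path"])
        writer.writerows(rows)
```

Note that this only captures modification times as the extraction environment reports them; original creation and access timestamps depend on the source filesystem and how faithfully the imaging and extraction steps preserved them.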
Many thanks to the organizers of Digital Heritage 2015 for the support and feedback; it was a wonderful and very wide-reaching conference.
In February, I took part in the first Advanced Topics webinar for the BitCurator Consortium, centered on using the KryoFlux in an archival workflow. My co-participants, Farrell at Duke University and Dorothy Waugh at Emory University both contributed wonderful insights into the how and why of using the floppy disk controller for investigation, capture and processing. Many thanks to Cal Lee and Kam Woods for their contributions, and Sam Meister for his help in getting this all together.
If you are interested in using the KryoFlux (or do so already) I recommend checking the webinar out, if only to see how other folks are using the board and the software.
An addendum to the webinar for setting up in Linux
If you are trying to set up KryoFlux in a Linux installation (e.g. BitCurator), take a close look at the instructions in the README.linux text file located in the top directory of the package downloaded from the KryoFlux site. It covers the dependencies needed and the process for allowing a non-root user (such as bcadmin) to access floppy devices through KryoFlux. This setup will avoid many permissions problems down the line, as you will not be forced to use the device as root, and I have found it critical to correctly setting up the software in Linux.
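For orientation, the non-root access that README.linux describes follows the standard Linux pattern of a group plus a udev rule. The sketch below shows that general pattern only – the actual rule contents, group name, and USB vendor/product IDs come from the KryoFlux package itself, and the values here are placeholders.

```shell
# General pattern for granting a non-root user access to a USB device.
# Consult README.linux for the real rule file and IDs -- the values
# below ("floppy", xxxx/yyyy, the rules filename) are placeholders.

# 1. Create a group for device access and add your user (e.g. bcadmin):
sudo groupadd -f floppy
sudo usermod -a -G floppy bcadmin

# 2. Install a udev rule matching the device:
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", GROUP="floppy", MODE="0664"' \
  | sudo tee /etc/udev/rules.d/80-kryoflux.rules

# 3. Reload udev rules, then unplug and replug the board:
sudo udevadm control --reload-rules
sudo udevadm trigger
```

After logging out and back in (so the new group membership takes effect), the device should be usable without sudo.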