SAA 2017: Session 201 What We Talk About When We Talk About Processing Born-Digital: Building a Framework for Shared Practice

In advance of the 2017 Annual Meeting, we invited SNAP members to contribute summaries of panels, section meetings, forums, and pop-up sessions. Summaries represent the opinions of their individual authors; they are not necessarily endorsed by SNAP, members of the SNAP Steering Committee, or SAA.

Guest Author: Michael Barera, Archivist, Texas A&M University-Commerce

This session consisted of a panel-led presentation and discussion conducted by Sally DeBauche, Erin Faulder, Shira Peltzman, Kate Tasker, and Dorothy Waugh. Other members of their group who had contributed to the project but were not present at the session were Susanne Annand, Marty Gengenbach, Julie Goldsmith, and Laura Jackson.

The panel began with background on their project to develop a shared-practice framework for processing born-digital materials, terming it the “start of the trail” in an Oregon Trail computer game theme that ran through the whole session. The genesis of the project was the Born-Digital Archiving & eXchange (BDAX) conference in 2016. After the conference concluded, the nine archivists (the five presenting this session and the four other group members mentioned above) banded together, first forming a charter and then conducting a literature review (available online at https://www.zotero.org/groups/632302/born_digital_processing). Their goal: “build a framework to standardize and define levels of born-digital processing and related activities.”

The panelists then gave an overview of their survey, beginning with its rationale: to identify activities and their functional purposes. They described the contents of the survey as “a lot of rambling description…of practices,” and then related their experiences analyzing and synthesizing it down to 30 activities (their draft list of activities is available online at http://bit.ly/2tjJsKU). In their words: “if this sounds complicated, that is because it is.”

From here, the panelists proceeded into their results. They began by noting that digital processing comprises tasks from accession to access, and that they utilized two overlapping standards with sometimes differing goals — in archival processing based on analog materials, the focus is on hierarchy and aggregate management, while in the OAIS digital preservation model, it is instead on relationships and item/object management. Furthermore, they observed that digital processing exists outside standard archival description tools. After this, they mentioned that activities are tied to tools and the limitations of those tools; in their experience, “the tools don’t always work smoothly with each other.” Additionally, they noted that not all materials about digital collections will “live in the finding aid.” In the end, though, they were able to find some consensus on which activities should be part of minimum digital processing.

The panel then turned their attention to the framework itself. They determined that, regardless of its design, their model would need to be:

  • Flexible
  • Modular (usable either in its entirety or for a more limited application)
  • Useful (which they noted is more difficult to balance than meets the eye)
  • Simple
  • Extensible (to help the model stay relevant in a changing archival landscape)

Next, they discussed potentially using a pre-existing framework: they identified four models, but did not select any of them as appropriate for their needs. Therefore, they ultimately decided they had to develop a model of their own, which they dubbed the “Frankenmodel.”

The panel then proceeded to explain their model, in which each processing task has a corresponding grid that includes possible scenarios and processing methods. In all, the “Frankenmodel” consists of four tiers corresponding to increasing levels of processing quality:

  • Baseline
  • Minimal
  • Moderate
  • Intensive

As the panelists explained, the processing activity and its functional purpose are denoted at the top of each grid; defining each processing task was necessary because people’s understandings of these tasks often varied. Each tier in the model also has a corresponding definition, and the tiers serve to identify the actions necessary to process digital materials at increasing levels of granularity.

The panel then defined each tier and gave notes on the collections typically processed at each level:

  • Baseline
    • Definition: “The minimum recommended processing actions that should be taken for any born-digital material.”
    • Collections typically processed at this level: “All collections should undergo these processing actions at a minimum.”
  • Minimal
    • Definition: “These processing actions and methods do not typically require specialized tools and skillsets, and can usually be accomplished without substantial increases in funding or staffing.”
    • Collections typically processed at this level: “Have a low research value, are low risk, and can be made available as-is.”
  • Moderate
    • Definition: “These processing actions and methods may utilize forensic tools and require specialized skillsets.”
    • Collections typically processed at this level: “Have a somewhat higher research value, are somewhat higher risk, and can be made available as-is.”
  • Intensive
    • Definition: “These processing actions and methods are the most time consuming and resource intensive; processing collections at this level typically cannot be accomplished without specialized tools and skillsets.”
    • Collections typically processed at this level: “Are high value, are high risk, and have specific access restrictions or requirements.”

The panelists also noted that the model includes general guidelines for when a collection should be processed at a specific tier; these guidelines are tailored to each processing task and help users determine which tier is appropriate. In addition, the “possible methods” portion of each grid serves as the model’s “how-to” section, providing a range of possible methods for accomplishing a given task.

The panelists then had session participants break out into 15 groups of roughly 10 people each for 20-25 minutes of group discussion, guided in part by discussion packets that were handed out. The panelists assured attendees that there were no wrong answers to their prompts and encouraged us to “tell us what you think.” Each group discussion consisted of selecting a team leader and a notetaker, accessing the team’s folder on Google Drive, reviewing the grid (for approximately two minutes), and then discussing Question A and Question B in succession (for approximately eight or nine minutes each).

The panelists concluded their session by highlighting the next steps for their working group, the first of which is reviewing the feedback received from attendees. They then plan to regroup with all of the working group’s members, create processing grids for the 30 digital processing activities they identified, share updates via the ERS List / BloggERS, and ultimately produce a white paper.
