A few months ago, I wrote about selected digitization readings and how we were going to use them to overhaul our digitization workflows. We’re now a couple of months into our new digitization workflow, and things are starting to run smoothly, but along the way we realized that we wanted a better way to match our digitized files to their descriptions without relying on semantic filenames or separate metadata sheets. Continue reading
Recently, I attended METRO’s Annual Conference where I presented on a panel titled “Getting More Out of (and Into) Your Collections Management System.” I spoke about my experience learning to code as a processing archivist and developing DACSspace. The following is the text from my presentation.
We’ve written a lot on this blog about things we’re doing with the ArchivesSpace API, ranging from find and replace operations in notes to reporting on our DACS compliance across our repository. It should be pretty obvious that we’re big fans of the power and flexibility the API provides to automate what would otherwise be some pretty tedious and error-prone work, and that the data model is getting us to think about archival description outside of the EAD box. Continue reading
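To give a sense of what a find-and-replace over notes looks like against the ArchivesSpace API, here is a minimal sketch. The `replace_in_notes` helper is hypothetical (not the actual script we used); it handles the two shapes note text takes in ArchivesSpace resource JSON, where multipart notes carry text in `subnotes` and singlepart notes carry a list of strings in `content`. The round trip uses the real `X-ArchivesSpace-Session` header and `/repositories/:repo_id/resources/:id` endpoint.

```python
import json
import urllib.request

def replace_in_notes(resource, old, new):
    """Return a copy of an ArchivesSpace resource JSON record with `old`
    replaced by `new` in every note's text content."""
    resource = json.loads(json.dumps(resource))  # deep copy; leave input intact
    for note in resource.get("notes", []):
        # Multipart notes: text lives in subnotes, each with a "content" string
        for sub in note.get("subnotes", []):
            if "content" in sub:
                sub["content"] = sub["content"].replace(old, new)
        # Singlepart notes: "content" is a list of strings
        if "content" in note:
            note["content"] = [c.replace(old, new) for c in note["content"]]
    return resource

def update_resource_notes(base_url, session, repo_id, res_id, old, new):
    """Fetch a resource, rewrite its notes, and post it back.
    Assumes `session` is a valid session token from /users/:user/login."""
    url = f"{base_url}/repositories/{repo_id}/resources/{res_id}"
    headers = {"X-ArchivesSpace-Session": session}
    resource = json.load(urllib.request.urlopen(
        urllib.request.Request(url, headers=headers)))
    updated = replace_in_notes(resource, old, new)
    post = urllib.request.Request(url, data=json.dumps(updated).encode(),
                                  headers=headers)
    urllib.request.urlopen(post)
```

The transform is kept separate from the HTTP round trip so it can be tested on plain JSON before touching a live repository.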
As the newest member of the Processing Team, I have been working on writing a DACS compliance evaluation script called DACSspace. Creating this tool came with a lot of “firsts” – this was my first experience writing code as well as my first time interacting with an API. After a successful (yet sometimes frustrating) three months, I am excited to introduce DACSspace to the archival community and share some reflections on my work.
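The core idea behind a DACS compliance check can be sketched in a few lines. This is not DACSspace itself, and the mapping from DACS single-level elements to ArchivesSpace JSON fields below is my illustrative assumption, but it shows the shape of the evaluation: for each resource record, test whether each required element is present and report what is missing.

```python
def check_dacs_single_level(resource):
    """Return the DACS single-level elements missing from one ArchivesSpace
    resource JSON record. Field mapping is illustrative, not authoritative."""
    note_types = {n.get("type") for n in resource.get("notes", [])}
    checks = {
        "reference code": bool(resource.get("id_0")),
        "title": bool(resource.get("title")),
        "date": bool(resource.get("dates")),
        "extent": bool(resource.get("extents")),
        "creator": any(a.get("role") == "creator"
                       for a in resource.get("linked_agents", [])),
        "scope and content": "scopecontent" in note_types,
        "conditions governing access": "accessrestrict" in note_types,
        "language": bool(resource.get("lang_materials"))
                    or "langmaterial" in note_types,
    }
    return [element for element, present in checks.items() if not present]
```

Running this over every resource fetched from the API yields a repository-wide compliance report.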
DACSspace is available on GitHub.
In preparation for upcoming changes to the display of digital objects in DIMES, I’ve been pursuing some enhancements to data export from ArchivesSpace. This began with a plugin to improve METS exports, including embedded MODS records, but then grew into a more comprehensive project to automate the export of updated resource records, version that data, and then push it to DIMES.
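The export-automation piece of that project can be sketched roughly as follows. This is a simplified assumption of the approach, not the actual scripts: it uses the real ArchivesSpace endpoints for listing resource ids (`all_ids` with `modified_since`) and exporting EAD (`/repositories/:repo_id/resource_descriptions/:id.xml`), and stubs out the versioning and push-to-DIMES steps.

```python
import json
import os
import urllib.request

def export_url(base_url, repo_id, res_id, unpublished=False):
    """Build the EAD export URL for a resource, per the ArchivesSpace REST API."""
    return (f"{base_url}/repositories/{repo_id}/resource_descriptions/"
            f"{res_id}.xml?include_unpublished={str(unpublished).lower()}")

def export_updated(base_url, session, repo_id, since_epoch, out_dir="ead"):
    """Export EAD for every resource modified since a Unix timestamp.
    Assumes `session` is a valid session token."""
    headers = {"X-ArchivesSpace-Session": session}
    ids_url = (f"{base_url}/repositories/{repo_id}/resources"
               f"?all_ids=true&modified_since={since_epoch}")
    ids = json.load(urllib.request.urlopen(
        urllib.request.Request(ids_url, headers=headers)))
    os.makedirs(out_dir, exist_ok=True)
    for res_id in ids:
        req = urllib.request.Request(
            export_url(base_url, repo_id, res_id), headers=headers)
        with open(os.path.join(out_dir, f"{res_id}.xml"), "wb") as f:
            f.write(urllib.request.urlopen(req).read())
    # Versioning (e.g. a git commit of out_dir) and the push to DIMES
    # would follow here.
```

Scheduling this on a timer with the last run's timestamp keeps the exported records continuously in sync with edits made in ArchivesSpace.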
Over the weekend, we finished up a year-long project to import description for almost every grant the Ford Foundation ever made. This is the same project that I wrote a post about last October. To refresh your memory, we started with 54,644 grant files described in an Excel spreadsheet, and we wanted to transform much of that data into EAD and then import it into ArchivesSpace. Normally this project wouldn’t require an entire year, but we realized over the course of the project that we did not have efficient ways to reconcile our structured data against Library of Congress vocabularies. The post in October laid out our methods for reconciling subjects against LoC data; this post will detail the methods we used to reconcile corporate names against the LCNAF. Continue reading
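A minimal sketch of the reconciliation idea, using the public id.loc.gov suggest service rather than our actual pipeline: normalize the spreadsheet form of a name, query the LCNAF suggest endpoint, and accept the authority URI only on an exact normalized match. The matching rules here (lowercasing, trailing-punctuation stripping) are my simplifying assumptions; real reconciliation needs fuzzier logic and human review of near-misses.

```python
import json
import re
import urllib.parse
import urllib.request

def normalize(name):
    """Lowercase, collapse whitespace, and drop trailing punctuation so that
    spreadsheet forms like 'Ford Foundation.' can match an LCNAF label."""
    return re.sub(r"\s+", " ", name.strip().rstrip(".,")).lower()

def lcnaf_match(name, timeout=10):
    """Query the id.loc.gov names suggest service; return the authority URI
    for an exact normalized match, else None. Response is OpenSearch-style:
    [query, [labels], [descriptions], [uris]]."""
    url = ("https://id.loc.gov/authorities/names/suggest/?q="
           + urllib.parse.quote(name))
    _, labels, _, uris = json.load(urllib.request.urlopen(url, timeout=timeout))
    for label, uri in zip(labels, uris):
        if normalize(label) == normalize(name):
            return uri
    return None
```

Looping `lcnaf_match` over thousands of corporate names and logging the non-matches for manual review is the basic workflow; rate-limiting and caching matter at that scale.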
It’s been a busy couple of weeks for conferences! On Friday, Bonnie and I attended a Born-Digital Workflows CURATEcamp, held at the Brooklyn Historical Society. We gave a brief presentation on our workflows for arranging and describing born-digital materials, and also learned a lot from other attendees. Continue reading
CUSTOMIZING THE APPLICATION – 22 hours in 4 months
While we were mostly happy with the base ArchivesSpace application, we did want to make a few changes to the display and functionality in order to make it more user-friendly. I started out by referencing the Customizing and Theming ArchivesSpace documentation as well as the developer screencasts. Continue reading