Whitefriars of Aberdeen: A Working List of Witnesses to Carmelite Charters

Aberdeen History alumna Julia Vallius has created a Working List identifying the people who witnessed, or affixed their wax seals to, many of the charters of the Aberdeen Carmelite friars from 1338 to 1431.

Detail from MARISCHAL/1/6/1/3/15 in University of Aberdeen Museums and Special Collections, licensed under CC BY 4.0.

What is it? The Working List tabulates information about the surviving Aberdeen Carmelite charters, listed chronologically. For instance, for a charter of 1399 by which William Crab donated land to the Carmelites, it records that the witnesses included the provost, Adam de Benyn, and twelve other named men. It also records that the seals of William Crab and of two bailies of the burgh (Simon de Benyn and William Blyndcele) were attached to the charter. This work helps to identify the activities of burgh officials and other prominent figures, including in the period from c.1414 to c.1433 when there is a gap in the main council register series.

The charters listed here are from the Marischal College Archives, part of the University of Aberdeen’s Special Collections. The Marischal collection contains, among other material, the charters of the Carmelites, or Whitefriars, who were first established in Aberdeen in 1273.

Two sample transcriptions of charters are included. One, in the Middle Scots vernacular, records a grant made in 1421 by Elizabeth Gordon of Gordon, mother of the first earl of Huntly. She made her own gift and also confirmed “ye gift of my lady my eldmoder [grandmother] dam margret of keth ye qwilk my eldmoder has gifin to my said bretheris [the friars] of before tyme gone”.

The charters concerned have some playful illuminations, including a cockerel (shown above), the head of a crowned king in a charter of David II, and intertwined fish (shown below).

Detail from MARISCHAL/1/1/1/4/4 in University of Aberdeen Museums and Special Collections, licensed under CC BY 4.0.

Where is it? The Working List is available under a Creative Commons licence on the OSF (Open Science Framework) at https://osf.io/rdsfg/. Its long title is Working List of Witnesses and Authentication of Carmelite Charters, Aberdeen: Held in the Marischal College Archives (University of Aberdeen Museums and Special Collections), version 1.0, https://doi.org/10.17605/osf.io/rdsfg.

Detail from MARISCHAL/1/1/1/4/4 in University of Aberdeen Museums and Special Collections, licensed under CC BY 4.0.

Where does it come from? The Working List began as the appendix created by Julia Vallius for her Senior Honours Dissertation entitled ‘Textual identities and urban communities: Understanding the role of charters and burgh records in the formation and creation of community identities, using the Aberdeen Carmelites charters as a case study’ (April 2020), supervised by Jackson Armstrong. Julia’s dissertation won the Kathleen Edwards Prize in Medieval History. Julia is currently undertaking a PhD in Medieval History at the University of Glasgow.

Julia and Jackson have worked together over time to compile this first version of the Working List of Witnesses. Future versions can update, extend and augment the resource. Julia and Jackson are grateful for the support of Museums and Special Collections throughout this project.

Working List DOI link [ https://doi.org/10.17605/osf.io/rdsfg ]

Burgh Court Roll of 1317 Digitised & Online

Earlier this year the Stair Society published a digital facsimile of the Aberdeen Burgh Court Roll of 1317. These images are available via the Stair Society’s digitised manuscripts page and are reproduced by kind permission of Aberdeen Archives, Gallery and Museums.

This is the single surviving roll from the medieval burgh courts of Aberdeen, dating from August–October 1317. It thus predates the later council register volumes, which survive from 1398 onwards. Written on parchment, the burgh court roll is a unique survival from the courts of a Scottish burgh of the early fourteenth century.

A translation of the roll into English, and a discussion of its contents, is available in Andrew R. C. Simpson and Jackson W. Armstrong, ‘The Roll of the Burgh Courts of Aberdeen, August–October 1317’, in Miscellany Eight, ed. by A. M. Godfrey, Stair Society 67 (Edinburgh, 2020), 57–93 (see The Roll of the Burgh Courts of Aberdeen, August–October 1317 (stairsociety.org)).

For further discussion of the roll and its context, see Andrew R. C. Simpson, ‘Urban Legal Procedure in Fourteenth Century Scotland: A fresh look at the 1317 court roll of Aberdeen’, in Comparative Perspectives in Scottish and Norwegian Legal History, Trade and Seafaring, ed. by Andrew Simpson and Jørn Øyrehagen Sunde (Edinburgh, 2023), 181–208.

And for a recent overview of this new book see this post by Andrew Simpson on the Edinburgh Private Law Blog.

A link to the burgh court roll at the Stair Society is now featured within aberdeenregisters.org, with a dedicated page linked from the main menu.

Student team builds experimental new ARO search tool

MSc students at Aberdeen have developed a new search platform, experimenting with different search functions and displays.

Team Delta members with Jackson Armstrong

“Team Delta”, one of the student groups tasked with carrying out a client-focused project as part of the MSc Information Technology course, worked with the Aberdeen Burgh Records Project and the Aberdeen City Archives in winter–spring 2023. The group members were Samuel Lawal, Zixiang Tang, Shahbaaz Hussain, Emmanuel Boamah, Yan Zhang, and Layek Ahmad.

The team’s challenge was twofold: to build a search platform on a framework other than Ruby on Rails, and to develop search facilities that would help researchers identify occurrences of personal names and count the entries recorded under courts of different types. Team Delta built the new search tool in Python and Django.

The new platform is called ‘Enhanced Search 1.0 for ARO’, and it is linked from here.

During the second semester, the team met regularly with Jackson Armstrong, who represented the Aberdeen Burgh Records Project as the client. The challenge of creating a way to search for courts proved the most successful element of the exercise: the tool searches the court-heading divisions in the ARO XML corpus for particular strings of text, then counts the entries that follow each heading returned in the result, as in the sketch below. This gives researchers a useful way to quantify how ‘busy’ different types of court were, the number of entries representing the cases heard after each heading in the registers.
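To make the counting idea concrete, here is a minimal sketch (in Python, like the tool itself) of how such a courts search might work. It assumes a simplified TEI-like layout in which each court session is a <div> containing a <head> (the court heading) followed by one <p> per entry; the real ARO markup, and Team Delta’s Django implementation, are richer than this.

    # A minimal sketch of the courts search, under the assumptions above:
    # each court session is a <div> with a <head> and one <p> per entry.
    import xml.etree.ElementTree as ET

    TEI = "{http://www.tei-c.org/ns/1.0}"

    def count_entries_by_court(xml_path: str, query: str) -> dict:
        """Return {heading text: entry count} for headings containing `query`."""
        results = {}
        for div in ET.parse(xml_path).iter(TEI + "div"):
            head = div.find(TEI + "head")
            if head is not None and head.text and query.lower() in head.text.lower():
                # Count the entry elements that follow this heading.
                results[head.text.strip()] = len(div.findall(TEI + "p"))
        return results

    # Hypothetical usage: how busy were particular courts in one volume?
    # print(count_entries_by_court("aro_volume_7.xml", "curia"))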

The challenge of searching for names, or groups of names (such as those in the ‘Working List of Provosts, Bailies and Sergeands’), proved more difficult, and the resulting name search facility functions as a keyword search across the whole ARO corpus. This is powerful but non-specific. For both new search facilities, ‘Enhanced Search 1.0 for ARO’ includes a graph feature which displays results visually, as the number of occurrences by year. The sketch below illustrates the idea for the search term ‘james’.
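The counting behind that graph feature can be sketched as follows. This is a hedged illustration only: it assumes the entries are already available as (year, text) pairs, and it does not show how the real tool derives years from the ARO markup or renders the chart.

    # Count case-insensitive occurrences of a term, grouped by year,
    # assuming entries are available as (year, text) pairs.
    from collections import Counter

    def occurrences_by_year(entries, term):
        counts = Counter()
        term = term.lower()
        for year, text in entries:
            counts[year] += text.lower().count(term)
        return dict(sorted(counts.items()))

    # Toy data for illustration (not real register content):
    entries = [(1442, "james of camera"), (1442, "james fichet"), (1468, "no match")]
    print(occurrences_by_year(entries, "james"))  # {1442: 2, 1468: 0}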

‘Enhanced Search 1.0 for ARO’ is available via ARO Resources and Publications linked from the ARO website. It is hosted for the time being on pythonanywhere.com.

One highlight of the project was Team Delta’s visit to the Charter Room at the Aberdeen City Archives to see the original council register volumes, to meet with City Archivist Phil Astley, and to demonstrate the new search tool.

The trail blazed by Team Delta with ‘Enhanced Search 1.0’ opens the way for future versions and features of the tool, and for new ways for the ARO to be searched and presented.

The Aberdeen Burgh Records Project thanks all of Team Delta, and Bruce Scharlau, who coordinated the MSc IT project course.

Playing in the Archives

LACR alumnus William Hepburn has begun a Fellowship to investigate how Aberdeen’s UNESCO-recognised medieval records could provide inspiration for video game design.

In this role he will assess the effectiveness of video games as a scholarly medium for examining the burgh records and the historical subjects they inform.

The project is funded by an Arts and Humanities Research Council (AHRC) Creative Economies Engagement Fellowship through the Scottish Graduate School for Arts and Humanities. It is called ‘Playing in the Archives: Game Development with Aberdeen’s Medieval Records’.

William will spend nine months investigating the potential for creative development from the Burgh Records, working alongside experts from industry. See the recent media announcements at the links below:

Press release: https://www.abdn.ac.uk/news/12911/

SGSAH: https://www.sgsah.ac.uk/about/news/headline_633508_en.html

Who Killed David Dun? Home Version

Twine game

 

By William Hepburn

In 2017 I designed an event called ‘Who Killed David Dun?’ for the first Granite Noir festival, at which I presented a fictional murder mystery narrative based on historical evidence from the Aberdeen Council Registers. The twist was that audience choices, decided by majority vote, guided the story, a bit like the recent ‘Bandersnatch’ episode of Black Mirror on Netflix; like ‘Bandersnatch’, it drew inspiration from the interactive adventure books of the 1980s and 1990s, such as the Fighting Fantasy series.

The story was built using the interactive fiction tool Twine. However, the game was made in a bespoke fashion for a live setting: it consisted of a framework of choices in Twine shown on a projector, a script of all the narrative branches read by me as the audience progressed through the story, and paper handouts for the audience containing extracts from the medieval Aberdeen Council Registers. I have now integrated these elements so that the story can be played on a computer or (hopefully!) a mobile device. The only element of the original event not carried over is the series of transcription challenges the audience had to pass to progress the narrative.

The game can be played in your web browser here.

 

New UK guide to Archive and Higher Education collaboration

New national guidance has been published by The National Archives (TNA) in partnership with History UK: the ‘Guide to Collaboration between the Archive and Higher Education Sectors’.

LACR and the wider Aberdeen Burgh Records Project feature in two case studies within the guidance, launched this summer. One is entitled ‘From cooperation to coordination – developing collaborative working’, and the other is entitled ‘Not another database: digital humanities in action’.

TNA’s Higher Education Archive Programme (HEAP) and History UK worked together to write this new 2018 edition of the guidance. It refreshes the original 2015 guidance, which was developed by TNA with Research Libraries UK. Its aim is to improve collaboration between archives and academic institutions of all kinds.

In addition to case studies of collaboration from across the archives and higher education sectors, the refreshed guidance includes:

  • Practical ways to identify, develop and sustain cross-sector collaborations
  • Insights into the drivers, initiatives, support, and language of the archives and higher education sectors
  • Explanations on how to understand outputs and outcomes, and organisational and project priorities
  • Guidance on measuring impact in cross-sector collaborations
  • An outline of recent updates to REF, TEF and Research Councils

For a short introduction to the guidance, see the link here. The LACR team, itself a strong archives and higher education collaboration, is delighted that the project features in this new guide!

Meet our new Text Enrichment Research Fellow, Wim Peters

By Wim Peters

I joined LACR in December 2017 as Text Enrichment Research Fellow, a role in which I provide computational support for the project’s transcription activities.

What is my background? Coming from a linguistic background (Classics, psycholinguistics), I entered the world of computational lexicography after my linguistics studies in the Netherlands. From then on my main activity was building multilingual lexical knowledge bases, such as computational lexicons, machine translation dictionaries, term banks and thesauri.

From 1996 I worked as a Senior Research Associate/Research Fellow in the Natural Language Processing (NLP) Group in the Department of Computer Science at the University of Sheffield, where I received my PhD in the areas of computational linguistics and AI. The group works with GATE (General Architecture for Text Engineering), a framework for language engineering applications that supports efficient and robust text processing (http://www.gate.ac.uk). In Sheffield I specialised further in knowledge acquisition from text and the modelling of that knowledge in formal representations such as RDF and OWL. I participated in many projects and application fields, including fisheries, digital archiving and law.

NLP for legal applications is a growing area in which I have been engaged. Since legal documents mostly consist of unstructured text, NLP enables the automatic filtering of legal text fragments, the extraction of conceptual information, and automated support for interpretation through close reading. By combining NLP with Semantic Web technologies such as XML and ontologies, novel methods can be developed to analyse the law, attempt conceptual modelling of legal domains, and support automated reasoning. For instance, in case-based reasoning, Adam Wyner and I applied natural language information extraction techniques to a sample body of cases in order to automatically identify and annotate the relevant facts (or ‘case factors’) that shape legal judgments. Annotated case factors can then be extracted for further processing and interpretation.1

Another example of my activity in conceptual extraction and modelling is the creation of a semi-automatic methodology and application for identifying the Hohfeldian relation ‘Duty’ in legal text.2 Using GATE for the automated extraction of Duty instances and associated roles such as DutyBearer, the method builds an incremental knowledge base intended to support scholars in their interpretation.3
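As a rough illustration of what pattern-based extraction involves, the Python sketch below matches a deliberately naive ‘X shall Y’ template and records the subject as a candidate DutyBearer. This is not the GATE pipeline itself, which rests on proper linguistic analysis rather than bare regular expressions.

    import re

    # Naive template: a capitalised subject, then "shall", then an action.
    DUTY = re.compile(r"(?P<bearer>[A-Z][\w ]+?)\s+shall\s+(?P<action>[\w ]+)")

    def extract_duties(text):
        """Yield (DutyBearer, action) candidates from a text fragment."""
        for m in DUTY.finditer(text):
            yield m.group("bearer").strip(), m.group("action").strip()

    sample = "The seller shall deliver the goods. The buyer shall pay the price."
    print(list(extract_duties(sample)))
    # [('The seller', 'deliver the goods'), ('The buyer', 'pay the price')]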

I also work on creating or transforming text representation structures. In a recent project with the Law School at the University of Birmingham, my main task was reformatting legal judgments from national and EU law in 23 languages, for storage and querying, into both the Open Corpus Workbench format (http://cwb.sourceforge.net/index.php) and inline, TEI-compliant XML.

Finally, back to the present. My main interest is in using language technology to serve Digital Humanities (DH) scholarly research. Interpreting text involves the methodological application of NLP techniques and the formal modelling of the extracted knowledge, within a collaborative setting that brings together expert scholars and language technicians.

Language technology should assist interpretative scholarly processes. Computational involvement in DH needs to ensure that humanities researchers, a considerable number of whom remain to be convinced of the advantages of this digital revolution for their research, will embrace language technology. This will further researchers’ aims in textual interpretation, for instance in the selection of relevant text fragments, and in the creation of an integrated knowledge structure that makes semantic content explicit and uniformly accessible.

The collaborative automatic and manual knowledge acquisition workflow is illustrated in the figure below.

[Figure: the collaborative automatic and manual knowledge acquisition workflow]

Within the DH space of LACR, I appreciate the philological building blocks that are being laid. The XML structure allows further exploration of the data through querying with XQuery. XML-aware analysis tools (e.g. AntConc4 and GATE) can be used for analysis and for the future addition of knowledge about the content of the registers, for instance the formulaic nature of the legal language (based on n-grams) and the semantic impact of some regularly used patterns of words. For example, the Latin phrase ‘electi fuerunt’ (‘they were elected’) collocates in the text with entities such as persons, dates and offices, which fit into a conceptual framework of ‘election’.
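As a small illustration of that kind of collocation analysis, the sketch below counts the words appearing near ‘electi fuerunt’ in plain text extracted from the XML; the window size and the sample sentence are illustrative assumptions, not project code.

    import re
    from collections import Counter

    def collocates(text, phrase, window=5):
        """Count tokens occurring within `window` tokens of each phrase hit."""
        tokens = re.findall(r"\w+", text.lower())
        target = phrase.lower().split()
        hits = Counter()
        for i in range(len(tokens) - len(target) + 1):
            if tokens[i:i + len(target)] == target:
                hits.update(tokens[max(0, i - window):i])  # left context
                hits.update(tokens[i + len(target):i + len(target) + window])  # right context
        return hits

    sample = ("in curia capitali electi fuerunt in officium ballivorum "
              "Johannes Fichet et Willelmus de Camera")
    print(collocates(sample, "electi fuerunt").most_common(5))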

Looking to the future, standard representations such as TEI-XML ensure that information can be added flexibly and incrementally as metadata for the purpose of scholarly corpus enrichment. Knowledge acquisition through named entity recognition, term extraction and textual pattern analysis will help to build an incremental picture of the domain. This knowledge can then be formalised in knowledge representation languages such as RDF and OWL, which will provide an ontological backbone for the extracted knowledge and enable connections to Linked Data across the web (http://linkeddata.org/).
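To give a flavour of that formalisation step, here is a minimal sketch using the Python rdflib library. The namespace, class and property names are invented for illustration and do not belong to any published ARO ontology.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    ARO = Namespace("http://example.org/aro/")  # hypothetical namespace

    g = Graph()
    g.bind("aro", ARO)

    # One extracted 'election' fact, expressed as triples:
    election = ARO["election/example-1"]
    g.add((election, RDF.type, ARO.Election))
    g.add((election, ARO.office, Literal("bailie")))
    g.add((election, ARO.electedPerson, Literal("Johannes Fichet")))

    print(g.serialize(format="turtle"))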


  1. Wyner, A., and Peters, W. (2010), Lexical semantics and expert legal knowledge towards the identification of legal case factors, JURIX 2010. 
  2. For a description of Hohfeld’s legal relations see e.g. http://www.kentlaw.edu/perritt/blog/2007/12/hohfeldian-primer.html. 
  3. Peters, W. and Wyner, A. (2015), Extracting Hohfeldian Relations from Text, JURIX 2015. 
  4. http://www.laurenceanthony.net/software/antconc/ 

Digital Humanities – What’s the fuss about?

by Anna D. Havinga

“Digital Humanities” (DH) has become a vogue word in academia in the last few decades. DH centres have been set up, DH workshops and summer schools are held regularly all over the world, and the number of DH projects is increasing rapidly. But what is all the fuss about?

 

What is Digital Humanities?

There are numerous articles that discuss what DH is and is not. It is generally agreed that just posting texts or pictures on the internet, or using digital tools for research, does not qualify as DH.1 There are, however, few works that give a concise definition of DH. Kirschenbaum quotes a definition from Wikipedia, which he describes as a working definition that “serves as well as any”.2 In my view, the Wikipedia definition of DH3 has improved further since 2013, when Kirschenbaum’s article was published. I believe it now captures the essence of DH more accurately:

[…] [A] distinctive feature of DH is its cultivation of a two-way relationship between the humanities and the digital: the field both employs technology in the pursuit of humanities research and subjects technology to humanistic questioning and interrogation, often simultaneously. Historically, the digital humanities developed out of humanities computing, and has become associated with other fields, such as humanistic computing, social computing, and media studies. In concrete terms, the digital humanities embraces a variety of topics, from curating online collections of primary sources (primarily textual) to the data mining of large cultural data sets to the development of maker labs. Digital humanities incorporates both digitized (remediated) and born-digital materials [i.e. materials that originate in digital form, ADH] and combines the methodologies from traditional humanities disciplines (such as history, philosophy, linguistics, literature, art, archaeology, music, and cultural studies) and social sciences, with tools provided by computing (such as Hypertext, Hypermedia, data visualisation, information retrieval, data mining, statistics, text mining, digital mapping), and digital publishing. (https://en.wikipedia.org/wiki/Digital_humanities)

Our Law in Aberdeen Council Registers project can serve as a prime example of a DH project: we create digital transcriptions of the Aberdeen Burgh Records (1397–1511) with the help of computing tools. This means that we type the original handwritten text into a software programme in a format that can be understood by computers. More specifically, we use the oXygen XML editor with the HisTEI add-on to create transcriptions that are compliant with the Text Encoding Initiative (TEI) guidelines (version P5).4 In this way, we produce a machine-readable and machine-searchable text.5 But what benefits does this bring? Why do we go through all this effort when pictures of the Aberdeen Burgh Records are already available online?6

 

What are the benefits of a digital, transcribed version of a text?

Apart from the obvious benefit that a digital, transcribed version of a text is much easier to read than the original handwriting, it also allows information to be added to the text. With the help of so-called ‘tags’, a text can be enriched with all kinds of structural annotations and metadata. Tagging here means adding XML annotations to the text. For example, the textual passages in the Aberdeen Burgh Registers, which are mainly written in Latin or Middle Scots, can be marked up as such using the ‘xml:lang’ attribute. A researcher interested in the use of Middle Scots in these registers could then find all Middle Scots sections in the corpus very easily with the help of a text analysis tool such as AntConc or SketchEngine, without having to plough through the sections written in Latin. More generally, enriching the text with tags means that a researcher does not have to read through all of the more than 5,000 pages of the Aberdeen Council Registers that we will transcribe in order to find what s/he is looking for. A machine-readable and machine-searchable text not only saves time when researching a particular topic but is also generally more flexible than a printed text, as further tags can be added and unwanted tags can be hidden. Furthermore, a digital text allows us to ask different questions of a text corpus. It is those possible questions, plus a variety of other issues, that have to be considered before embarking on a DH project.


Transcription of volume 7 of the Aberdeen Council Registers (p. 60), annotated with XML tags
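As a small, hedged illustration of how such language tagging pays off, the Python sketch below pulls the Middle Scots sections out of a deliberately simplified, TEI-like fragment; the element names and the sample text are assumptions, not the project’s actual schema or content.

    import xml.etree.ElementTree as ET

    XML_NS = "http://www.w3.org/XML/1998/namespace"  # namespace of xml:lang

    fragment = """
    <div>
      <p xml:lang="la">curia legalis burgi de Aberdeen</p>
      <p xml:lang="sco">the said day the balyeis ordanit ...</p>
    </div>
    """

    root = ET.fromstring(fragment)
    for p in root.iter("p"):
        if p.get("{%s}lang" % XML_NS) == "sco":  # keep only Middle Scots
            print(p.text.strip())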

 

What has to be considered when setting up a DH project?

There are several major questions to consider before starting a DH project of the sort we are carrying out: What do you want to get from the material you work on? Who else will be using it? In what way will it be used? Which research questions could be asked? Information on the possible users of the born-digital material is essential in order to decide which information should be marked up in the corpus of text. This is, of course, also a matter of time (and money), since adding information to the original text in the form of tags takes time. The balance between time and enrichment has to be determined for each individual DH project. In our project we decided to go through different stages of annotation, starting with basic annotations (e.g. expansions, languages) and adding further tags (e.g. names, places) later; a sketch of such a later tagging pass follows below. Users will also be able to add further annotations that may be specific to their own research projects. Beyond these considerations, choices about software and hardware, tools, platforms, web development, infrastructure, server environment, interface design and so on have to be made before embarking on the DH project. Anything that is not determined at the beginning of the project may lead to considerable extra effort at a later stage.
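As noted above, here is a minimal sketch of what a later annotation pass might look like: wrapping known personal names in simple <name> tags. The name list is illustrative, and a real pass would operate on the parsed XML tree rather than on plain text as here.

    # Illustrative second-pass tagging of known personal names.
    KNOWN_NAMES = ["William Crab", "Adam de Benyn"]

    def tag_names(text):
        """Wrap each known name in an illustrative <name> tag."""
        for name in KNOWN_NAMES:
            text = text.replace(name, '<name type="person">%s</name>' % name)
        return text

    print(tag_names("witnessed by William Crab and Adam de Benyn"))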

It is certainly worth going through all this effort, and to us it is clear why DH has become such a big thing: it eases research, extends the toolkit of traditional scholarship, and opens up material to a wider audience of users.7 With tags we can enrich texts with additional information, which can then change the nature of humanities inquiry. DH projects are by nature about networking and collaboration between different disciplines, which is certainly the way forward in the humanities.

 

 


  1. Anne Burdick et al. 2012. Digital_Humanities. Cambridge, MA: MIT Press, p. 122. 
  2. Matthew G. Kirschenbaum. 2013. ‘What Is Digital Humanities and What’s It Doing in English Departments?’ In: Melissa Terras, Julianne Nyhan, Edward Vanhoutte (eds), Defining Digital Humanities. A Reader. Farnham: Ashgate, 195-204, p. 197. 
  3.  https://en.wikipedia.org/wiki/Digital_humanities [accessed 19.07.2016] 
  4.  http://www.tei-c.org/Guidelines/P5/ [accessed 25.07.2016] 
  5. In further blog posts, we will explain in more detail how we do this. 
  6. http://www.scotlandsplaces.gov.uk/digital-volumes/burgh-records/aberdeen-burgh-registers/ [accessed 25.07.2016] 
  7. Anne Burdick et al. 2012, p. 8.