Session 5 — TEI Friday

Friday 09:30 - 11:00

High Tor 3

Chair: Katherine Rogers

Creating Processing Models for Scholarly Digital Editions

  • James Cummings
  • Magdalena Turska

University of Oxford

The Guidelines of the Text Encoding Initiative (TEI) are the de facto standard for the creation of high-quality scholarly digital editions. One barrier to TEI use is its generalised nature and the many kinds of textual phenomena it enables users to document. At DHC 2012 James Cummings discussed how customisation of the TEI for individual projects gives them a way to overcome this by documenting their project’s specific needs. This paper builds on the DHC 2012 paper to tackle a different problem: the difficulties developers face in implementing processing workflows for this highly generalised scheme. Developers can, to be sure, look at a project’s schema and decide how to process individual elements for whatever outputs are needed, but there is no method of documenting a processing model in TEI ODD customisations. This lack of a model exists because the intended uses are so wide and varied: there can be many possible outputs from any one TEI document. You might generate a PDF, DOCX, RDF, EPUB, or HTML website from either a single document or a collection of them, and each of those might produce many different views of the document and/or extracted lists of linked metadata. It is important to ensure that these different processing models are documented consistently, through planned revisions to the TEI ODD customisation language, so that One Document really can Do it all. As part of the Marie Curie ITN ‘DiXiT’ project on scholarly digital editions, the University of Oxford is investigating processing models for such editions. This work is based on the TEI Simple initiative, which the TEI Consortium is currently developing and for which Oxford will lead on the definition of processing information. This paper will discuss how to document and generate processing workflows based on a project’s TEI customisation.
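
The processing-model idea can be sketched in miniature as a lookup from element names to per-output behaviours. The element names below (head, p, hi) are real TEI elements, but the rendering rules and the Python representation are hypothetical illustrations, not the TEI Simple design:

```python
# Hypothetical sketch of a documented processing model: each TEI element
# is mapped to one behaviour per output format, so a single declaration
# can drive HTML, LaTeX, and other workflows.

PROCESSING_MODEL = {
    "head": {"html": lambda text: f"<h1>{text}</h1>",
             "latex": lambda text: f"\\section{{{text}}}"},
    "p":    {"html": lambda text: f"<p>{text}</p>",
             "latex": lambda text: text + "\n\n"},
    "hi":   {"html": lambda text: f"<em>{text}</em>",
             "latex": lambda text: f"\\emph{{{text}}}"},
}

def render(element, text, output="html"):
    """Apply the element's declared behaviour for the requested output."""
    try:
        return PROCESSING_MODEL[element][output](text)
    except KeyError:
        return text  # default: pass content through unchanged

print(render("head", "A Scholarly Edition", "html"))  # <h1>A Scholarly Edition</h1>
print(render("hi", "emphasised", "latex"))            # \emph{emphasised}
```

The point of the sketch is that the mapping itself, not the rendering code, becomes the documented artefact: in TEI Simple such declarations would live inside the ODD customisation alongside the schema.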

Remediating Giacomo Leopardi’s Zibaldone: Harvesting Semantic Networks in the Fragmentary Research Notebook

  • Silvia Stoyanova

Trier Center for Digital Humanities

The proposed paper will discuss the remediation of the Zibaldone (“commonplace book”) – the voluminous research notebook of the 19th-century Italian author Giacomo Leopardi – into a hypertext research platform, and will suggest it as a paradigm for mediating the fragmentary research notebook genre, to which belong the notebooks of Coleridge, Joubert, Valéry, and others.

The argument for remediation is posed by the nature of the text, which is written in fragments, marked by their date of composition and connected by thousands of references that establish overlapping semantic networks. Since the Zibaldone evolves from the tradition of commonplace books into the modern typology of the intellectual diary, it also presents a significant network of references to other works, with which the author critically engages. With the intention of translating his notebook into publications on a range of subjects, Leopardi furnished it with a thematic index referring to over 10,000 paragraphs.

The TEI encoding of the hypertextual features of the text (the references and the index tags), of its structural divisions (pages, paragraphs, dates, marginal and interlinear annotations, quotations, bibliographic references), and of its content elements (such as person and place names) makes it possible to exploit the relational dimensions of the text and to harvest its inter- and intra-textual semantic networks as histograms, network visualizations, statistical charts, etc.
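
How such encoded references yield a harvestable network can be illustrated with a toy example. The markup below is a deliberately simplified, hypothetical stand-in for the project’s actual TEI encoding, not a reproduction of it:

```python
# Illustrative sketch: extract a fragment-to-fragment reference network
# from simplified TEI-like markup. Element names and identifiers here
# are invented for the example.
import xml.etree.ElementTree as ET

SAMPLE = """
<text>
  <div n="Z100"><p>On pleasure <ref target="#Z250"/> and memory <ref target="#Z320"/>.</p></div>
  <div n="Z250"><p>On infinity <ref target="#Z100"/>.</p></div>
  <div n="Z320"><p>On language.</p></div>
</text>
"""

def reference_network(xml_string):
    """Collect (fragment, referenced-fragment) edges from <ref> elements."""
    root = ET.fromstring(xml_string)
    edges = []
    for div in root.iter("div"):
        source = div.get("n")
        for ref in div.iter("ref"):
            edges.append((source, ref.get("target").lstrip("#")))
    return edges

print(reference_network(SAMPLE))
# [('Z100', 'Z250'), ('Z100', 'Z320'), ('Z250', 'Z100')]
```

An edge list of this kind is the raw material for the visualizations mentioned above: it can be counted into histograms or loaded directly into a graph library for network diagrams.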

In its larger scope, the research platform aspires to become a collaborative space, where users would be able to add and share their own annotations to the text and also contribute to expanding the site’s apparatus, such as a database of its critical bibliography.

The project’s future trajectory is to extend the computational methods adopted for the Zibaldone in order to investigate the fragmentary notebook genre from a phenomenological perspective and to explore the potential of digital technology to qualitatively mediate its fragmentariness.

Electrifying Intoxicants: Building a Database of Alcohol, Nicotine, Caffeine, and Opium in Early Modern England

  • James Brown

University of Sheffield

Established in October 2013, ‘Intoxicants and Early Modernity: England, 1580-1740’ is a three-year ESRC/AHRC research project exploring the importance of intoxicants and intoxication – alcohol, nicotine, caffeine, opium, and associated practices – to the economic, social, political, and cultural life of early modern England (http://www.intoxicantsproject.org). A central output of the project, and the evidential basis for many of its traditional research publications, is a relational database that will affiliate and make publicly accessible several fresh datasets derived from analysis of a wide range of primary sources featuring intoxicants (including port books, court depositions, licensing materials, probate inventories, objects, and printed texts). This paper will provide an introduction to and overview of the digital methods adopted – and some problems confronted – by the project as it attempts to generate and federate a variety of heterogeneous materials not usually combined within the same electronic resource. It will describe its approach to ontology modelling and data design; discuss its bespoke online forms for data entry and creation, in use by two research associates across two case study sites (Cheshire and Norfolk); and introduce its plans for the discovery interface, which will go substantially beyond conventional results listings by incorporating a suite of dynamically generated, in-browser visualisations of intoxicant-related entities (graphs and charts, timelines, maps, topic models, and network diagrams). Overall, the paper will provide insights into the intellectual and technical development of what is hoped will become a major new tool for the social history of early modern England, while reflecting more broadly on the opportunities and challenges of collaborative digital scholarship in the humanities.