
Workshop report: Easy Tools for Difficult Texts (18.-19.4.13, Den Haag)

Digital Humanities Workshop funded by the COST Action IS1005: Medieval Europe – Medieval Cultures and Technological Resources, and Huygens ING

I recently had the opportunity to attend the workshop Easy Tools for Difficult Texts, a two-day event at Huygens ING in Den Haag that attracted both Humanities scholars and computing experts with an overlapping interest – to create better and more sophisticated digital tools for answering scholarly questions. Being a philologist first and a historian second, I was curious whether I could gain something from a workshop with such a theme, or whether, lacking any advanced programming skills, I would remain a dumbstruck listener of overly technical presentations, too often with no clue about what was being discussed.

Luckily, the latter situation ensued only rarely, and I was pleasantly surprised by how aware both the organizers and the presenting projects were that the audience of the workshop consisted not so much of computing specialists as of scholars. Digital Humanities is a somewhat amphibian discipline, since it balances two quite distinct methodologies and approaches. Those who work in it strive for sophistication, complexity and scientific excellence in their tools as much as for ease of use and attractiveness to scholarly users. One of the big questions of the workshop was thus, not surprisingly, whether the digital tools that are supposed to make scholars’ work easier do not in fact make things more difficult simply by being unintuitive, complex, non-trivial and incompatible with other research tools. As one of the speakers, Gregor Middell from Würzburg, remarked, we should speak not of easy tools for difficult texts, but of difficult tools and how to make them easier.

It was encouraging that many of the speakers touched upon this very issue and that the buzzwords of many of the presentations were intuitiveness, compatibility, accessibility and simplicity, as well as flexibility and interoperability. In addition, I was surprised to learn about issues in Digital Humanities I had been unaware of: how to effectively crowdsource a transcription or translation of manuscript material (yes, the general public can do the job, too!), how to open the data to everyone and in this way stimulate co-operation and new avenues for research, how to combine different formats of data in order to make digital objects more real, how to make scholarly research output “snatchable” by engines out there which can link it with other data on the web and create a network of knowledge, and, most scary of all, that machines can these days read manuscripts and transcribe them into written text with great accuracy, and that it may soon be possible to google through digitized manuscripts.

Rather than jumping through all the projects presented at the workshop, I will share my impressions of the five most interesting talks I happened to listen to (mind you, I am a Latin philologist and a semi-historian):

1. Transcribe Bentham Project

Jeremy Bentham (1748-1832) was an important British philosopher and reform thinker who left behind a formidable amount of manuscript material. The bulk of over 70 000 folia of hand-written text, often in draft form, deterred scholars from editing and studying it. For this reason, the team of scholars working on The Collected Works of Jeremy Bentham decided to involve the public in transcribing the Bentham Papers and thus make it possible to edit the material within a human lifetime. It was interesting to see that a project involving non-specialist transcribers can be a success if the task is made interesting, challenging and at the same time easy enough to attract crowds. Thanks to the enthusiasm of the volunteers, 25 000 folia have already been transcribed, and the project team has developed a clever wiki-like transcription environment which can be reused for further projects of the same type (and which is going to be further simplified to make transcription even easier). So, if you want to try your hand at the Bentham Papers, you are welcome to volunteer, too!

2. Shared Canvas

Have you ever wanted to make a remark about a manuscript directly in the digital image, just as if you were a medieval scribe annotating the page? Or attach a piece of music to a neumatized hymn to let others hear it? Could manuscripts be stripped of their layers of modern additions and annotations to see how they looked when they were fresh out of the scriptorium, or had just left the hands of a medieval annotator? Will it ever be possible to switch between different digital representations of the same manuscript and layer them atop each other, an edition text on top of the manuscript image? Or to reassemble a manuscript now dispersed across a number of libraries into the form in which it might have existed centuries ago? This is now possible, or might be in the future, thanks to the Shared Canvas initiative, which aims to allow for easy manipulation of digital data of different formats and to make digital manuscripts more plastic and “touchable”. As Shared Canvas strives to bring together distinct formats of data, one of the objectives of the project was to create a data model that operates with as few rigid premises as possible, e.g. that a page must always be rectangular or even resemble a page, that it contains only a single text or any text at all, or even that it physically exists. Shared Canvas may be of great help particularly in those more difficult cases when manuscripts are not neat and “misbehave”.
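To make the idea a bit more concrete, here is a minimal sketch in Python of how such a layered model might look. This is my own illustration with invented class names and example URIs, not the actual Shared Canvas specification: a page becomes an abstract “canvas”, and images, transcriptions, audio and other annotations are merely layers attached to it, while dispersed leaves can be reunited into a virtual sequence.

```python
# A hypothetical sketch of the idea behind a Shared Canvas-style model (not the
# real specification): a page is an abstract canvas, everything else is a layer.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Annotation:
    """Links a piece of content (image, text, audio, ...) to a region of a canvas."""
    body: str                                            # URI or literal content
    body_type: str                                       # e.g. "image", "text", "audio"
    region: Optional[Tuple[int, int, int, int]] = None   # x, y, w, h; None = whole canvas


@dataclass
class Canvas:
    """An abstract page: it need not be rectangular, hold any text, or even survive physically."""
    label: str
    height: int
    width: int
    layers: List[Annotation] = field(default_factory=list)

    def annotate(self, annotation: Annotation) -> None:
        self.layers.append(annotation)


@dataclass
class Sequence:
    """An ordered reading of canvases, e.g. a dispersed manuscript reassembled virtually."""
    label: str
    canvases: List[Canvas] = field(default_factory=list)


# Layer an edition text and a melody on top of a digitized page (example URIs) ...
fol_1r = Canvas("fol. 1r", height=3000, width=2000)
fol_1r.annotate(Annotation("https://example.org/images/fol1r.jpg", "image"))
fol_1r.annotate(Annotation("Gloria in excelsis deo ...", "text"))
fol_1r.annotate(Annotation("https://example.org/audio/hymn.mp3", "audio", region=(100, 200, 1800, 400)))

# ... and reunite leaves now kept in two different libraries into one virtual codex.
fol_2r = Canvas("fol. 2r (now in another library)", height=3000, width=2000)
virtual_codex = Sequence("Reconstructed codex", [fol_1r, fol_2r])
print([c.label for c in virtual_codex.canvases])
```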

3. Digital Editions are going Cognitive

A digital environment for editions (and other objects) can be much more than just a digital substitute for paper books or a revamp of tools that already exist, claims Eugene Lyman, who worked on the Piers Plowman Electronic Archive and the Elwood Viewer. The truth is that, very often, past scholarly practices from before the digital age have a bearing on what we are able to conceive of in a digital setting. But we could do much more by stepping out of the box and thinking of digital tools not as extensions of pre-existing scholarly tools, but rather as extensions of the cognitive tools we carry in our heads! Why would we represent text on the screen in a way that makes it difficult for our eyes and brain to process, if we can actually pre-digest the text for our mind with the help of the computer? A small example: it is almost impossible for a scholar to read through a text and, based on that reading, discern particular stylistic patterns, or to leaf through a manuscript and notice orthographic details peculiar to a particular scribe. Yet we can spot both if they are represented as computer-generated graphs, simply because our brain is keen on seeing patterns, but only if it is not overloaded by processing the text. These observations may seem trivial, and their implementation simple, but they call for extending our perspective on what digital tools are, and how they influence both the questions we ask and the answers we look for.
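To give a concrete (and entirely hypothetical) flavour of what such “pre-digesting” might look like, here is a small Python sketch that charts how often a scribe writes “michi” instead of “mihi” across a text. The chosen variant, the chunk size and the toy text are my own invention; the point is only that a shift in scribal habit, invisible in linear reading, jumps out of a simple chart.

```python
# A toy illustration of letting the computer "pre-digest" a text so that a
# pattern invisible in linear reading becomes visible at a glance.
import re


def variant_profile(text: str, variant: str, standard: str, chunk_size: int = 200):
    """Return, chunk by chunk, the share of `variant` among all occurrences of either form."""
    words = re.findall(r"\w+", text.lower())
    profile = []
    for i in range(0, len(words), chunk_size):
        chunk = words[i:i + chunk_size]
        v, s = chunk.count(variant), chunk.count(standard)
        profile.append(v / (v + s) if v + s else 0.0)
    return profile


def bar_chart(profile, width: int = 40):
    """Render the profile as a crude text bar chart, one bar per chunk."""
    for i, share in enumerate(profile):
        print(f"chunk {i:3d} | {'#' * int(share * width):<{width}} {share:.0%}")


# With a real transcription loaded into `text`, a change of scribe or habit
# shows up as a sudden jump in the bars. Here, a fabricated example:
text = ("mihi placet " * 150) + ("michi placet " * 150)
bar_chart(variant_profile(text, "michi", "mihi"))
```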

4. HisDoc Project

This was perhaps the most intimidating of the projects presented, as it suggested that within perhaps five years computers will effectively be able to transcribe manuscripts and transform them into web content that can be googled by anyone. Some ten years ago this was just a dream, but the low error rate of this project shows that we are getting closer to the point where a Merovingian manuscript will be a piece of cake for a clever computer. Three universities in Germany and Switzerland teamed up to work on three aspects of the HisDoc Project: document image analysis (the computer identifies where text is located on the page and where images are, and distinguishes between different types of textual material), handwriting recognition (the computer transforms the words written by hand on the parchment into text, reading through abbreviations and seeing through corrections), and information retrieval (the computer can then search for words and phrases in such an abstract text and go back to the manuscript image to pinpoint them there). The impressive feat is that the computer-generated transcription has an error rate of around 6%, and that the query function can find a word even when the manuscript uses a different orthography or contains an error. It is worthwhile to keep an eye on this project (and other, similar ones), because we may soon benefit from computer-generated transcripts and from artificial intelligence that can learn to spot minute details that escape the human eye.
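Just to illustrate the retrieval step – this is not the HisDoc system itself, only a rough sketch of the idea using Python’s standard library – a fuzzy keyword search over a noisy, computer-generated transcription might look something like this:

```python
# A sketch of fuzzy keyword search over a noisy transcription: the query matches
# a word even when the manuscript spells it differently or the recogniser erred.
from difflib import SequenceMatcher
from typing import List, Tuple


def fuzzy_find(query: str, transcription: List[str], threshold: float = 0.75) -> List[Tuple[int, str, float]]:
    """Return (position, word, similarity) for every word close enough to the query."""
    hits = []
    for pos, word in enumerate(transcription):
        score = SequenceMatcher(None, query.lower(), word.lower()).ratio()
        if score >= threshold:
            hits.append((pos, word, round(score, 2)))
    return hits


# A made-up noisy transcription: "aecclesiam" is a spelling variant,
# "ecclesra" a recognition error; both should still be found.
transcription = "in nomine domini ad aecclesiam sancti petri ecclesra dei".split()
print(fuzzy_find("ecclesia", transcription))
# Each hit's position could then be mapped back onto the page image to highlight the word.
```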

5. Sharing Ancient Wisdom Project

The wisdom literature of Antiquity and the Middle Ages has certain particular features: wisdom sayings behave as independent micro-texts rather than as strictly fixed ones, so they tend to transform quite freely, and different collections are therefore dynamic objects that respect few boundaries. King’s College London, The Newman Institute in Uppsala and the University of Vienna have started a project that aims to track the movement of these dynamic micro-texts from collection to collection, but also from one language (Greek) to another (Arabic). They hope to reveal links between florilegia that will help us understand intellectual currents in the Middle Ages, provide insight into the formation of such collections, analyze the transformations that affected the material in the course of transmission and translation, and perhaps reconstruct texts that perished in their Greek originals but survive as wisdom items in Arabic. Moreover, one of the objectives of the project is to make the findings of the team, and the texts themselves, available to outsiders and to allow them to be connected with other pieces of information available online, e.g. sites that map the historic world, directories of persons and repertories of texts.
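Purely as an illustration of the kind of linking involved – the class names and examples below are my own, not the project’s actual data model – one might record a roaming saying together with its attestations in different collections and languages, so that its trajectory can be traced:

```python
# A hypothetical sketch of tracking a wisdom saying across collections and languages.
from dataclasses import dataclass
from typing import List


@dataclass
class Attestation:
    collection: str   # e.g. a particular florilegium
    language: str     # "Greek", "Arabic", ...
    text: str


@dataclass
class Saying:
    saying_id: str
    attestations: List[Attestation]

    def trajectory(self) -> List[str]:
        """List where and in which language the saying turns up."""
        return [f"{a.collection} ({a.language})" for a in self.attestations]


# A saying whose Greek original is lost might still be recoverable from its Arabic witnesses.
saying = Saying("saying-001", [
    Attestation("Greek gnomologium (hypothetical)", "Greek", "..."),
    Attestation("Arabic florilegium (hypothetical)", "Arabic", "..."),
])
print(saying.trajectory())
```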

Evina Steinova

I am currently a VENI postdoc at Huygens ING, an institute of the Dutch Royal Academy of Arts and Sciences in Amsterdam. My postdoctoral project deals with the diffusion of innovations in the Carolingian period and the role of intellectual networks in their successful spreading (or, on the contrary, the lack thereof). I have been active on Mittelalter since 2015, when I was still a PhD candidate working on annotation practices in the early medieval Latin West. If you are looking for someone to explain the strange symbols appearing in the margins of early medieval codices, I may be the right person to consult. I originally trained as a Latinist. My interests include Digital Humanities, medieval Jewish history and Judeo-Christian relations, the history of science, and the study of early medieval manuscripts.



OpenEdition suggests that you cite this post as follows:
Evina Steinova (6 May 2013). Workshop report: Easy Tools for Difficult Texts (18.-19.4.13, Den Haag). Mittelalter. Retrieved 11 December 2024 from https://doi.org/10.58079/rgr6

