
Commissioned Review

Commissioning Editor: Rebecca Welzenbach, University of Michigan.
Received: 2012-06-15
Revised: 2012-11-05
Published: 2013-02-11

Keywords: digital codicology; digital palaeography.



§ 1    Readers of DM will not need reminding that, in recent years, we have seen impressive digitisation enterprises focusing on manuscripts, featuring mass-digitisation approaches (e.g. the manuscripts of the Diocesan and Cathedral Library in Cologne, <http://www.ceec.uni-koeln.de/>), digital editions (e.g. Codex Sinaiticus, <http://codexsinaiticus.org/en/>), and the application of cutting-edge technologies to recover unreadable text and investigate material aspects (e.g. the Archimedes Palimpsest Project, <http://archimedespalimpsest.org/>). Such enterprises have generated a great deal of public and scholarly interest, and it should certainly not be seen as mere coincidence that research into the history, materials, and texts of Ancient and Mediaeval manuscripts is currently experiencing a revival as well, given the increased accessibility of manuscripts via their digital surrogates and the new possibilities opened up by these digitisation enterprises. Recent manuscript studies, particularly when informed by digital approaches, seem to make a good case for the view widely held among Digital Humanists that digitisation and digital research not only add to traditional approaches but also initiate a qualitative transformation of an entire field of research. Hence the first English sentence on the publisher’s website introducing the volume reviewed here: "Digital technology changes the way scholars work with manuscripts" (<http://www.i-d-e.de/schriften/3-kpdz2>).

§ 2     Codicology and Palaeography in the Digital Age 2 (henceforth CPDA2) is the third volume of the relatively new but noteworthy series published by the Institut für Dokumentologie und Editorik, and a follow-up to the eponymous first collection of essays, CPDA 1. Unlike its predecessor, published in 2009, however, CPDA2 is not the product of an international symposium organised to document the current state of computer-aided manuscript research, but of invited contributions and responses to an open call for papers, which were peer-reviewed and aimed at answering questions on digitisation and cataloguing, on character recognition, and on the analysis of script raised in response to the first volume. An additional aim was to widen the focus to include computer-aided manuscript research in musicology and the history of art, as well as methodologies applied in the computational and natural sciences (ix).

§ 3    The result is a volume still held together by its focus on digital approaches to manuscript research but covering a wider spectrum of issues and case studies, and perhaps therefore even more international and representative than its predecessor. Articles are in English (13), German (5), and French (4), and each is preceded by abstracts in English and German (and in French, when the article is in that language). The XVI + 444 pages of CPDA2 are divided into roughly even sections of 4-5 essays each, labelled Digital reproduction, Digital catalogue and semantics, Manuscripts and the sciences, Digital palaeography, and Transcription and text encoding, and are preceded by a preface in German and English as well as an introduction by Franz Fischer and Patrick Sahle (Into the wide – into the deep: manuscript research in the digital age).

§ 4    The first part, Digital reproduction, begins with Pádraig Ó Macháin’s presentation of the Irish Script on Screen (ISOS) project of the School of Celtic Studies at the Dublin Institute for Advanced Studies (Irish script on screen: the growth and development of a manuscript digitisation project, 1-19), which has taken it upon itself to digitise the entire Gaelic manuscript tradition. The reader will not only learn about the history and structure of the project and its impact on research but, more interestingly, also about how quickly such endeavours age and come to need further development and maintenance, despite being technologically cutting-edge at the time. Armand Tif’s contribution (Kunsthistorische Online-Kurzinventare illuminierter Codices in österreichischen Klosterbibliotheken, 21-32) presents two smaller projects that involved the digitisation of the illuminated manuscripts and incunabula in the monastery collections of Herzogenburg in Lower Austria and Stams in Tirol. The focus here is more on the art-historical value of said books and their accessibility via relatively low-cost solutions. Alison Stones and Ken Sochats outline a still ongoing project that brings together, in text and in picture, the extant manuscripts featuring the Arthurian Lancelot-Grail romance (Towards a comparative approach to manuscripts on the web: the case of the Lancelot-Grail romance, 33-42). One interesting aspect of the project is the geographical mapping of the spread of interest in the romance across Europe in 50-year increments, based on the place of production of the featured manuscripts. Melissa Terras offers a suitable conclusion to this section with a more general, yet necessary, reflection on digitisation as the representation of essential knowledge resources (Artefacts and errors: acknowledging issues of representation in the digital imaging of ancient texts, 43-61).
She argues that proper technical standards and documentation for digital surrogates, along with the requisite technical literacy on the part of the digital researcher, are not enough on their own. What is also needed is a more profound theoretical understanding of the complex relationship between the objects represented and their digital surrogates, which are representations rather than replicas.

§ 5    The Digital catalogue and semantics section of the volume opens with two case studies, one by Silke Schöttle and Ulrike Mehringer on the cataloguing and online accessibility of German manuscripts at Tübingen University Library (Handschriften, Nachlässe, Inkunabeln & Co.: Die Erschließung der deutschen Handschriften und die Bereitstellung von Sonderbeständen in Online-Katalogen an der Universitätsbibliothek Tübingen mit TUSTEP, 65-73), the other by Marilena Maniaci and Paolo Eleuteri on the cataloguing and selective digitisation of all the Greek codices held in Italian libraries (Das MaGI-Projekt: Elektronische Katalogisierung der griechischen Handschriften Italiens, 75-83). The juxtaposition of a local and a national project highlights both the common issues that need to be dealt with in such projects (choice of software, cataloguing environment, standards) and the different strategic considerations involved in making those same decisions. The remainder of the section is devoted to the more general complex of topics relating to data models, the Semantic Web, and Linked Data. Ezio Ornato’s substantial contribution (La numérisation du patrimoine livresque médiéval: avancée décisive ou miroir aux alouettes?, 85-115) tries to balance the now-attainable utopia of a digitised mediaeval book heritage against the challenges met on the way towards its realisation: conflicts of interest (particularly between libraries and scholars) bearing on major decisions (the selection of the objects to be digitised, resolution, accessibility, etc.) as well as the need for a well-designed underlying data model and an ontology. Ornato’s proposal is to develop a catalogue grand-ouvert, in which digital manuscript surrogates and catalogue descriptions are juxtaposed and the material, textual, and historical aspects of the manuscripts are dealt with by both librarians and researchers, thus leading to a separation of concerns and distinguishing the more objective from the more subjective aspects of catalogue descriptions.
Toby Burrows proposes to deal with the dispersed and heterogeneous web services that make manuscripts digitally available via an international collaborative infrastructure based on Semantic Web and Linked Data approaches (Applying semantic web technologies to Medieval manuscript research, 117-131). Instead of imposing a single metadata standard, the approach would build links between data and thus facilitate large-scale research questions across the virtual global manuscript collection. Robert Kummer suggests a similar approach for codicological research (Semantic technologies for manuscript descriptions – concepts and visions, 133-154): unstructured or less structured data should be made accessible by means of information-extraction techniques, with the CIDOC Conceptual Reference Model (CRM) chosen as the underlying Semantic Web model.

§ 6    The Manuscripts and the sciences section features four essays showcasing the application of methods developed in the natural sciences to questions raised by manuscript research: difficult-to-match fragments, text lost to reuse, the animal sources of parchment, and the identification of watermarks. The mostly Jewish manuscripts from the Cairo Genizah pose a challenge well familiar to manuscript experts: a host of fragments dispersed over many libraries, in need of identification and matching. Researchers associated with the Friedberg Genizah Project (Lior Wolf, Nachum Dershowitz, Liza Potikha, Tanya German, Roni Shweka, Yaacov Choueka) used image-processing and artificial-intelligence approaches to match extant fragments automatically by their visual features, with an encouraging outcome: their automated results largely coincide with the traditional palaeographic taxonomies (Automatic palaeographic exploration of Genizah manuscripts, 157-179). Daniel Deckers and Leif Glaser present further work in the already blooming field of palimpsest research (Zum Einsatz von Synchrotronstrahlung bei der Wiedergewinnung gelöschter Texte in Palimpsesten mittels Röntgenfluoreszenz, 181-190). Their approach uses high-flux storage-ring X-ray radiation to map trace metals in rewritten parchment sheets and thus facilitate the reading of erased text. Their original setup required large, non-mobile equipment and a slow workflow, which severely limited the wide adoption of the approach; at the end of their essay, they describe how they plan to address these limitations. Timothy Stinson shows how the analysis of DNA extracted from parchment (he does not further deliberate on the extraction method itself – is it non-destructive? – but refers to the relevant literature) can offer "unparalleled glimpses into the medieval past" (206) and be used, along with other historical and archaeological data, to localise herds, better understand the parchment trade, learn more about the construction of codices, and even resolve disputes about specific manuscripts (focussing on the first two in Counting sheep: potential applications of DNA analysis to the study of Medieval parchment production, 191-207). Peter Meinlschmidt, Carmen Kämmerer, and Volker Märgner introduce in their essay (Thermographie – ein neuartiges Verfahren zur exakten Abnahme, Identifizierung und digitalen Archivierung von Wasserzeichen in mittelalterlichen und frühneuzeitlichen Papierhandschriften, -zeichnungen und -drucken, 209-226) a newly developed technique that aids the representation, identification, and archiving of watermarks in paper manuscripts. Thermographic cameras capture radiation in the middle (3-7 μm) and longer (7-14 μm) infrared spectrum and can therefore register minimal differences in temperature. Coupled with image-processing and pattern-recognition algorithms, this approach could become a useful support for the study of paper-based manuscripts.

§ 7    The next section, Digital palaeography, returns the collection of essays to familiar ground for readers of the preceding volume, albeit with innovative insights. Peter Stokes reflects on his own experience as a teacher of palaeography and codicology courses (Teaching manuscripts in the digital age, 229-245) and asks what role technology should play in such contexts, both as content and as a learning vehicle (for example, Virtual Learning Environments). Stokes, a leading digital palaeographer himself, is refreshingly honest in his assessment of the limitations of the digital age and comes to the conclusion that digital technology should only ever supplement rather than replace teaching with a live human instructor (241), though supplement palaeography it must. Dominique Stutzmann (Paléographie statistique pour décrire, identifier, dater… Normaliser pour coopérer et aller plus loin?, 247-277) proposes that digital transcriptions can and should include palaeographic aspects of the manuscript text up to the graphetic layer (following Robinson and Solopova’s taxonomy) and that these layers should be analysed statistically. After describing such an approach to the manuscripts of the scriptorium of the Cistercian Abbey of Fontenay, he rightly reminds the reader of the lack of, and need for, standard practices for describing, structuring, and organising palaeographical data. In the next contribution, Stephen Quirke looks at the thousands of nineteenth- and eighteenth-century B.C. (!) fragments from Lahun in Egypt, the largest early group of writing on papyrus, and argues that, with the application of pattern-recognition image-processing algorithms, traditional approaches to investigating palaeographic similarities and differences across the collection can now be accelerated to a point of qualitative change (279).
In Recognizing degraded handwritten characters (295-306), the Viennese interdisciplinary group of Markus Diem, Robert Sablatnig, Melanie Gau, and Heinz Miklas describe how they needed to develop a new algorithm (comprising character classification and character localisation), given that state-of-the-art optical character recognition methods produced only poor results for their critical edition of three degraded eleventh-century Slavonic (Glagolitic) manuscripts from the library of St Catherine’s Monastery at Sinai. The section is concluded by Julia Craig-McFeely’s examination (Finding what you need, and knowing what you can find: digital tools for palaeographers in musicology and beyond, 307-339) of three projects: DIAMM (Digital Image Archive of Medieval Music), which has gathered high-resolution images of manuscripts for musicologists and developed digital restoration methods using mainstream commercial software and multi-spectral imaging to make degraded and erased text readable; CMME (Computerized Mensural Music Editing), which offers scholars a virtual editing environment for producing diplomatic transcriptions and fluid editions; and Gamera, a useful toolkit for building optical character recognition systems suited to a variety of scholarly interests.

§ 8    The final section of this volume, Transcription and text encoding, begins with a self-critical reflection by the Zürich scholars Isabelle Schürch and Martin Rüesch on the e-learning software Ad fontes, which allows history students to acquire the expertise necessary for working with archival materials via interactive transcription exercises. Facing the new challenges of Web 2.0 technologies, the decision was made to develop Ad fontes further through "soft innovations" (featured on p. 357 as "sanfte Innovationen"), adding, for example, a wiki (Ad fontes – mit E-Learning zu ersten Editionserfahrungen, 343-359). This is followed by Carole Dornier and Pierre-Yves Buard’s elaboration on the Montedite project, which features TEI-XML-encoded transcriptions linked page by page to digital images of Montesquieu’s notebooks (L’édition électronique de cahiers de travail: l’exemple des Mes Pensées de Montesquieu, 361-374). Samantha Saïdi, Jean-François Bert, and Philippe Artières describe a project on Michel Foucault’s archive – mainly cards with reading, bibliographic, and subject notes – which was digitised, described, annotated, and indexed using digital data-processing methods and tools (Archives d’un lecteur philosophe. Le traitement numérique des notes de lecture de Michel Foucault, 375-395). The final essay, by Elena Pierazzo and Peter Stokes, addresses and theorises the juxtaposition of texts as semantic and physical entities and introduces a document-centric approach, developed by the Genetic Edition Working Group within the Manuscript Special Interest Group of the TEI, that takes account of the codicological and layout features of manuscripts (Putting the text back into context: a codicological approach to manuscript transcription, 397-430).

§ 9    The fact that a second volume of readable essays could be put together within less than two years of the preceding volume clearly shows that digital manuscript studies must be recognised as a vibrant and fruitful interdisciplinary field of research. The contributions to CPDA2 span a wide variety of approaches, case studies, and professional and scholarly points of view, yet still hold together, much to the credit of the editors, both within the individual sections and, as a whole, at the intersection of handwritten cultural heritage and digital technology. While few readers will want to read the essays cover to cover, the value of CPDA2 lies in the useful and sometimes fascinating snapshots it provides of the fast-developing cutting edge of an emerging field. Readers might want to consult a particular section to catch up on contemporary possibilities, learn from a number of (more or less) successful projects, and make up their own minds as to whether they might want to put together similar projects and contribute to the emerging corpus of digitised manuscripts. As such, any library with a manuscript collection and/or catering to scholars with an interest in manuscripts should hold a hard copy of both CPDA volumes. Equally, any scholar interested in manuscripts and/or the various digital approaches covered here will want to be able to consult the CPDA essays, at least in electronic form. The Institut für Dokumentologie und Editorik is to be commended for a publication strategy that makes this possible by offering the volumes both as relatively affordable print-on-demand books and as PDF files that can be downloaded from the University of Cologne’s institutional repository free of charge (see link above).

§ 10    This assessment, of course, raises the question: how quickly, and how well, does a snapshot of the cutting edge age? Many of the essays focus on particular projects and try to tease out some of the lessons learned without undermining the importance of each project. I still find that there is not a single project here from which one could not learn something useful, be it because of the interesting subjects, the challenges posed by the objects, the innovative digital approaches, or simply because they stand as an example of how not to do it. But I also strongly believe that the field needs to transcend the particularity of individual projects if something critically worthwhile and more durable is to be discerned. Some of the essays introduce the latest technological developments soon to be made available to the wider librarian and scholarly world. I can never quite shake the contradictory feelings of being inspired by the many possibilities and somewhat overwhelmed, out of my depth, when trying to assess the underlying science, to say nothing of the suspicion that a better (cheaper, more practical, less invasive, etc.) approach might soon come around the corner. Considering that there is now a critical mass of impressive work undertaken in this area, I wonder whether digital manuscript research will come of age when the specialist community begins to transcend particular projects and technologies and to produce both critical overviews of and critical reflections on the newly emerging topics and issues, as well as works that help researchers cross the interdisciplinary bridge into a relatively foreign discipline. CPDA2 already offers glimpses of such maturity.