Conference paper. Year: 2021

Empirical perspectives on the reliability and accuracy of collaborative pragmatic annotation

Sophie Raineri
Jukka Tyrkkö
  • Role: Author

Abstract

Corpora can be annotated with a wide variety of different data, ranging from shallow morphosyntactic tagging to deep syntactic parsing, and from synonyms and semantic categories to named entities and reference-chain identification. The general usefulness of linguistic annotation is rarely a topic of much controversy, but the methods used and their reliability continue to raise questions. While some types of analysis, such as part-of-speech tagging, can be performed algorithmically with increasing accuracy, others continue to require the efforts of human annotators (see Hovy & Lavid 2010). The latter types of annotation task raise particular concerns: although erroneous annotations do occur with computational methods, the errors are usually systematic and thus easy to account for and correct (see Archer 2012). By contrast, human annotators tend to work less consistently, and their performance is believed to depend heavily on how they were instructed and how well they understood the instructions, as well as on random contextual and situational variables, which may lead to significant unreliability even at the within-annotator level (see, e.g., Larsson et al. 2020). In the corpus-linguistic setting, an added challenge comes from the size of modern corpora, which typically makes it impossible for a single human annotator to work through the entire corpus and therefore requires the involvement of multiple annotators.

Of all the different types of linguistic annotation, pragmatic annotation may be the most prone to annotator errors and inconsistencies (see Archer et al. 2008). This is particularly true of pragmatic features such as functional units within spoken or written samples, and especially when the identification of such units relies on the human competence of understanding language in culture-dependent and/or intertextual contexts, as with humour or different types of storytelling (see, e.g., Alsop 2016 and Alsop et al. 2013). At the same time, annotations of this type are potentially very useful, as they allow researchers to focus on, or to disregard, specific sections of a corpus without first having to analyse it manually (see, e.g., Maynard & Leichter 2007).

What, then, can we realistically expect from collaborative annotation projects in terms of reliability and accuracy? How much training is needed, and does training substantially improve the results? We tested the reliability and accuracy of collaborative pragmatic annotation with a total of 18 graduate students from Linnaeus University and 35 from Paris Nanterre University. Using a controlled selection of 12 subsets of extracts from 91 speeches in the Diachronic Corpus of Political Speeches, the students were asked to annotate three types of pragmatic segment, namely OPENINGS, NARRATIVES, and HUMOUR. In the first phase of the study, 43 students annotated segments with minimal guidance, while in the second phase a new group of 10 students received extensive instruction and engaged in collaborative discussion prior to the annotation task. Each subset of data was annotated by a total of 8 students: 5 in phase one and 3 in phase two. The research design allowed us to analyse the effects of guidance on the consistency and reliability of the annotation, both across individual annotators and across the different types of annotation task (see Banerjee et al. 1999, Potter & Levine-Donnerstein 1999).
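With several annotators per subset, agreement of this kind is commonly summarised with a chance-corrected coefficient such as Fleiss' kappa (see Banerjee et al. 1999 for a review of agreement measures). The sketch below is purely illustrative: the abstract does not state which coefficient was used, and the category labels and counts are hypothetical, not data from the study.

    # Minimal sketch (Python) of Fleiss' kappa for a fixed number of annotators
    # per segment, as in the multi-annotator design described above.
    # All labels and counts below are hypothetical.

    def fleiss_kappa(counts):
        """counts: one row per annotated segment; each row gives how many
        annotators assigned each category (all rows sum to the rater count)."""
        n_items = len(counts)
        n_raters = sum(counts[0])
        n_cats = len(counts[0])

        # Observed agreement per segment, averaged over segments
        p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
               for row in counts]
        p_bar = sum(p_i) / n_items

        # Expected agreement from the marginal category proportions
        totals = [sum(row[j] for row in counts) for j in range(n_cats)]
        p_j = [t / (n_items * n_raters) for t in totals]
        p_e = sum(p * p for p in p_j)

        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical example: 4 segments, 5 annotators, categories
    # [OPENING, NARRATIVE, HUMOUR, NONE]
    example = [
        [5, 0, 0, 0],
        [2, 3, 0, 0],
        [0, 4, 0, 1],
        [0, 0, 3, 2],
    ]
    print(round(fleiss_kappa(example), 3))  # about 0.437

A chance-corrected coefficient of this kind is generally preferred over raw percentage agreement, because the agreement expected by chance depends on how the category labels are distributed across the data.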
Our paper will discuss the research design and the two annotator training models, present and compare the results of the two phases of annotation, discuss relevant statistical aspects of inter-annotator reliability assessment, and end with our recommendations concerning collaborative annotation.

References

Alsop, Siân. 2016. The ‘humour’ element in engineering lectures across cultures: an approach to pragmatic annotation. In Maria Jose Lopez-Couso, Belen Mendez-Naya, Paloma Nunez-Pertejo & Ignacio M. Palacios-Martinez (eds.) Corpus Linguistics on the Move: Exploring and Understanding English through Corpora. London: Brill. 337–361.
Alsop, Siân, Emma Moreton & Hilary Nesi. 2013. The uses of storytelling in university engineering lectures. ESP Across Cultures 10. 7–19.
Archer, Dawn, Jonathan Culpeper & Matthew Davies. 2008. Pragmatic annotation. In Merja Kytö & Anke Lüdeling (eds.) Corpus Linguistics: An International Handbook. Mouton de Gruyter. 613–642.
Archer, Dawn. 2012. Corpus annotation: A welcome addition or an interpretation too far? In Jukka Tyrkkö, Matti Kilpiö, Terttu Nevalainen & Matti Rissanen (eds.) Outposts of Historical Corpus Linguistics: From the Helsinki Corpus to a Proliferation of Resources (Studies in Variation, Contacts and Change in English 10). Helsinki: VARIENG. Available online.
Banerjee, Mousumi, Michelle Capozzoli, Laura McSweeney & Debajyoti Sinha. 1999. Beyond kappa: A review of interrater agreement measures. The Canadian Journal of Statistics 27(1). 3–23.
Diachronic Corpus of Political Speeches (DCPS). In progress, expected 2022. Corpus compiled by Jukka Tyrkkö, Sophie Raineri & Jenni Riihimäki at Linnaeus University, Paris Nanterre University, and Tampere University. Freely available under Creative Commons license BY-NC-ND.
Hovy, Eduard & Julia Lavid. 2010. Towards a ‘science’ of corpus annotation: A new methodological challenge for corpus linguistics. International Journal of Translation 22(1). 13–36.
Larsson, Tove, Magali Paquot & Luke Plonsky. 2020. Inter-rater reliability in learner corpus research. International Journal of Learner Corpus Research 6(2). 237–251.
Maynard, Carson & Sheryl Leichter. 2007. Pragmatic annotation of an academic spoken corpus for pedagogical purposes. In Eileen Fitzpatrick (ed.) Corpus Linguistics Beyond the Word: Corpus Research from Phrase to Discourse. Amsterdam & New York: Rodopi. 107–116.
Potter, W. James & Deborah Levine-Donnerstein. 1999. Rethinking validity and reliability in content analysis. Journal of Applied Communication Research 27(3). 258–284.

Domains

Linguistics
No file deposited

Dates and versions

hal-04361357, version 1 (22-12-2023)

Identifiers

  • HAL Id: hal-04361357, version 1

Cite

Sophie Raineri, Jukka Tyrkkö. Empirical perspectives on the reliability and accuracy of collaborative pragmatic annotation. 42nd ICAME conference, Chairs of English Linguistics at TU Dortmund University, Aug 2021, Dortmund / Virtual, Germany. ⟨hal-04361357⟩
