Conference paper, Year: 2012

Manual Corpus Annotation: Giving Meaning to the Evaluation Metrics

Abstract

Computing inter-annotator agreement measures on a manually annotated corpus is necessary to evaluate the reliability of its annotation. However, the interpretation of the obtained results is recognized as highly arbitrary. In this article we describe a method and a tool we developed that "shuffle" a reference annotation according to different error paradigms, thereby creating artificial annotations with controlled errors. Agreement measures are computed on these corpora, and the results obtained are used to model the behavior of these measures and to understand their actual meaning.
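The sketch below illustrates the general idea in Python, under simplifying assumptions not taken from the paper: categorical labels on a fixed set of units, a single error paradigm (random label substitution at a controlled rate), and Cohen's kappa as the agreement measure. The function names and category set are hypothetical, and this is not the authors' tool, only a minimal calibration loop of the same kind: known error rates in, observed metric values out.

```python
import random
from collections import Counter

def degrade(reference, error_rate, categories, rng):
    """Return a copy of `reference` in which a controlled fraction of labels
    is replaced by a different category (a simple label-substitution error
    paradigm on fixed units). Hypothetical helper, not the paper's tool."""
    degraded = []
    for label in reference:
        if rng.random() < error_rate:
            degraded.append(rng.choice([c for c in categories if c != label]))
        else:
            degraded.append(label)
    return degraded

def cohen_kappa(a, b):
    """Cohen's kappa between two annotations of the same units."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    count_a, count_b = Counter(a), Counter(b)
    expected = sum((count_a[c] / n) * (count_b[c] / n) for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Calibration: compute kappa on artificial annotations with known error rates,
# so the metric's values can be read against a controlled amount of error.
rng = random.Random(0)
categories = ["PERSON", "LOCATION", "ORGANIZATION", "NONE"]
reference = [rng.choice(categories) for _ in range(10_000)]
for rate in (0.0, 0.05, 0.1, 0.2, 0.4):
    kappa = cohen_kappa(reference, degrade(reference, rate, categories, rng))
    print(f"error rate {rate:.2f} -> kappa {kappa:.3f}")
```

In this toy setting the mapping from error rate to kappa is nearly linear; the paper's point is that with richer error paradigms and other measures, the mapping need not be, which is why such a calibration is informative.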
Main file
coling2012_EvalManualAnnotation.pdf (143.86 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00769639, version 1 (02-01-2013)


Identifiers

  • HAL Id: hal-00769639, version 1

Cite

Yann Mathet, Antoine Widlöcher, Karen Fort, Claire François, Olivier Galibert, et al. Manual Corpus Annotation: Giving Meaning to the Evaluation Metrics. COLING 2012: 24th International Conference on Computational Linguistics, Dec 2012, Mumbai, India. pp. 809-818. ⟨hal-00769639⟩
836 Views
714 Downloads
