Manual Corpus Annotation: Giving Meaning to the Evaluation Metrics
Abstract
Computing inter-annotator agreement measures on a manually annotated corpus is necessary to evaluate the reliability of its annotation. However, interpreting the results obtained is widely recognized as being highly arbitrary. In this article we describe a method, and the tool we developed to implement it, that "shuffles" a reference annotation according to different error paradigms, thereby creating artificial annotations with controlled errors. Agreement measures are then computed on these artificial corpora, and the results are used to model the behavior of these measures and to understand their actual meaning.
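To make the idea concrete, here is a minimal sketch of the general principle, not of the authors' actual tool: a reference annotation is degraded with a controlled proportion of errors (a simple label-confusion paradigm is assumed here), and an agreement measure (Cohen's kappa) is computed between the reference and each degraded version. The label set, function names, and error paradigm are all illustrative assumptions.

```python
import random
from collections import Counter


def degrade(reference, labels, error_rate, seed=0):
    """Return a copy of `reference` in which a proportion `error_rate`
    of the labels is replaced by a different label chosen uniformly at
    random (a simple 'label confusion' error paradigm)."""
    rng = random.Random(seed)
    noisy = list(reference)
    for i in range(len(noisy)):
        if rng.random() < error_rate:
            noisy[i] = rng.choice([l for l in labels if l != noisy[i]])
    return noisy


def cohen_kappa(a, b):
    """Cohen's kappa between two annotations of the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)


# Toy reference annotation over a hypothetical label set.
labels = ["PER", "LOC", "ORG", "O"]
reference = [random.Random(42).choice(labels) for _ in range(10000)]

for error_rate in (0.05, 0.10, 0.20, 0.40):
    artificial = degrade(reference, labels, error_rate)
    print(f"error rate {error_rate:.2f} -> kappa "
          f"{cohen_kappa(reference, artificial):.3f}")
```

Plotting the resulting kappa values against the injected error rates is one way to relate a given agreement score to a concrete, interpretable amount of annotation error.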