A Methodology for the Comparison of Human Judgments With Metrics for Coreference Resolution
Abstract
We propose a method for investigating the interpretability of metrics used for the coreference resolution task through comparisons with human judgments. We provide a corpus with annotations of different error types and human evaluations of their gravity. Our preliminary analysis shows that, compared to humans, metrics considerably overlook several error types and underestimate errors in general. This study is conducted on French texts, but the methodology should be language-independent.
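As a rough illustration of the kind of comparison the abstract describes, the sketch below correlates per-error metric score penalties with human gravity ratings and summarizes the gap per error type. This is not the authors' protocol: the error types, field names, and numeric values are all hypothetical, and the aggregation is only one plausible way to operationalize the comparison.

```python
# Minimal sketch: compare automatic metric penalties with human gravity ratings.
# All records below are invented for illustration, not data from the paper's corpus.
from collections import defaultdict
from scipy.stats import spearmanr

# Hypothetical per-error records: the drop in metric score an error causes,
# and the average human gravity rating it received.
errors = [
    {"type": "missing_mention",  "metric_penalty": 0.02, "human_gravity": 3.8},
    {"type": "missing_mention",  "metric_penalty": 0.03, "human_gravity": 3.5},
    {"type": "wrong_antecedent", "metric_penalty": 0.05, "human_gravity": 4.2},
    {"type": "wrong_antecedent", "metric_penalty": 0.04, "human_gravity": 4.0},
    {"type": "extra_mention",    "metric_penalty": 0.01, "human_gravity": 2.1},
    {"type": "extra_mention",    "metric_penalty": 0.02, "human_gravity": 2.4},
]

# Overall rank agreement between the metric and the human judges.
penalties = [e["metric_penalty"] for e in errors]
gravities = [e["human_gravity"] for e in errors]
rho, p = spearmanr(penalties, gravities)
print(f"Overall Spearman rho = {rho:.2f} (p = {p:.2f})")

# Per-type summary: an error type rated grave by humans but barely penalized
# by the metric is a candidate for being "overlooked".
by_type = defaultdict(list)
for e in errors:
    by_type[e["type"]].append((e["metric_penalty"], e["human_gravity"]))
for err_type, pairs in by_type.items():
    mean_pen = sum(pen for pen, _ in pairs) / len(pairs)
    mean_grav = sum(grav for _, grav in pairs) / len(pairs)
    print(f"{err_type}: mean metric penalty = {mean_pen:.3f}, mean human gravity = {mean_grav:.1f}")
```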