Increasing Argument Annotation Reproducibility by Using Inter-annotator Agreement to Improve Guidelines
Abstract
In this abstract we present a methodology to improve argument annotation guidelines by exploiting inter-annotator agreement measures.
After a first stage of the annotation effort, we detected problematic issues through an analysis of inter-annotator agreement. Some
concepts turned out to be ill-defined; we addressed these by redefining high-level annotation goals. Other concepts are well-delimited
but complex, and for these the annotation protocol has been extended and detailed. Moreover, as can be expected, we show that the
distinctions on which human annotators agree less are also those where automatic analyzers perform worse. Thus, the reproducibility
of results of Argument Mining systems can be addressed by improving inter-annotator agreement in the training material. Following
this methodology, we are enhancing a corpus annotated with argumentation, available at https://github.com/PLN-FaMAF/ArgumentMiningECHR,
together with the guidelines and analyses of agreement. These analyses can be used to filter the performance figures of automated
systems, applying lower penalties for cases where human annotators agree less.
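As an illustration of how such agreement analyses could feed into evaluation, the sketch below computes per-category Cohen's kappa between two annotators and derives an error weight from it. The label set, toy data, and weighting scheme are illustrative assumptions, not taken from the paper or the corpus.

```python
# A minimal sketch (not from the paper) of using per-category inter-annotator
# agreement to down-weight evaluation penalties on low-agreement categories.
from sklearn.metrics import cohen_kappa_score

# Two annotators' labels for the same sequence of text spans (toy data).
annotator_a = ["premise", "conclusion", "premise", "none", "premise", "none"]
annotator_b = ["premise", "premise", "premise", "none", "conclusion", "none"]

labels = sorted(set(annotator_a) | set(annotator_b))

# Per-category agreement: collapse each category to a binary decision
# (category vs. rest) and compute Cohen's kappa on that decision.
per_label_kappa = {}
for label in labels:
    a_bin = [int(x == label) for x in annotator_a]
    b_bin = [int(x == label) for x in annotator_b]
    per_label_kappa[label] = cohen_kappa_score(a_bin, b_bin)

# Illustrative weighting: penalize a system error on a category in proportion
# to how much humans themselves agree on it (kappa clipped to [0, 1]).
def error_weight(label):
    return max(0.0, min(1.0, per_label_kappa[label]))

for label in labels:
    print(f"{label}: kappa={per_label_kappa[label]:.2f}, "
          f"error weight={error_weight(label):.2f}")
```

Under this kind of scheme, a system mistake on a category where annotators barely agree contributes little to the reported error, which is one way to make reported performance figures reflect the ceiling set by human agreement.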