Conference paper, Year: 2022

The Glass Ceiling of Automatic Evaluation in Natural Language Generation

Abstract

Automatic evaluation metrics capable of replacing human judgments are critical to enabling the fast development of new methods, and numerous research efforts have focused on crafting such metrics. In this work, we take a step back and analyze recent progress by comparing the existing body of automatic metrics and human metrics side by side. Since metrics are used according to how they rank systems, we compare metrics in the space of system rankings. Our extensive statistical analysis reveals surprising findings: automatic metrics, old and new, are much more similar to each other than to humans, and they are not complementary, ranking systems in largely the same way. Strikingly, human metrics predict each other much better than the combination of all automatic metrics predicts a human metric. This is surprising because human metrics are often designed to be independent, capturing different aspects of quality, e.g., content fidelity or readability. We provide a discussion of these findings and recommendations for future work in the field of evaluation.
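To make the central methodological idea concrete, the sketch below illustrates what comparing metrics "in the space of system rankings" means: each metric assigns one score per NLG system and thereby induces a ranking over systems, and two metrics are compared by the rank correlation of the rankings they induce. This is a minimal illustration, not the authors' released code; the metric names, scores, and the choice of Kendall's tau here are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical per-system scores: each metric assigns one score per NLG system.
# All numbers and metric names are illustrative, not taken from the paper.
scores = {
    "BLEU":      np.array([0.31, 0.28, 0.35, 0.30, 0.27]),
    "BERTScore": np.array([0.88, 0.85, 0.90, 0.87, 0.84]),
    "human":     np.array([3.9, 4.1, 4.4, 3.5, 3.2]),
}

def ranking_similarity(a, b):
    """Kendall's tau between the system rankings induced by two metrics.

    Rank correlation depends only on the order of systems, so metrics on
    different scales (BLEU vs. a 1-5 human rating) remain comparable.
    """
    tau, _ = kendalltau(a, b)
    return tau

# Compare each automatic metric against the human metric in ranking space.
for name, s in scores.items():
    if name != "human":
        print(f"{name} vs human: {ranking_similarity(s, scores['human']):.3f}")
```

Working in ranking space rather than raw-score space is what allows heterogeneous metrics, automatic and human alike, to be placed in a single comparison.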
Main file
2208.14585v2.pdf (1.86 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04575126, version 1 (14-05-2024)


Cite

Pierre Colombo, Maxime Peyrard, Nathan Noiry, Robert West, Pablo Piantanida. The Glass Ceiling of Automatic Evaluation in Natural Language Generation. IJCNLP-AACL 2023: The 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, Nov 2023, Nusa Dua, Bali, Indonesia. pp.178-183, ⟨10.18653/v1/2023.findings-ijcnlp.16⟩. ⟨hal-04575126⟩
47 Views
18 Downloads
