Conference paper, Year: 2020

Large-Scale Evaluation of Keyphrase Extraction Models

Abstract

Keyphrase extraction models are usually evaluated under different, not directly comparable, experimental setups. As a result, it remains unclear how well proposed models actually perform, and how they compare to each other. In this work, we address this issue by presenting a systematic large-scale analysis of state-of-the-art keyphrase extraction models involving multiple benchmark datasets from various sources and domains. Our main results reveal that state-of-the-art models are in fact still challenged by simple baselines on some datasets. We also present new insights about the impact of using author- or reader-assigned keyphrases as a proxy for gold standard, and give recommendations for strong baselines and reliable benchmark datasets.
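As a point of reference, keyphrase extraction models are commonly scored by matching their top-k predicted keyphrases against the gold standard (author- or reader-assigned keyphrases) after stemming, and reporting precision, recall and F1 at k. The sketch below illustrates this common protocol only; it is not the paper's evaluation code, and the helper names and toy data are illustrative assumptions.

```python
# Minimal sketch of F1@k evaluation for keyphrase extraction.
# Stemming-based matching and the toy data are illustrative assumptions,
# not taken from the paper.
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")

def normalize(phrase):
    """Lowercase and stem each token so surface variants match."""
    return " ".join(stemmer.stem(tok) for tok in phrase.lower().split())

def f1_at_k(predicted, gold, k=10):
    """Precision, recall and F1 of the top-k predicted keyphrases."""
    pred = [normalize(p) for p in predicted[:k]]
    gold_set = {normalize(g) for g in gold}
    matches = sum(1 for p in pred if p in gold_set)
    precision = matches / len(pred) if pred else 0.0
    recall = matches / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: hypothetical model output vs. author-assigned keyphrases.
predicted = ["keyphrase extraction", "neural networks", "benchmark datasets"]
gold = ["keyphrase extraction", "benchmark dataset", "large-scale evaluation"]
print(f1_at_k(predicted, gold, k=10))
```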
Main file: large_scale_exp(1).pdf (1.35 MB). Origin: files produced by the author(s).

Dates and versions

hal-02878953, version 1 (23-06-2020)

Identifiers

Cite

Ygor Gallina, Florian Boudin, Béatrice Daille. Large-Scale Evaluation of Keyphrase Extraction Models. ACM/IEEE Joint Conference on Digital Libraries (JCDL), Aug 2020, Wuhan, China. ⟨10.1145/1122445.1122456⟩. ⟨hal-02878953⟩
102 views
662 downloads
