Conference paper, 2022

Questioning the Validity of Summarization Datasets and Improving Their Factual Consistency

Abstract

The topic of summarization evaluation has recently attracted a surge of attention due to the rapid development of abstractive summarization systems. However, the formulation of the task is rather ambiguous: neither the linguistics nor the natural language processing community has succeeded in giving a mutually agreed-upon definition. Due to this lack of a well-defined formulation, a large number of popular abstractive summarization datasets are constructed in a manner that neither guarantees validity nor meets one of the most essential criteria of summarization: factual consistency. In this paper, we address this issue by combining state-of-the-art factual consistency models to identify the problematic instances present in popular summarization datasets. We release SummFC, a filtered summarization dataset with improved factual consistency, and demonstrate that models trained on this dataset achieve improved performance across nearly all quality aspects. We argue that our dataset should become a valid benchmark for developing and evaluating summarization systems.
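To make the filtering idea concrete, here is a minimal Python sketch; it is not the authors' released pipeline. Each (document, summary) pair is scored by several factual-consistency models, the scores are combined (here by a simple average, an assumption made for illustration), and pairs below a threshold are dropped. The `substring_scorer` and `overlap_scorer` functions are toy stand-ins for real scorers such as NLI- or QA-based consistency models.

```python
# Minimal sketch of the dataset-filtering idea described in the abstract:
# score each (document, summary) pair with several factual-consistency
# models, combine the scores, and drop low-scoring pairs. The scorers
# below are toy stand-ins, not real consistency models.

from typing import Callable, List, Tuple

Example = Tuple[str, str]             # (source_document, reference_summary)
Scorer = Callable[[str, str], float]  # returns a consistency score in [0, 1]

def combined_score(doc: str, summary: str, scorers: List[Scorer]) -> float:
    """Average the scores of several consistency models (one simple way to combine them)."""
    return sum(scorer(doc, summary) for scorer in scorers) / len(scorers)

def filter_dataset(dataset: List[Example], scorers: List[Scorer],
                   threshold: float = 0.3) -> List[Example]:
    """Keep only pairs whose combined factual-consistency score clears the threshold."""
    return [(d, s) for d, s in dataset if combined_score(d, s, scorers) >= threshold]

# Toy scorers for illustration; real ones would wrap trained NLI- or QA-based models.
def substring_scorer(doc: str, summary: str) -> float:
    return 1.0 if summary in doc else 0.0

def overlap_scorer(doc: str, summary: str) -> float:
    summary_tokens = summary.lower().split()
    overlap = set(summary_tokens) & set(doc.lower().split())
    return len(overlap) / max(len(summary_tokens), 1)

if __name__ == "__main__":
    data = [
        ("The cat sat on the mat.", "A cat sat on a mat."),      # mostly consistent
        ("The cat sat on the mat.", "The dog chased the car."),  # inconsistent
    ]
    # Keeps the first pair and drops the second under the toy scorers.
    print(filter_dataset(data, [substring_scorer, overlap_scorer]))
```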

Dates and versions

hal-04494204, version 1 (07-03-2024)

Identifiers

Cite

Yanzhu Guo, Chloé Clavel, Moussa Kamal Eddine, Michalis Vazirgiannis. Questioning the Validity of Summarization Datasets and Improving Their Factual Consistency. 2022 Conference on Empirical Methods in Natural Language Processing, Dec 2022, Abu Dhabi, United Arab Emirates. pp. 5716-5727, ⟨10.18653/v1/2022.emnlp-main.386⟩. ⟨hal-04494204⟩