Conference paper (Archive ouverte HAL) · Year: 2023

Automatic Analysis of Substantiation in Scientific Peer Reviews

Abstract

With the increasing number of problematic peer reviews in top AI conferences, the community is urgently in need of automatic quality control measures. In this paper, we restrict our attention to substantiation — one popular quality aspect indicating whether the claims in a review are sufficiently supported by evidence — and provide a solution that automates this evaluation process. To achieve this goal, we first formulate the problem as claim-evidence pair extraction in scientific peer reviews, and collect SubstanReview, the first annotated dataset for this task. SubstanReview consists of 550 reviews from NLP conferences annotated by domain experts. On the basis of this dataset, we train an argument mining system to automatically analyze the level of substantiation in peer reviews. We also perform data analysis on the SubstanReview dataset to obtain meaningful insights into peer reviewing quality at NLP conferences over recent years. The dataset is available at https://github.com/YanzhuGuo/SubstanReview.
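For illustration, the claim-evidence pair extraction task described in the abstract can be framed as span labeling over the review text. The sketch below is a minimal, hypothetical example using a generic transformer token-classification model; the label set, model choice, and sample review are assumptions for illustration only and do not reflect the authors' actual system or the SubstanReview annotation scheme.

```python
# Minimal sketch: treating claim/evidence span extraction in a peer review as
# BIO-style token classification. The tag set and base model are illustrative
# assumptions, not the configuration used in the SubstanReview paper.
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

LABELS = ["O", "B-CLAIM", "I-CLAIM", "B-EVIDENCE", "I-EVIDENCE"]  # assumed tag set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The classification head is randomly initialized here; in practice it would be
# fine-tuned on annotated reviews before the predictions are meaningful.
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

review = ("The experiments are insufficient. Only one dataset is used, "
          "so the results may not generalize.")
inputs = tokenizer(review, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, lab_id in zip(tokens, pred_ids):
    print(f"{tok}\t{LABELS[lab_id]}")
```

Under such a formulation, a claim span and the evidence span supporting it would still need to be linked into pairs in a second step, for instance by pairing each predicted claim with the nearest following evidence span.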
Main file: 2311.11967v1.pdf (445.85 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04593403, version 1 (01-07-2024)

Identifiers

Cite

Yanzhu Guo, Guokan Shang, Virgile Rennard, Michalis Vazirgiannis, Chloé Clavel. Automatic Analysis of Substantiation in Scientific Peer Reviews. Findings of the Association for Computational Linguistics: EMNLP 2023, Dec 2023, Singapore. pp. 10198-10216, ⟨10.18653/v1/2023.findings-emnlp.684⟩. ⟨hal-04593403⟩

