Conference paper, Year: 2021

Separating Retention from Extraction in the Evaluation of End-to-end Relation Extraction

Abstract

State-of-the-art NLP models can adopt shallow heuristics that limit their generalization capability (McCoy et al., 2019). Such heuristics include lexical overlap with the training set in Named-Entity Recognition (Taillé et al., 2020) and Event or Type heuristics in Relation Extraction (Rosenman et al., 2020). In the more realistic end-to-end RE setting, we can expect yet another heuristic: the mere retention of training relation triples. In this paper, we propose several experiments confirming that retention of known facts is a key factor in performance on standard benchmarks. Furthermore, one experiment suggests that a pipeline model able to use intermediate type representations is less prone to over-rely on retention.
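For illustration only, the sketch below shows one way a "retention" split of this kind could be computed: test relation triples that already occur verbatim in the training set are separated from genuinely unseen ones, so that performance can be reported on each group. The function name and the exact-match criterion are assumptions made here, not the authors' evaluation protocol.

```python
from typing import Iterable, Set, Tuple

# A relation triple: (head entity, relation type, tail entity)
Triple = Tuple[str, str, str]

def partition_by_retention(
    train_triples: Iterable[Triple],
    test_triples: Iterable[Triple],
) -> Tuple[Set[Triple], Set[Triple]]:
    """Split test triples into those already seen in training
    (retention candidates) and unseen ones (extraction candidates)."""
    seen = set(train_triples)
    test = set(test_triples)
    retained = {t for t in test if t in seen}
    unseen = test - retained
    return retained, unseen

if __name__ == "__main__":
    # Hypothetical toy data, for illustration only.
    train = [("Barack Obama", "born_in", "Honolulu")]
    test = [
        ("Barack Obama", "born_in", "Honolulu"),  # exact triple seen in training
        ("Marie Curie", "born_in", "Warsaw"),     # never seen during training
    ]
    retained, unseen = partition_by_retention(train, test)
    print("retained:", retained)
    print("unseen:  ", unseen)
```

Scoring a model separately on the two subsets would indicate how much of its benchmark performance could be explained by memorized training facts rather than extraction from the input text.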

Dates and versions

hal-03480371, version 1 (14-12-2021)

Identifiers

Cite

Bruno Taillé, Vincent Guigue, Geoffrey Scoutheeten, Patrick Gallinari. Separating Retention from Extraction in the Evaluation of End-to-end Relation Extraction. EMNLP 2021, Nov 2021, Punta Cana (online), Dominican Republic. pp. 10438-10449. ⟨hal-03480371⟩

