Conference paper, Year: 2020

Contextualized Embeddings in Named-Entity Recognition: An Empirical Study on Generalization

Abstract

Contextualized embeddings use unsupervised language model pretraining to compute word representations that depend on their context. This is intuitively useful for generalization, especially in Named-Entity Recognition, where it is crucial to detect mentions never seen during training. However, standard English benchmarks overestimate the importance of lexical over contextual features because of an unrealistic lexical overlap between train and test mentions. In this paper, we perform an empirical analysis of the generalization capabilities of state-of-the-art contextualized embeddings by separating mentions by novelty and with out-of-domain evaluation. We show that they are particularly beneficial for detecting unseen mentions, especially out-of-domain. For models trained on CoNLL03, language model contextualization leads to a maximal relative micro-F1 score increase of +1.2% in-domain, against +13% out-of-domain on the WNUT dataset.
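
The sketch below illustrates, under stated assumptions, the kind of evaluation protocol the abstract describes: test mentions are partitioned into "seen" vs "unseen" by exact surface-form overlap with the training set, and micro-F1 is computed on each partition. This is not the authors' released code; the mention tuple layout (sentence_id, start, end, type, surface) and all names are illustrative.

# A minimal sketch, assuming mentions are tuples (sentence_id, start, end, type, surface).

def micro_f1(gold_keys, pred_keys):
    """Micro-averaged F1 over sets of (sentence_id, start, end, type) mention keys."""
    tp = len(gold_keys & pred_keys)
    precision = tp / len(pred_keys) if pred_keys else 0.0
    recall = tp / len(gold_keys) if gold_keys else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def split_by_novelty(mentions, train_surfaces):
    """Partition mentions by whether their surface form was observed in training."""
    seen = {m for m in mentions if m[4].lower() in train_surfaces}
    return seen, mentions - seen

# Toy example: one training mention, two gold test mentions, two predictions.
train_mentions = {(0, 0, 2, "ORG", "European Commission")}
test_gold = {(10, 0, 2, "ORG", "European Commission"), (11, 3, 5, "PER", "Ada Lovelace")}
test_pred = {(10, 0, 2, "ORG", "European Commission"), (11, 3, 5, "ORG", "Ada Lovelace")}

train_surfaces = {m[4].lower() for m in train_mentions}
gold_seen, gold_unseen = split_by_novelty(test_gold, train_surfaces)
pred_seen, pred_unseen = split_by_novelty(test_pred, train_surfaces)
for label, g, p in (("seen", gold_seen, pred_seen), ("unseen", gold_unseen, pred_unseen)):
    print(label, round(micro_f1({m[:4] for m in g}, {m[:4] for m in p}), 3))
# Prints "seen 1.0" and "unseen 0.0": the unseen prediction has the wrong entity type.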

Dates and versions

hal-02503463, version 1 (10-03-2020)


Cite

Bruno Taille, Vincent Guigue, Patrick Gallinari. Contextualized Embeddings in Named-Entity Recognition: An Empirical Study on Generalization. ECIR 2020 - 42nd European Conference on Information Retrieval, Apr 2020, Lisbon, Portugal. ⟨hal-02503463⟩