Conference paper, Year: 2023

Learning to Rank Context for Named Entity Recognition Using a Synthetic Dataset

Arthur Amalvy
  • Role: Author
  • PersonId : 753566
  • IdHAL : aamalvy
Vincent Labatut
Richard Dufour

Abstract

While recent pre-trained transformer-based models can perform named entity recognition (NER) with great accuracy, their limited range remains an issue when they are applied to long documents such as whole novels. One way to alleviate this issue is to retrieve relevant context at the document level. Unfortunately, the lack of supervision for such a task means one has to settle for unsupervised approaches. Instead, we propose to generate a synthetic context retrieval training dataset using Alpaca, an instruction-tuned large language model (LLM). Using this dataset, we train a neural context retriever based on a BERT model that is able to find relevant context for NER. We show that our method outperforms several retrieval baselines for the NER task on an English literary dataset composed of the first chapter of 40 books.
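The abstract describes a BERT-based neural context retriever that ranks document-level context for a sentence before NER is applied. Below is a minimal, hypothetical sketch of such a retriever framed as a cross-encoder that scores (target sentence, candidate context) pairs; the model name, the untrained regression head, and the top-k value are illustrative assumptions and do not reproduce the authors' released implementation.

# Hypothetical sketch: score candidate context sentences for a target
# sentence with a BERT cross-encoder, then keep the highest-ranked ones.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-cased"  # assumption: a BERT encoder fine-tuned for relevance scoring

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def rank_context(target: str, candidates: list[str], top_k: int = 4) -> list[str]:
    """Return the top_k candidate sentences ranked by predicted relevance to `target`."""
    inputs = tokenizer(
        [target] * len(candidates),  # pair the target with every candidate
        candidates,
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)  # one relevance score per pair
    order = torch.argsort(scores, descending=True).tolist()
    return [candidates[i] for i in order[:top_k]]

# Usage: retrieve document-level context for a sentence before running NER on it.
target = "He handed the letter to Elizabeth."
candidates = [
    "Elizabeth Bennet was the second of five sisters.",
    "The weather had been dreadful all week.",
    "Mr. Darcy had written the letter the night before.",
]
print(rank_context(target, candidates, top_k=2))

In a pipeline like the one sketched in the abstract, the retrieved sentences would typically be concatenated with the target sentence before it is passed to the NER model.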
Main file
emnlp2023.pdf (381.15 KB)
slides.pdf (209.23 KB)
Origin: Files produced by the author(s)
License: CC BY-NC - Attribution - NonCommercial

Dates and versions

hal-04237987 , version 1 (11-10-2023)
hal-04237987 , version 2 (02-11-2023)

Identifiers

HAL Id: hal-04237987

Cite

Arthur Amalvy, Vincent Labatut, Richard Dufour. Learning to Rank Context for Named Entity Recognition Using a Synthetic Dataset. Conference on Empirical Methods in Natural Language Processing (EMNLP), ACL, Dec 2023, Singapore, Singapore. pp.10372-10382. ⟨hal-04237987v2⟩
91 Views
28 Downloads
