Conference paper, Year: 2021

Distilling Knowledge from Reader to Retriever for Question Answering

Abstract

The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
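To make the idea in the abstract concrete, the sketch below shows one way a reader's aggregated cross-attention scores could be turned into synthetic relevance labels for a bi-encoder retriever: the retriever's query-passage similarity distribution is pushed, via a KL-divergence objective, toward the distribution induced by the reader's attention scores. This is a minimal illustration under assumed shapes and names (distillation_loss, the dot-product scoring, the temperature); it is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def distillation_loss(query_emb, passage_embs, reader_attention_scores, temperature=1.0):
    """KL divergence between the retriever's similarity distribution and the
    distribution induced by the reader's aggregated attention scores.

    query_emb:               (d,)   dense query representation        (assumed shape)
    passage_embs:            (n, d) dense representations of passages (assumed shape)
    reader_attention_scores: (n,)   one aggregated attention score per passage
    """
    # Retriever relevance scores: dot product between query and each passage.
    retriever_scores = passage_embs @ query_emb                        # (n,)
    retriever_logprobs = F.log_softmax(retriever_scores / temperature, dim=0)

    # Synthetic target distribution derived from the reader's attention scores.
    target_probs = F.softmax(reader_attention_scores / temperature, dim=0)

    # KL(target || retriever); gradients flow only into the retriever embeddings.
    return F.kl_div(retriever_logprobs, target_probs, reduction="sum")


if __name__ == "__main__":
    d, n = 16, 4
    query_emb = torch.randn(d)
    passage_embs = torch.randn(n, d, requires_grad=True)
    reader_scores = torch.rand(n)  # stand-in for aggregated cross-attention scores
    loss = distillation_loss(query_emb, passage_embs, reader_scores)
    loss.backward()
    print(float(loss))
```

In this hypothetical setup the reader's scores are treated as fixed targets (no gradient), so no annotated query-document pairs are needed: the retriever simply learns to rank passages the way the reader attends to them.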
Main file

Distilling_Reader_Retriever.pdf (222.95 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03463398, version 1 (02-12-2021)

Identifiers

  • HAL Id: hal-03463398, version 1

Cite

Gautier Izacard, Edouard Grave. Distilling Knowledge from Reader to Retriever for Question Answering. ICLR 2021 - 9th International Conference on Learning Representations, May 2021, Vienna, Austria. ⟨hal-03463398⟩
60 views
227 downloads
