Journal article in ACM Transactions on Information Systems, 2022

The Power of Selecting Key Blocks with Local Pre-ranking for Long Document Information Retrieval

Minghan Li
Diana Nicoleta Popa
Johan Chagnon
Yagmur Gizem Cinar
Eric Gaussier

Abstract

Transformer-based models, in particular pre-trained language models such as BERT, have proven highly effective on a wide range of natural language processing and information retrieval tasks. Owing to the quadratic complexity of the self-attention mechanism, however, such models have difficulty processing long documents. Recent approaches to this issue include truncating long documents, at the cost of losing potentially relevant information; segmenting them into several passages, which may both miss information and incur high computational cost when the number of passages is large; or sparsifying the self-attention mechanism, as in sparse-attention models, again at the risk of missing information. We follow a slightly different approach, in which key blocks of a long document are first selected through local query-block pre-ranking, and a few of these blocks are then aggregated into a short document that can be processed by a model such as BERT. Experiments conducted on standard information retrieval datasets demonstrate the effectiveness of the proposed approach.
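For illustration, a select-then-aggregate pipeline of the kind described above could look as follows. This is a minimal sketch under stated assumptions, not the paper's implementation: the fixed-size block splitter, the lexical-overlap scorer standing in for the local query-block pre-ranking, and all names below are placeholders chosen for the example.

```python
# Hypothetical sketch of a select-then-aggregate pipeline for long-document
# retrieval. All components here are illustrative placeholders; the paper's
# actual block splitter and local pre-ranker may differ.

from typing import List


def split_into_blocks(document: str, block_size: int = 64) -> List[str]:
    """Split a long document into fixed-size word blocks (placeholder policy)."""
    words = document.split()
    return [" ".join(words[i:i + block_size])
            for i in range(0, len(words), block_size)]


def pre_rank_score(query: str, block: str) -> float:
    """Cheap local query-block score; a lexical-overlap stand-in for the
    paper's pre-ranking."""
    q_terms = set(query.lower().split())
    b_terms = block.lower().split()
    if not b_terms:
        return 0.0
    return sum(t in q_terms for t in b_terms) / len(b_terms)


def select_key_blocks(query: str, document: str, k: int = 4) -> str:
    """Keep the top-k blocks by pre-ranking score, restore document order,
    and concatenate them into a short pseudo-document for a BERT-style model."""
    blocks = split_into_blocks(document)
    top = sorted(range(len(blocks)),
                 key=lambda i: pre_rank_score(query, blocks[i]),
                 reverse=True)[:k]
    return " ".join(blocks[i] for i in sorted(top))


# The resulting short pseudo-document fits a standard cross-encoder input,
# e.g. tokenizer(query, short_doc, truncation=True, max_length=512).
```

The design point this sketch highlights is that the expensive full-attention model only ever sees the short aggregated pseudo-document, so its cost no longer grows with the original document length.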
Main file: select-block-TOIS.pdf (2.7 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03831739, version 1 (27-10-2022)

Identifiers

HAL Id: hal-03831739
DOI: 10.1145/3568394

Cite

Minghan Li, Diana Nicoleta Popa, Johan Chagnon, Yagmur Gizem Cinar, Eric Gaussier. The Power of Selecting Key Blocks with Local Pre-ranking for Long Document Information Retrieval. ACM Transactions on Information Systems, 2022, ⟨10.1145/3568394⟩. ⟨hal-03831739⟩

