Long-Range Transformer Architectures for Document Understanding - Archive ouverte HAL
Conference paper, Year: 2023

Long-Range Transformer Architectures for Document Understanding

Abstract

Since their release, Transformers have revolutionized many fields, from Natural Language Understanding to Computer Vision. Document Understanding (DU) was not left behind, with the first Transformer-based models for DU dating from late 2019. However, the computational complexity of the self-attention operation limits their capabilities to small sequences. In this paper we explore multiple strategies for applying Transformer-based models to long multi-page documents. We introduce two new multi-modal (text + layout) long-range models for DU, based on efficient implementations of Transformers for long sequences. Long-range models can process whole documents at once effectively and are less impaired by the document's length. We compare them to LayoutLM, a classical Transformer adapted for DU and pre-trained on millions of documents. We further propose a 2D relative attention bias to guide self-attention towards relevant tokens without harming model efficiency. We observe improvements on Information Retrieval for multi-page business documents, at a small performance cost on shorter sequences. Relative 2D attention proved to be effective on dense text for both normal and long-range models.
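To make the 2D relative attention bias concrete, below is a minimal PyTorch sketch, not the authors' implementation: it assumes signed horizontal and vertical offsets between token bounding-box centers are bucketed and mapped to learned per-head biases that are added to the attention logits. The class name, bucketing scheme, and parameters (num_buckets, max_distance) are illustrative assumptions; the paper's exact formulation is not given on this page.

import torch
import torch.nn as nn

class Relative2DAttentionBias(nn.Module):
    """Hypothetical 2D relative attention bias from token box centers."""

    def __init__(self, num_heads: int, num_buckets: int = 32, max_distance: int = 1000):
        super().__init__()
        self.num_buckets = num_buckets
        self.max_distance = max_distance
        # Separate learned bias tables for x and y offsets, one value per head.
        self.x_bias = nn.Embedding(2 * num_buckets + 1, num_heads)
        self.y_bias = nn.Embedding(2 * num_buckets + 1, num_heads)

    def _bucket(self, dist: torch.Tensor) -> torch.Tensor:
        # Clip signed distances into a fixed number of buckets (linear here;
        # log-spaced bucketing as in T5 would be another common choice).
        scaled = dist.float() / self.max_distance * self.num_buckets
        return scaled.round().clamp(-self.num_buckets, self.num_buckets).long() + self.num_buckets

    def forward(self, centers: torch.Tensor) -> torch.Tensor:
        # centers: (seq_len, 2) token box centers in (x, y) page coordinates.
        dx = centers[None, :, 0] - centers[:, None, 0]   # (seq, seq) signed x offsets
        dy = centers[None, :, 1] - centers[:, None, 1]   # (seq, seq) signed y offsets
        bias = self.x_bias(self._bucket(dx)) + self.y_bias(self._bucket(dy))
        # (seq, seq, heads) -> (heads, seq, seq), ready to add to attention logits.
        return bias.permute(2, 0, 1)

In use, the returned (heads, seq, seq) tensor would simply be added to the pre-softmax attention scores, so the extra cost is a lookup and an addition rather than any change to the attention pattern itself.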
Main file: ICDAR_2023 (7).pdf (2.25 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04191797 , version 1 (30-08-2023)

Identifiers

Cite

Thibault Douzon, Stefan Duffner, Christophe Garcia, Jérémy Espinas. Long-Range Transformer Architectures for Document Understanding. ICDAR 2023: Document Analysis and Recognition – ICDAR 2023 Workshops, Aug 2023, San José, United States. pp.47-64, ⟨10.1007/978-3-031-41501-2_4⟩. ⟨hal-04191797⟩