Saucissonnage of Long Sequences into a Multi-encoder for Neural Text Summarization with Transformers
Conference paper, 2021


Abstract

Transformer deep models have attracted a lot of attention in neural text summarization. The problem with existing Transformer-based systems is that they truncate documents considerably before feeding them to the network. In this paper, we are particularly interested in long biomedical text summarization, yet the input sequences current models can accept are far shorter than the average length of biomedical articles. To handle this problem, we propose two improvements to the original Transformer model that allow faster training on long sequences without penalizing summary quality. First, we split the input across four encoders so that attention is computed over smaller segments of the input. Second, we use end-chunk task training at the decoder level for progressive fast decoding. We evaluate our proposed architecture on PubMed, a well-known biomedical dataset. The comparison with competitive baselines shows that our approach: (1) allows reading long input sequences, (2) reduces the training time considerably, and (3) slightly improves the quality of generated summaries.
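
The abstract only describes the architecture at a high level; the paper itself is the authoritative source. As a rough illustration of the input-splitting idea, the sketch below shows one way a long document could be divided across several independent Transformer encoders in PyTorch. The chunk length, the embedding and concatenation-based fusion, and all module names are assumptions for demonstration, not the authors' implementation.

```python
# Hypothetical sketch: split a long input into chunks, encode each chunk with
# its own Transformer encoder, and concatenate the results as decoder memory.
# Dimensions, chunk length, and fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class MultiEncoder(nn.Module):
    def __init__(self, vocab_size=30000, d_model=512, n_heads=8,
                 n_layers=4, num_chunks=4, chunk_len=512):
        super().__init__()
        self.chunk_len = chunk_len
        self.embed = nn.Embedding(vocab_size, d_model)
        # One independent Transformer encoder per input chunk.
        self.encoders = nn.ModuleList([
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
                num_layers=n_layers)
            for _ in range(num_chunks)
        ])

    def forward(self, token_ids):
        # token_ids: (batch, num_chunks * chunk_len); pad/truncate upstream.
        chunks = token_ids.split(self.chunk_len, dim=1)
        # Each encoder attends only within its own chunk, so attention cost
        # grows with chunk_len**2 rather than (num_chunks * chunk_len)**2.
        encoded = [enc(self.embed(chunk))
                   for enc, chunk in zip(self.encoders, chunks)]
        # Concatenated chunk representations serve as memory for the decoder.
        return torch.cat(encoded, dim=1)

model = MultiEncoder()
dummy = torch.randint(0, 30000, (2, 4 * 512))  # two documents of 2048 tokens
memory = model(dummy)
print(memory.shape)                            # torch.Size([2, 2048, 512])
```

Splitting this way keeps each self-attention computation local to a chunk, which is what makes longer inputs tractable; how the decoder consumes the fused memory (and the end-chunk training schedule) is detailed in the paper.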
Main file
EGC_2021_paper.pdf (320.23 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04090684, version 1 (05-05-2023)


Identifiers

  • HAL Id: hal-04090684, version 1

Cite

Jessica López Espejel, Gaël de Chalendar, Jorge Garcia Flores, Ivan Vladimir Meza Ruiz, Thierry Charnois. Saucissonnage of Long Sequences into a Multi-encoder for Neural Text Summarization with Transformers. Extraction et Gestion des Connaissances (EGC), Jan 2021, Montpellier, France. ⟨hal-04090684⟩
