Conference paper, Year: 2022

Fine-grained analysis of the transformer model for efficient pruning

Abstract

In automatic speech recognition, deep learning models such as transformers are increasingly used for their high performance. However, their large size makes them very difficult to deploy in real-world contexts, hence the interest in pruning them. Conventional pruning methods are suboptimal and sometimes inefficient because they operate blindly, without taking into account the nature of the layers, their number of parameters, or their distribution. In this work, we propose a fine-grained analysis of the transformer model's layers in order to determine the most efficient pruning approach. We show that some layers are more appropriate to prune than others, and we underline the importance of knowing the behavior of the layers when choosing a pruning approach.
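
To make the layer-wise idea concrete, here is a minimal sketch, assuming PyTorch and its torch.nn.utils.prune utilities, of magnitude pruning applied with different sparsity levels to different transformer sub-layers. It is not the paper's exact procedure: the module-name suffixes and sparsity values below are illustrative assumptions, not measured settings from the paper.

import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in model; the paper targets transformers for speech recognition.
model = nn.Transformer(d_model=256, nhead=4)

# Hypothetical outcome of a fine-grained analysis: feed-forward
# sub-layers (linear1, linear2) tolerate heavier pruning than the
# attention output projections (out_proj). Values are illustrative.
sparsity_by_suffix = {"linear1": 0.5, "linear2": 0.5, "out_proj": 0.2}

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        for suffix, amount in sparsity_by_suffix.items():
            if name.endswith(suffix):
                # Zero out the smallest-magnitude weights of this layer.
                prune.l1_unstructured(module, name="weight", amount=amount)
                prune.remove(module, "weight")  # bake the mask into the tensor

# Report the overall fraction of zeroed parameters.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.1%}")

The point of the sketch is that the pruning amount is chosen per layer type rather than applied uniformly, which is the kind of decision the paper's fine-grained analysis is meant to inform.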
Main file
conference_101719.pdf (1.72 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04047338, version 1 (27-03-2023)

Identifiers

Cite

Leila Ben Letaifa, Jean-Luc Rouas. Fine-grained analysis of the transformer model for efficient pruning. 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), Dec 2022, Nassau, Bahamas. pp.897-902, ⟨10.1109/ICMLA55696.2022.00149⟩. ⟨hal-04047338⟩

Collections

CNRS