Journal article in IEEE Transactions on Artificial Intelligence, 2024

Explainable attention pruning: a meta-learning-based approach

Abstract

Pruning, as a technique for reducing the complexity and size of Transformer-based models, has gained significant attention in recent years. While various models have been successfully pruned, pruning BERT poses unique challenges due to its fine-grained structure and overparameterization. However, by carefully accounting for these factors, it is possible to prune BERT without significantly degrading its pre-trained loss. In this paper, we propose a meta-learning-based pruning approach that can adaptively identify and eliminate insignificant attention weights. The performance of the proposed model is compared with several baseline models as well as with the default fine-tuned BERT model. The baseline pruning strategies employ low-level pruning techniques, targeting the removal of only 20% of the connections. The experimental results show that the proposed model outperforms the baseline models, achieving lower inference latency, higher MCC (Matthews correlation coefficient), and lower loss. However, there is no significan...
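As a concrete illustration of the low-level baseline described in the abstract, the sketch below applies unstructured magnitude pruning that removes 20% of the attention-projection weights of a BERT model. It is a minimal sketch assuming PyTorch's torch.nn.utils.prune utilities and the Hugging Face transformers BertModel; it shows only the class of baseline the abstract mentions, not the meta-learning pruning criterion proposed in the paper, which is not reproduced here.

```python
# Illustrative baseline only (assumed setup, not the paper's method):
# unstructured L1-magnitude pruning of 20% of the connections in the
# self-attention projection layers of a BERT model.
import torch.nn.utils.prune as prune
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Collect the query/key/value/output projections of every self-attention block.
attention_modules = []
for layer in model.encoder.layer:
    att = layer.attention
    attention_modules += [att.self.query, att.self.key, att.self.value, att.output.dense]

# Zero out the 20% of weights with the smallest magnitude in each projection.
for module in attention_modules:
    prune.l1_unstructured(module, name="weight", amount=0.2)
    prune.remove(module, "weight")  # make the pruning permanent
```

In contrast to this fixed magnitude criterion, the approach proposed in the paper adaptively learns which attention weights are insignificant before eliminating them.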
Main file
2023_IEEE_TAI___BERT_Pruning__V5___Accepted_.pdf (1.77 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04447491, version 1 (11-03-2024)

Identifiers

HAL Id: hal-04447491
DOI: 10.1109/TAI.2024.3363686

Cite

Praboda Rajapaksha, Noel Crespi. Explainable attention pruning: a meta-learning-based approach. IEEE Transactions on Artificial Intelligence, 2024, pp. 1-12. ⟨10.1109/TAI.2024.3363686⟩. ⟨hal-04447491⟩.