Conference paper, 2024

DAMP: distribution-aware magnitude pruning for budget-sensitive graph convolutional networks

Abstract

Graph convolutional networks (GCNs) are becoming mainstream in solving many image processing tasks, including skeleton-based recognition. Their general recipe consists of learning convolutional and attention layers that maximize classification performance. With multi-head attention, GCNs are highly accurate but oversized, and their deployment on edge devices requires pruning. Among existing methods, magnitude pruning (MP) is relatively effective, but its design is clearly suboptimal, as network topology selection and weight retraining are carried out independently. In this paper, we devise a novel lightweight GCN design dubbed Distribution-Aware Magnitude Pruning (DAMP). The method is variational and proceeds by aligning the weight distribution of the learned networks with an a priori distribution. This makes it possible to implement any targeted pruning rate while maintaining high generalization of the resulting lightweight GCNs, particularly at the highest (and most interesting) pruning regimes. Extensive experiments on the challenging task of skeleton-based recognition show a substantial gain of DAMP over MP as well as related methods.
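For intuition only, the sketch below illustrates plain magnitude pruning at a fixed target rate, together with a toy distribution-alignment penalty, in PyTorch. The function names (magnitude_prune, alignment_penalty) and the specific penalty (matching sorted weights against sorted draws from a prior) are hypothetical illustrations assumed here, not the paper's variational formulation.

```python
# Minimal sketch, NOT the paper's method: magnitude pruning at a target rate,
# plus a toy penalty nudging the empirical weight distribution toward an
# a priori one. Names and the penalty form are hypothetical.
import torch

def magnitude_prune(weights: torch.Tensor, pruning_rate: float) -> torch.Tensor:
    """Zero out the fraction `pruning_rate` of weights with smallest magnitude."""
    threshold = torch.quantile(weights.abs().flatten(), pruning_rate)
    return weights * (weights.abs() > threshold).float()

def alignment_penalty(weights: torch.Tensor, prior_samples: torch.Tensor) -> torch.Tensor:
    """Toy surrogate for distribution alignment: squared difference between
    sorted weights and sorted prior samples (a 1-D optimal-transport-style
    matching). Assumes both tensors have the same number of elements."""
    w = torch.sort(weights.flatten()).values
    p = torch.sort(prior_samples.flatten()).values
    return ((w - p) ** 2).mean()

# Usage: prune 90% of a random weight matrix and score its alignment with a
# hypothetical narrow Gaussian prior (whose mass near zero favors pruning).
w = torch.randn(256, 128)
pruned = magnitude_prune(w, pruning_rate=0.9)
prior = 0.1 * torch.randn_like(w)
penalty = alignment_penalty(w, prior)
```

In training, such a penalty would be added to the classification loss so that weights concentrate near zero before thresholding; again, this is a didactic stand-in under the stated assumptions, not DAMP's actual objective.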
Main file
document.pdf (316.87 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04796183, version 1 (21-11-2024)

Identifiers

HAL Id: hal-04796183
DOI: 10.1109/ICASSP48485.2024.10448148

Cite

Hichem Sahbi. DAMP: distribution-aware magnitude pruning for budget-sensitive graph convolutional networks. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr 2024, Seoul, South Korea. pp.3070-3074, ⟨10.1109/ICASSP48485.2024.10448148⟩. ⟨hal-04796183⟩