DAMP: distribution-aware magnitude pruning for budget-sensitive graph convolutional networks
Abstract
Graph convolutional networks (GCNs) are nowadays becoming mainstream in solving many image processing tasks, including skeleton-based recognition. Their general recipe consists of learning convolutional and attention layers that maximize classification performance. With multi-head attention, GCNs are highly accurate but oversized, and their deployment on edge devices requires pruning. Among existing methods, magnitude pruning (MP) is relatively effective, but its design is clearly suboptimal as network topology selection and weight retraining are achieved independently. In this paper, we devise a novel lightweight GCN design dubbed Distribution-Aware Magnitude Pruning (DAMP). The latter is variational and proceeds by aligning the weight distribution of the learned networks with an a priori distribution. This allows implementing any targeted pruning rate while maintaining high generalization of the designed lightweight GCNs, particularly at the highest (most interesting) pruning regimes. Extensive experiments conducted on the challenging task of skeleton-based recognition show a substantial gain of our DAMP compared to MP as well as related methods.
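To make the idea concrete, below is a minimal sketch (not the authors' actual DAMP procedure) of the two ingredients the abstract mentions: a magnitude-based mask that enforces a targeted pruning rate, and a hypothetical regularizer that aligns the empirical weight distribution with an a priori distribution sharply peaked at zero. The function names, the Laplace prior, and the negative log-likelihood surrogate are all illustrative assumptions.

```python
import torch

def magnitude_prune_mask(weights: torch.Tensor, target_rate: float) -> torch.Tensor:
    """Keep the largest-magnitude weights; zero out a `target_rate` fraction."""
    k = int(target_rate * weights.numel())           # number of weights to remove
    if k == 0:
        return torch.ones_like(weights)
    threshold = weights.abs().flatten().kthvalue(k).values
    return (weights.abs() > threshold).float()       # 1 = keep, 0 = prune

def distribution_alignment_loss(weights: torch.Tensor,
                                prior: torch.distributions.Distribution) -> torch.Tensor:
    """Hypothetical surrogate: penalize mismatch between the weights and a prior
    whose mass near zero matches the desired sparsity budget."""
    return -prior.log_prob(weights.flatten()).mean()

# Usage: add the alignment term to the task loss during retraining,
# then apply the magnitude mask at the targeted pruning rate.
w = torch.randn(64, 64, requires_grad=True)
prior = torch.distributions.Laplace(loc=0.0, scale=0.05)   # peaked at zero (assumed prior)
loss = distribution_alignment_loss(w, prior)
loss.backward()
mask = magnitude_prune_mask(w.detach(), target_rate=0.9)   # 90% pruning budget
print(mask.mean())   # fraction of kept weights, roughly 0.1
```

In this toy setup, the alignment term pushes weights toward the prior's high-density region around zero, so that the subsequent magnitude mask removes them with little loss of accuracy; the actual DAMP formulation is variational and integrated with training rather than applied post hoc.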