Journal article in Journal of Imaging, 2022

Rethinking Weight Decay for Efficient Neural Network Pruning

Abstract

Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which carries out efficient, continuous pruning throughout training. Our approach, theoretically grounded on Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks, and pruning structures. We show that SWD compares favorably to state-of-the-art approaches, in terms of performance-to-parameters ratio, on the CIFAR-10, Cora, and ImageNet ILSVRC2012 datasets.
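The abstract does not spell out the mechanism, so the following is only a minimal, hedged PyTorch sketch of the general idea of using weight decay selectively to induce sparsity during training: an extra L2 penalty is applied to the smallest-magnitude weights (those that would be removed at a target sparsity), with the penalty coefficient ramped up over training. The magnitude-based selection, the exponential schedule, and all names (e.g., selective_weight_decay_penalty, a_min, a_max) are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def selective_weight_decay_penalty(model: nn.Module, sparsity: float) -> torch.Tensor:
    """Extra L2 penalty on the smallest-magnitude weights, i.e. those that
    would be pruned at the target sparsity level (illustrative assumption)."""
    params = [p for p in model.parameters() if p.dim() > 1]
    with torch.no_grad():
        all_w = torch.cat([p.abs().view(-1) for p in params])
        k = int(sparsity * all_w.numel())
        if k == 0:
            return torch.zeros((), device=all_w.device)
        threshold = all_w.kthvalue(k).values  # magnitude cutoff for the targeted weights
    penalty = torch.zeros((), device=threshold.device)
    for p in params:
        mask = p.abs() <= threshold           # weights currently targeted for pruning
        penalty = penalty + (p[mask] ** 2).sum()
    return penalty

# Toy training loop: the selective penalty coefficient `a` is increased over
# training so the targeted weights are driven toward zero gradually
# (continuous pruning) instead of being cut abruptly at the end.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

a_min, a_max, sparsity, epochs = 1e-4, 1.0, 0.8, 20
for epoch in range(epochs):
    # Exponential ramp of the penalty coefficient (assumed schedule).
    a = a_min * (a_max / a_min) ** (epoch / max(epochs - 1, 1))
    x, y = torch.randn(128, 32), torch.randint(0, 10, (128,))  # placeholder data
    optimizer.zero_grad()
    loss = criterion(model(x), y) + a * selective_weight_decay_penalty(model, sparsity)
    loss.backward()
    optimizer.step()
```

After training, the weights below the magnitude threshold have been pushed close to zero by the ramped penalty, so removing them is expected to cost little accuracy; this is only a sketch of the selective-decay idea under the assumptions stated above.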
Main file
jimaging-08-00064.pdf (417.39 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03675138 , version 1 (24-05-2022)

Licence

Attribution

Identifiers

Cite

Hugo Tessier, Vincent Gripon, Mathieu Leonardon, Matthieu Arzel, Thomas Hannagan, et al. Rethinking Weight Decay for Efficient Neural Network Pruning. Journal of Imaging, 2022, 8 (3), pp.64. ⟨10.3390/jimaging8030064⟩. ⟨hal-03675138⟩