One-cycle pruning: pruning convNets under a tight training budget

Preprint / Working Paper. Year: 2021

Abstract

Introducing sparsity in a neural network is an efficient way to reduce its complexity while keeping its performance almost intact. Most of the time, sparsity is introduced using a three-stage pipeline: 1) train the model to convergence, 2) prune the model according to some criterion, 3) fine-tune the pruned model to recover performance. The last two steps are often performed iteratively, which leads to reasonable results but also to a time-consuming and complex process. In our work, we propose to remove the first step of the pipeline and to combine the other two in a single pruning-training cycle, allowing the model to jointly learn the optimal weights while being pruned. We do this by introducing a novel pruning schedule, named One-Cycle Pruning, which prunes the model from the very beginning of training until its end. Adopting such a schedule not only leads to better-performing pruned models but also drastically reduces the training budget required to prune a model. Experiments are conducted on a variety of architectures (VGG-16 and ResNet-18) and datasets (CIFAR-10, CIFAR-100 and Caltech-101), and for relatively high sparsity values (80%, 90% and 95% of weights removed). Our results show that One-Cycle Pruning consistently outperforms commonly used pruning schedules such as One-Shot Pruning, Iterative Pruning and Automated Gradual Pruning under a fixed training budget.
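To illustrate the idea, the sketch below shows, in PyTorch-style Python, a training loop in which unstructured magnitude pruning is applied at every step while the target sparsity ramps up from the first step to the last. The polynomial ramp and the helper names (sparsity_at, apply_magnitude_pruning, train_one_cycle) are illustrative assumptions, not the exact schedule or implementation proposed in the paper.

```python
import torch
import torch.nn as nn

def sparsity_at(step, total_steps, final_sparsity):
    """Target sparsity at a given step: a smooth ramp from 0 to final_sparsity.
    The cubic ramp is an illustrative choice, not the paper's exact schedule."""
    progress = step / max(total_steps - 1, 1)
    return final_sparsity * (1 - (1 - progress) ** 3)

def apply_magnitude_pruning(model, sparsity):
    """Zero out the smallest-magnitude weights of every Conv/Linear layer (unstructured)."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            weight = module.weight.data
            k = int(sparsity * weight.numel())
            if k > 0:
                threshold = weight.abs().flatten().kthvalue(k).values
                weight.mul_((weight.abs() > threshold).float())

def train_one_cycle(model, loader, total_epochs, final_sparsity=0.9, lr=0.1):
    """Single pruning-training cycle: pruning starts at the first step and ends at the last."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    total_steps = total_epochs * len(loader)
    step = 0
    for epoch in range(total_epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
            # Prune jointly with training: sparsity grows every step until the very end.
            apply_magnitude_pruning(model, sparsity_at(step, total_steps, final_sparsity))
            step += 1
    return model
```

In this sketch the model is never trained to convergence before pruning begins; the sparsity constraint is tightened continuously over the single training run, which is the behavior the abstract contrasts with One-Shot and Iterative Pruning.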
No file deposited

Dates and versions

hal-04389225, version 1 (11-01-2024)

License

Attribution

Identifiers

Cite

Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia. One-cycle pruning: pruning convNets under a tight training budget. 2024. ⟨hal-04389225⟩