Conference paper, Year: 2022

One-cycle pruning: pruning convnets with tight training budget

Abstract

Introducing sparsity in a convnet has proven to be an efficient way to reduce its complexity while keeping its performance almost intact. Most of the time, sparsity is introduced through a three-stage pipeline: 1) training the model to convergence, 2) pruning the model, 3) fine-tuning the pruned model to recover performance. The last two steps are often performed iteratively, leading to reasonable results but also to a time-consuming process. In our work, we propose to remove the first step of the pipeline and to combine the other two in a single training-pruning cycle, allowing the model to jointly learn the optimal weights while being pruned. We do this by introducing a novel pruning schedule, named One-Cycle Pruning (OCP), which prunes from the beginning of training until its very end. Experiments conducted on a variety of combinations of architectures (VGG-16, ResNet-18), datasets (CIFAR-10, CIFAR-100, Caltech-101), and sparsity values (80%, 90%, 95%) show that OCP not only consistently outperforms common pruning schedules such as One-Shot, Iterative and Automated Gradual Pruning, but also drastically reduces the required training budget. Moreover, experiments following the Lottery Ticket Hypothesis show that OCP finds higher-quality and more stable pruned networks.
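
For illustration, the PyTorch-style sketch below shows what such a single training-pruning cycle can look like: the target sparsity is ramped from zero at the first step to its final value at the last step, and a global magnitude criterion is applied after every optimizer update. The ramp shape (a cubic polynomial), the helper names and the pruning criterion are assumptions made for the sake of the example, not the exact schedule defined in the paper.

import torch
import torch.nn as nn

def sparsity_at(step, total_steps, final_sparsity):
    # Target sparsity grows from 0 at the first step to final_sparsity at the last.
    # The cubic ramp is an assumption; the paper's schedule may differ.
    progress = min(step / max(total_steps - 1, 1), 1.0)
    return final_sparsity * (1.0 - (1.0 - progress) ** 3)

def apply_global_magnitude_mask(model, sparsity):
    # Zero out the smallest-magnitude weights across all conv/linear layers, globally.
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Conv2d, nn.Linear))]
    if not weights:
        return
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(sparsity * scores.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(scores, k).values
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())

# Usage inside an ordinary training loop (model, optimizer, criterion, data are placeholders):
# for step in range(total_steps):
#     optimizer.zero_grad()
#     loss = criterion(model(inputs), targets)
#     loss.backward()
#     optimizer.step()
#     apply_global_magnitude_mask(model, sparsity_at(step, total_steps, 0.90))

Because the mask is recomputed after every update rather than stored, weights zeroed early may recover later; a variant that keeps a persistent mask per layer is equally possible under the same schedule.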
No file deposited

Dates and versions

hal-03930230, version 1 (09-01-2023)

Cite

Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia. One-cycle pruning: pruning convnets with tight training budget. 2022 IEEE International Conference on Image Processing (ICIP), Oct 2022, Bordeaux, France. pp.4128-4132, ⟨10.1109/ICIP46576.2022.9897980⟩. ⟨hal-03930230⟩