Preprint / Working paper, Year: 2023

Induced feature selection by structured pruning

Abstract

Sparsity-inducing techniques for neural networks have proven very useful in recent years: they yield lighter and faster networks that run more efficiently in resource-constrained environments such as mobile devices or heavily loaded servers. Such sparsity is generally imposed on the weights of the network, reducing the footprint of the architecture. In this work, we go one step further by imposing sparsity jointly on the weights and on the input data. This is achieved with a three-step process: 1) impose structured sparsity on the weights of the network; 2) trace zeroed blocks of weights back to the input features they connect to; 3) remove the useless weights and input features, and retrain the network. Pruning both the network and the input data not only allows for extreme reductions in parameters and operations, but also serves as an interpretation process: the pruned data reveal which input features the network actually needs to maintain its performance. Experiments on a variety of architectures and datasets (an MLP validated on MNIST and CIFAR10/100, and ConvNets, VGG16 and ResNet18, validated on CIFAR10/100 and CALTECH101 respectively) show that pruning the input data yields additional gains in total parameters and FLOPs while also increasing accuracy.
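As an illustration of the three-step process above, here is a minimal PyTorch sketch for the first layer of an MLP. It is not the authors' implementation: the column-norm pruning criterion, the function names, and the 50% sparsity level are assumptions made for the example.

```python
import torch
import torch.nn as nn

def prune_input_columns(layer: nn.Linear, sparsity: float) -> torch.Tensor:
    """Step 1: zero whole input columns of the weight matrix (structured sparsity)."""
    col_norms = layer.weight.detach().norm(dim=0)  # one norm per input feature
    k = int(sparsity * col_norms.numel())          # number of columns to drop
    threshold = col_norms.sort().values[k]
    keep = col_norms >= threshold                  # Step 2: mask of surviving input features
    with torch.no_grad():
        layer.weight[:, ~keep] = 0.0
    return keep

def shrink_layer(layer: nn.Linear, keep: torch.Tensor) -> nn.Linear:
    """Step 3: physically remove the zeroed columns before retraining."""
    new = nn.Linear(int(keep.sum()), layer.out_features, bias=layer.bias is not None)
    with torch.no_grad():
        new.weight.copy_(layer.weight[:, keep])
        if layer.bias is not None:
            new.bias.copy_(layer.bias)
    return new

# Usage: drop half of the input features of an MNIST-sized first layer.
first = nn.Linear(784, 256)
keep = prune_input_columns(first, sparsity=0.5)
first = shrink_layer(first, keep)
# Apply the same mask to the data before retraining: x_pruned = x[:, keep]
```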

Dates and versions

hal-04049940, version 1 (29-03-2023)

Identifiers

Cite

Nathan Hubens, Victor Delvigne, Matei Mancas, Bernard Gosselin, Marius Preda, et al. Induced feature selection by structured pruning. 2023. ⟨hal-04049940⟩