Conference paper Year: 2016

Compression of Deep Neural Networks on the Fly

Guillaume Soulie
  • Role: Author
  • PersonId: 989480
Maëlys Robert
  • Role: Author
  • PersonId: 989481

Abstract

Thanks to their state-of-the-art performance, deep neural networks are increasingly used for object recognition. To achieve the best results, they require millions of parameters to be trained. However, when targeting embedded applications, the size of these models becomes problematic, effectively ruling out their use on smartphones and other resource-limited devices. In this paper we introduce a novel compression method for deep neural networks that is performed during the learning phase. It consists of adding an extra regularization term to the cost function of fully-connected layers. We combine this method with Product Quantization (PQ) of the trained weights for higher savings in storage consumption. We evaluate our method on two data sets (MNIST and CIFAR10), on which we achieve significantly larger compression rates than state-of-the-art methods.
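The abstract does not give the exact form of the paper's regularizer, so it is not reproduced here. As a rough, self-contained illustration of the Product Quantization step mentioned above, the sketch below quantizes a trained fully-connected weight matrix with one k-means codebook per block of columns; the function names, `sub_dim`, and `n_codewords` are illustrative choices, not taken from the paper.

```python
# A rough, generic Product Quantization (PQ) sketch -- NOT the authors'
# exact pipeline; sub_dim, n_codewords and all names are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def pq_compress(W, sub_dim=4, n_codewords=64):
    """Split each row of W into sub-vectors of length sub_dim and learn
    one k-means codebook per block of columns."""
    out_dim, in_dim = W.shape
    assert in_dim % sub_dim == 0, "in_dim must be divisible by sub_dim"
    n_blocks = in_dim // sub_dim
    codebooks, codes = [], []
    for b in range(n_blocks):
        block = W[:, b * sub_dim:(b + 1) * sub_dim]            # (out_dim, sub_dim)
        km = KMeans(n_clusters=n_codewords, n_init=4, random_state=0).fit(block)
        codebooks.append(km.cluster_centers_)                   # (n_codewords, sub_dim)
        codes.append(km.labels_.astype(np.uint8))               # 1 byte per sub-vector
    return codebooks, np.stack(codes, axis=1)                   # codes: (out_dim, n_blocks)

def pq_decompress(codebooks, codes):
    """Rebuild an approximate weight matrix from codebooks and indices."""
    blocks = [codebooks[b][codes[:, b]] for b in range(codes.shape[1])]
    return np.concatenate(blocks, axis=1)

# Example: quantize a trained 256x512 fully-connected weight matrix.
W = np.random.randn(256, 512).astype(np.float32)
codebooks, codes = pq_compress(W)
W_hat = pq_decompress(codebooks, codes)
print("reconstruction MSE:", float(np.mean((W - W_hat) ** 2)))
```

With these illustrative settings, each 4-float sub-vector (16 bytes in float32) is replaced by a one-byte index, at the cost of storing small per-block codebooks; in the paper, this kind of quantization is applied to weights already shaped by the regularization term, which is what the abstract credits for the larger compression rates.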
File not deposited

Dates and versions

hal-01371021, version 1 (23-09-2016)

Identifiers

  • HAL Id: hal-01371021, version 1

Cite

Guillaume Soulie, Vincent Gripon, Maëlys Robert. Compression of Deep Neural Networks on the Fly. ICANN 2016: 25th International Conference on Artificial Neural Networks, Sep 2016, Barcelona, Spain. pp. 153-160. ⟨hal-01371021⟩