Progressive Layer-based Compression for Convolutional Spiking Neural Network
Abstract
Spiking neural networks (SNNs) have attracted growing interest in recent years due to their low energy consumption, which addresses the increasing power demands of real-life machine learning applications. Deploying these bio-inspired networks on neuromorphic hardware, where their energy consumption can be reduced even further, is another exciting aspect of this technology. Many works have accordingly studied how to improve SNNs in terms of both performance and hardware implementation. This paper presents a progressive layer-based compression approach applied to convolutional spiking neural networks trained either with Spike-Timing-Dependent Plasticity (STDP) or with Surrogate Gradient (SG). We also study the effect of this approach when the networks are deployed on the SpiNNaker neuromorphic platform. The approach, inspired by neuroplasticity, produces highly compressed networks (up to a 90% compression rate per layer) while preserving most of the network's performance, as shown by experimental results on the MNIST, FMNIST, Caltech face/motorbike, and CIFAR-10 datasets.
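The abstract does not spell out the compression mechanism, so the following is only a minimal sketch of what progressive layer-based compression could look like, assuming magnitude-based weight pruning applied one layer at a time with a progressively increasing rate. The toy model, the pruning schedule (0.5, 0.7, 0.9), and the names `prune_layer_` and `retrain` are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

def prune_layer_(conv: nn.Conv2d, rate: float) -> None:
    """Zero out the `rate` fraction of this layer's weights with the
    smallest magnitude (in-place, hence the trailing underscore)."""
    with torch.no_grad():
        w = conv.weight
        threshold = torch.quantile(w.abs().flatten(), rate)
        w.mul_((w.abs() > threshold).float())

# Toy convolutional stack standing in for the weight layers of a
# convolutional SNN (spiking dynamics omitted for brevity).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5),
    nn.Conv2d(16, 32, kernel_size=5),
)

# Compress the network progressively, layer by layer: raise the pruning
# rate in steps, and after each step the network would normally be
# retrained (with STDP or surrogate gradient) to recover accuracy
# before pruning further.
for layer in model:
    if isinstance(layer, nn.Conv2d):
        for rate in (0.5, 0.7, 0.9):
            prune_layer_(layer, rate)
            # retrain(model)  # hypothetical recovery step between prunes
```

A per-layer, stepwise schedule like this is one plausible reading of "progressive layer-based"; the paper's neuroplasticity-inspired criterion for selecting which connections to remove may differ from plain magnitude pruning.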