Efficiency Analysis of Artificial vs. Spiking Neural Networks on FPGAs
Abstract
Artificial Neural Networks (ANNs) incur huge costs in terms of processing power, memory performance, and energy consumption, whereas an average human brain operates within a power budget of roughly 20 W. Brain-inspired approaches such as Spiking Neural Networks (SNNs) are therefore expected to improve efficiency to an unprecedented extent. However, beyond the spike coding aspects currently addressed by numerous investigations, research also needs to find solutions for the practical design of future neuromorphic hardware that ensures very low-power processing. This paper investigates these questions through a pragmatic comparison of deep Convolutional Neural Networks (CNNs) and their equivalent SNNs, based on the implementation and measurement of a set of CNN image classification benchmarks on FPGA devices. Results show that SNNs are clearly less energy efficient than their equivalent CNNs in the general case. They further indicate that, on top of ongoing progress in spike modeling theory (e.g. spike encoding, learning), neuromorphic accelerators also have to address important issues in the reality of RTL development and silicon implementation, among which are the trade-off between sparsity and static/idle power consumption, the ability to support high levels of parallelism, memory performance, scalability, and spiking convolutions.