Neural Network Precision Tuning Using Stochastic Arithmetic - Archive ouverte HAL
Conference Poster - Year: 2022

Neural Network Precision Tuning Using Stochastic Arithmetic

Abstract

Neural networks can be costly in terms of memory and execution time. Reducing their cost has become an objective, especially when they are integrated into embedded systems with limited resources. A possible solution consists in reducing the precision of the neuron parameters. In this article, we present how to use auto-tuning on neural networks to lower their precision while keeping an accurate output. To do so, we apply a floating-point auto-tuning tool to different kinds of neural networks. We show that, to some extent, the precision of several neural network parameters can be lowered without compromising the accuracy requirement.
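To make the general idea concrete, the sketch below shows a naive precision-lowering loop for a tiny dense network: each layer's weights are tentatively stored in a lower floating-point format and the change is kept only if the output error stays within an accuracy requirement. This is only an illustration of the principle described in the abstract, not the authors' stochastic-arithmetic-based auto-tuning tool; the network, the tolerance value, and all names here are assumptions made for the example.

```python
# Illustrative sketch only: greedy per-layer precision lowering under an
# accuracy requirement. Not the authors' tool; all values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# A small two-layer network with float64 ("double precision") parameters.
weights = [rng.standard_normal((8, 16)), rng.standard_normal((16, 4))]
biases = [rng.standard_normal(16), rng.standard_normal(4)]
x = rng.standard_normal((32, 8))          # batch of inputs

def forward(ws, bs, x):
    h = np.maximum(x @ ws[0] + bs[0], 0)  # ReLU hidden layer
    return h @ ws[1] + bs[1]

reference = forward(weights, biases, x)   # full-precision output
TOLERANCE = 1e-3                          # assumed accuracy requirement

for i in range(len(weights)):
    trial = list(weights)
    trial[i] = weights[i].astype(np.float16)      # candidate lower precision
    error = np.max(np.abs(forward(trial, biases, x) - reference))
    if error < TOLERANCE:
        weights[i] = trial[i]                     # accept the lower precision
        print(f"layer {i}: float16 accepted, max error {error:.2e}")
    else:
        print(f"layer {i}: float64 kept, max error {error:.2e}")
```

An actual auto-tuning tool explores mixed-precision assignments far more systematically (here, stochastic arithmetic is used to validate the accuracy of the lowered-precision results), but the accept-or-reject structure under an output accuracy constraint is the same.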
Main file
poster_NN_promise.pdf (596.45 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03780505, version 1 (19-09-2022)

Identifiers

  • HAL Id: hal-03780505, version 1

Cite

Quentin Ferro, Stef Graillat, Thibault Hilaire, Fabienne Jézéquel, Basile Lewandowski. Neural Network Precision Tuning Using Stochastic Arithmetic. Sparse Days conference, Jun 2022, Saint-Girons, France. ⟨hal-03780505⟩