Conference Papers, Year: 2022

Neural Network Precision Tuning Using Stochastic Arithmetic

Abstract

Neural networks can be costly in terms of memory and execution time. Reducing their cost has become an objective, especially when they are integrated into embedded systems with limited resources. A possible solution consists in reducing the precision of the neuron parameters. In this article, we present how to use auto-tuning on neural networks to lower their precision while keeping the output accurate. To do so, we apply a floating-point auto-tuning tool to different kinds of neural networks. We show that, to some extent, the precision of several neural network parameters can be lowered without compromising the accuracy requirement.
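The idea outlined in the abstract can be illustrated, very schematically, by evaluating a network at lowered working precisions and comparing the result against a high-precision reference. The sketch below is a hypothetical illustration in plain NumPy, not the authors' auto-tuning tool or stochastic-arithmetic method: the toy multilayer perceptron, its layer sizes, and the function name `mlp_forward` are assumptions made for the example, which simply reports the output error in float32 and float16 relative to float64, the kind of accuracy requirement a precision-tuning tool would enforce.

```python
import numpy as np

def mlp_forward(x, weights, biases, dtype=np.float64):
    """Forward pass of a small fully connected network, evaluated at a given precision."""
    a = x.astype(dtype)
    for W, b in zip(weights, biases):
        # Cast each layer's parameters to the working precision before use.
        a = np.tanh(W.astype(dtype) @ a + b.astype(dtype))
    return a

# Build a toy network with random parameters (layer sizes chosen arbitrarily).
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
x = rng.standard_normal(4)

# High-precision reference output.
reference = mlp_forward(x, weights, biases, np.float64)

# Lower the precision and measure how far the output drifts from the reference.
for dtype in (np.float32, np.float16):
    approx = mlp_forward(x, weights, biases, dtype)
    err = np.max(np.abs(approx.astype(np.float64) - reference))
    print(f"{dtype.__name__:>8}: max abs error = {err:.2e}")
```

In this toy setting, the reported error can be checked against a user-supplied accuracy requirement to decide whether a given lower precision is acceptable for the parameters involved.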
Main file: NSV_2022 (1).pdf (372.37 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03682645, version 1 (31-05-2022)
hal-03682645, version 2 (20-07-2022)

Identifiers

  • HAL Id: hal-03682645, version 2

Cite

Quentin Ferro, Stef Graillat, Thibault Hilaire, Fabienne Jézéquel, Basile Lewandowski. Neural Network Precision Tuning Using Stochastic Arithmetic. NSV'22, 15th International Workshop on Numerical Software Verification, Aug 2022, Haifa, Israel. ⟨hal-03682645v2⟩