Conference Paper, Year: 2023

Performance of precision auto-tuned neural networks

Abstract

While often used in embedded systems, neural networks can be costly in terms of memory and execution time. Reducing the precision used in neural networks can be beneficial for performance and energy consumption. By applying the floating-point auto-tuning tool PROMISE to various neural networks, we obtained versions that use lower precision while satisfying a required accuracy on the results. In this article, we present the memory and computation time gains obtained thanks to reduced precision, using both vectorized and non-vectorized code. We also show how parallelizing the Delta Debug algorithm implemented in PROMISE affects its execution time.
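
To make the precision search concrete, the sketch below shows a simplified, Delta-Debug-style pass in Python that tries to demote groups of layers from double precision to lower precision as long as a user-supplied accuracy check still passes. The function names (run_network, accuracy_ok, lower_precision), the per-layer granularity, and the greedy splitting strategy are illustrative assumptions only; they do not reproduce PROMISE's actual interface or algorithm.

```python
import numpy as np

def accuracy_ok(candidate_dtypes, reference, run_network, tol=1e-3):
    # Hypothetical check: rerun the network with the candidate per-layer
    # dtypes and accept if the output stays within `tol` of the
    # double-precision reference.
    out = run_network(candidate_dtypes)
    return np.max(np.abs(out - reference)) <= tol

def lower_precision(layer_names, run_network, tol=1e-3):
    # Start with every layer in double precision and compute the reference output.
    dtypes = {name: np.float64 for name in layer_names}
    reference = run_network(dtypes)
    # Try to demote layers first to single, then to half precision.
    for low in (np.float32, np.float16):
        chunks = [list(layer_names)]
        while chunks:
            next_chunks = []
            for chunk in chunks:
                trial = dict(dtypes)
                for name in chunk:
                    trial[name] = low
                if accuracy_ok(trial, reference, run_network, tol):
                    dtypes = trial            # the whole chunk can be demoted
                elif len(chunk) > 1:
                    mid = len(chunk) // 2     # split the chunk and retry each half
                    next_chunks += [chunk[:mid], chunk[mid:]]
            chunks = next_chunks
    return dtypes
```

A call such as lower_precision(["conv1", "fc1", "fc2"], my_inference_fn) would return one dtype per layer, namely the lowest-precision assignment this greedy search finds that still meets the tolerance.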
Main file: POAT_2023.pdf (251.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04149501, version 1 (03-07-2023)

Identifiers

  • HAL Id: hal-04149501, version 1

Cite

Quentin Ferro, Stef Graillat, Thibault Hilaire, Fabienne Jézéquel. Performance of precision auto-tuned neural networks. MCSoC 2023 (16th IEEE International Symposium on Embedded Multicore/Manycore Systems-on-Chip), special session POAT (Performance Optimization and Auto-Tuning of Software on Multicore/Manycore Systems), Dec 2023, Singapore, Singapore. ⟨hal-04149501⟩