Performance of precision auto-tuned neural networks
Abstract
While often used in embedded systems, neural networks can be costly in terms of memory and execution time. Reducing the precision used in neural networks can be beneficial in terms of performance and energy consumption. After applying PROMISE, a floating-point auto-tuning tool, to various neural networks, we obtained versions that use lower precision while preserving a required accuracy on the results. In this article, we present results on the memory and computation time gains obtained thanks to reduced precision, using both vectorized and non-vectorized code. We also show the impact on the execution time of PROMISE of parallelizing the Delta Debug algorithm it implements.