Conference paper. Year: 2020

A Framework for Semi-Automatic Precision and Accuracy Analysis for Fast and Rigorous Deep Learning

Abstract

Deep Neural Networks (DNNs) are a performance-hungry application. Floating-Point (FP) and custom floating-point-like arithmetic satisfy this hunger. While there is a need for speed, inference in DNNs does not seem to have any need for precision: many papers experimentally observe that DNNs can successfully run at almost ridiculously low precision. The aim of this paper is twofold. First, it sheds some theoretical light on why a DNN's FP accuracy stays high even at low FP precision. We observe that the loss of relative accuracy in the convolutional steps is recovered by the activation layers, which are extremely well-conditioned, and we give an interpretation of the link between precision and accuracy in DNNs. Second, the paper presents a software framework for semi-automatic FP error analysis of the inference phase of deep learning. Compatible with common Tensorflow/Keras models, it leverages the frugally-deep Python/C++ library to transform a neural network into C++ code in order to analyze the network's need for precision. This rigorous analysis is based on interval and affine arithmetic to compute absolute and relative error bounds for a DNN. We demonstrate our tool on several examples.
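To illustrate the kind of analysis the abstract describes, the minimal C++ sketch below propagates intervals through a toy dot product (standing in for one convolutional step) followed by a sigmoid activation. This is not the paper's actual framework: the weights, the 1e-3 input error, and the choice of sigmoid are illustrative assumptions, and a real tool would, as the abstract notes, combine interval arithmetic with affine arithmetic for tighter bounds.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// A closed interval [lo, hi] enclosing an exact real value.
struct Interval { double lo, hi; };

Interval add(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }

Interval mul(Interval a, Interval b) {
    double p[] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}

// The width of an interval bounds the absolute error of the enclosed value.
double width(Interval a) { return a.hi - a.lo; }

// sigmoid is monotonically increasing, so the image of an interval is
// exactly the interval spanned by the images of its endpoints.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }
Interval sigmoid(Interval a) { return {sigmoid(a.lo), sigmoid(a.hi)}; }

int main() {
    // Toy "convolutional step": a dot product whose weights and inputs are
    // each known only up to an absolute error of 1e-3, mimicking storage
    // in a low-precision format.
    const double eps = 1e-3;
    std::vector<double> w = {0.8, -1.2, 0.5};
    std::vector<double> x = {1.0, 2.0, -0.7};

    Interval acc = {0.0, 0.0};
    for (std::size_t i = 0; i < w.size(); ++i)
        acc = add(acc, mul({w[i] - eps, w[i] + eps}, {x[i] - eps, x[i] + eps}));

    std::printf("pre-activation:  [%.6f, %.6f], abs. error <= %.2e\n",
                acc.lo, acc.hi, width(acc));

    // Activation layer: on this input range the sigmoid contracts the
    // interval, recovering much of the accuracy lost in the accumulation.
    Interval out = sigmoid(acc);
    std::printf("post-activation: [%.6f, %.6f], abs. error <= %.2e\n",
                out.lo, out.hi, width(out));
    return 0;
}

On this toy example the sigmoid's derivative near the output (about 0.11 at the pre-activation value of roughly -1.95) shrinks the accumulated error bound by close to an order of magnitude, which is the "well-conditioned activation" effect the abstract refers to.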
Main file: accurateai_HAL.pdf (286.67 KB). Origin: files produced by the author(s).

Dates and versions

hal-02473300, version 1 (10-02-2020)

Cite

Christoph Lauter, Anastasia Volkova. A Framework for Semi-Automatic Precision and Accuracy Analysis for Fast and Rigorous Deep Learning. IEEE Symposium on Computer Arithmetic (ARITH), Jun 2020, Portland, United States. ⟨10.1109/arith48897.2020.00023⟩. ⟨hal-02473300⟩