Hardware-Aware Quantization for Multiplierless Neural Network Controllers
Abstract
Deep neural networks (DNNs) have been successfully applied to the approximation of non-linear control systems. These DNNs, deployed in safety-critical embedded systems, are relatively small but require high throughput. Our goal is to quantize the coefficients in order to reduce the arithmetic complexity while maintaining high numerical accuracy at inference. The key idea is to target multiplierless parallel architectures, where constant multiplications are replaced by bit-shifts and additions. We propose an adder-aware training that finds the quantized fixed-point coefficients minimizing the number of adders, thus improving area, latency, and power. With this approach, we demonstrate that a floating-point DNN for automatic cruise control can be retrained to have only power-of-two coefficients, while maintaining a similar mean squared error (MSE) and formally satisfying a safety check. We provide a push-button training and implementation framework that automatically generates the VHDL code.
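To illustrate the power-of-two idea behind the multiplierless architecture (this is only a sketch of the quantization principle, not the adder-aware training itself; the helper name quantize_pow2 and the word-length parameter are hypothetical), the following Python snippet rounds floating-point coefficients to the nearest signed power of two, so that each constant multiplication reduces to a bit-shift in fixed point:

```python
import numpy as np

def quantize_pow2(w, frac_bits=8):
    """Round each coefficient to the nearest signed power of two.
    Hypothetical helper for illustration only."""
    sign = np.sign(w)
    mag = np.maximum(np.abs(w), 2.0 ** (-frac_bits))  # avoid log2(0)
    # exponent of the nearest power of two, clipped to the fractional word length
    exp = np.clip(np.round(np.log2(mag)), -frac_bits, 0).astype(int)
    return sign * (2.0 ** exp), exp

# A multiplication by 2**e becomes a left shift (e >= 0) or right shift (e < 0)
# of the fixed-point input, so no hardware multiplier is needed.
w = np.array([0.37, -0.9, 0.06])
w_q, e = quantize_pow2(w)
print(w_q)  # e.g. [ 0.5    -1.      0.0625]
```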