Conference Papers, Year: 2017

Scalable High-Performance Architecture for Convolutional Ternary Neural Networks on FPGA

Abstract

Thanks to their excellent performance on typical artificial intelligence problems, deep neural networks have drawn a lot of interest lately. However, this comes at the cost of large computational needs and high power consumption. Obtaining high accuracy on these difficult problems at an acceptable hardware cost is a challenge. To address it, we advocate the use of ternary neural networks (TNN) that, when properly trained, can reach results close to the state of the art obtained with floating-point arithmetic. We present a highly versatile, FPGA-friendly architecture for TNN in which both the number of bits of the input data and the level of parallelism can be varied at synthesis time, allowing throughput to be traded for hardware resources and power consumption. To demonstrate the efficiency of our proposal, we implement high-complexity convolutional neural networks on the Xilinx Virtex-7 VC709 FPGA board. While reaching better accuracy than comparable designs, we can target either high throughput or low power. We measure a throughput of up to 27,000 fps at ≈7 W or up to 8.36 TMAC/s at ≈13 W.
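For readers unfamiliar with the idea behind the abstract: in a ternary neural network the weights are constrained to the three values -1, 0 and +1, so a dot product reduces to additions and subtractions of the (fixed-point) inputs and needs no multipliers, which is what makes TNNs attractive on FPGAs. The sketch below is a minimal NumPy illustration of this idea, not the training procedure or hardware architecture described in the paper; the threshold value and function names are assumptions made for the example.

import numpy as np

def ternarize(weights, threshold=0.05):
    # Map real-valued weights to {-1, 0, +1}.
    # Illustrative rule only: the threshold and this quantization scheme
    # are assumptions for the sketch, not the paper's training method.
    t = np.zeros_like(weights, dtype=np.int8)
    t[weights > threshold] = 1
    t[weights < -threshold] = -1
    return t

def ternary_dot(inputs, ternary_weights):
    # With ternary weights, the accumulation uses only additions and
    # subtractions of the inputs; no multiplications are required.
    acc = 0
    for x, w in zip(inputs, ternary_weights):
        if w == 1:
            acc += x
        elif w == -1:
            acc -= x
    return acc

In hardware, the same accumulation maps to adders and subtractors operating on fixed-point inputs, which is why the number of input bits can be traded against resources and power, as the abstract notes.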
Main file

fpl17.pdf (521.03 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01563763, version 1 (18-07-2017)

Licence

CC0 - Public Domain Dedication

Identifiers

  • HAL Id: hal-01563763, version 1

Cite

Adrien Prost-Boucle, Alban Bourge, Frédéric Pétrot, Hande Alemdar, Nicholas Caldwell, et al. Scalable High-Performance Architecture for Convolutional Ternary Neural Networks on FPGA. 27th International Conference on Field Programmable Logic and Applications (FPL 2017), Sep 2017, Gent, Belgium. ⟨hal-01563763⟩
