Improving the Energy Efficiency of CNN Inference on FPGA using Partial Reconfiguration
Abstract
With the growing demand for edge AI applications, Convolutional Neural Networks (CNNs), the most widely used deep learning models, require advanced solutions for highly energy-efficient deployment. This paper presents a novel approach to improve the energy efficiency of CNN inference on Field-Programmable Gate Arrays (FPGAs) using Partial Reconfiguration (PR). Our method decomposes the CNN topology into individual layers that are loaded at runtime through reconfiguration, so that inference occupies fewer resources at any given time, significantly reducing static power and overall energy consumption. To identify the conditions under which PR is practically efficient, we present a thorough design space exploration study with three CNN benchmarks, each evaluated across three different implementations. The results demonstrate that our PR approach achieves up to 3.88x and 1.67x energy savings compared to software and static hardware implementations, respectively. They also show that the benefits of PR grow with network depth, suggesting even larger gains for deeper networks, provided that fast, optimized reconfiguration controllers are used and that the increased hardware implementation complexity is handled through methodical system-level exploration.
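To make the layer-by-layer reconfiguration idea concrete, the sketch below outlines a host-side control loop that time-multiplexes a single reconfigurable region across the network's layers, assuming one partial bitstream per layer. The names `LayerSpec`, `load_partial_bitstream`, and `run_layer_accelerator` are hypothetical placeholders, not an API from the paper or any vendor toolchain.

```python
# Minimal sketch of layer-by-layer CNN inference via partial reconfiguration (PR).
# All helper names are hypothetical placeholders; the platform-specific calls are stubs.

from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class LayerSpec:
    name: str                 # e.g. "conv1", "pool1", "fc1"
    partial_bitstream: str    # path to the partial bitstream for this layer's accelerator


def load_partial_bitstream(path: str) -> None:
    """Reconfigure the reserved PR region with the given partial bitstream.

    In a real system this would invoke the device's reconfiguration controller;
    here it is left as a stub.
    """
    raise NotImplementedError("platform-specific PR controller call")


def run_layer_accelerator(activations: np.ndarray) -> np.ndarray:
    """Stream activations through the currently loaded layer accelerator (stub)."""
    raise NotImplementedError("platform-specific DMA / accelerator invocation")


def infer(layers: List[LayerSpec], input_tensor: np.ndarray) -> np.ndarray:
    """Run one inference by time-multiplexing a single PR region across layers.

    Only one layer's accelerator occupies the FPGA at a time, which is the
    mechanism described in the abstract for cutting static power: the powered
    logic is sized for one layer rather than the whole network.
    """
    activations = input_tensor
    for layer in layers:
        load_partial_bitstream(layer.partial_bitstream)   # runtime reconfiguration
        activations = run_layer_accelerator(activations)  # compute this layer
    return activations
```

The trade-off this loop exposes is the one studied in the paper: each `load_partial_bitstream` call adds reconfiguration time and energy, so the net gain depends on a fast reconfiguration controller and on how the layers are partitioned.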
Domains
Computer Science [cs]
Origin: Files produced by the author(s)