Journal article, Neural Networks, 2024

On energy complexity of fully-connected layers

Jiří Šíma
Petra Vidnerová
Jérémie Cabessa

Abstract

The massive increase in the size of deep neural networks (DNNs) is accompanied by a significant increase in the energy consumption of their hardware implementations, which is a critical obstacle to their widespread deployment in low-power mobile devices. In our previous work, we proposed and experimentally validated an abstract, hardware-independent model of energy complexity for convolutional neural networks (CNNs). Based on this model, we provide a theoretical analysis of the energy complexity of computing a fully-connected layer when its inputs, outputs, and weights are transferred between two kinds of memory (DRAM and Buffer). First, we establish a general lower bound on this energy complexity. Then, we present two dataflows and calculate their energy costs, which yield the corresponding upper bounds. In the case of a partitioned Buffer, we prove by the weak duality theorem of linear programming that the lower and upper bounds coincide up to an additive constant, thereby establishing the optimal energy complexity. Finally, the asymptotically optimal quadratic energy complexity of fully-connected layers is validated experimentally by estimating their energy consumption on the Simba and Eyeriss hardware platforms.
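To make the abstract model concrete, below is a minimal Python sketch (not the authors' code) that counts DRAM-Buffer data transfers for one simple output-stationary dataflow of a fully-connected layer. The function name, the particular dataflow, and the parameters n (inputs), m (outputs), and B (Buffer capacity) are illustrative assumptions, not taken from the paper.

import math

def fc_energy_output_tiled(n, m, B):
    """Energy, counted as the number of DRAM<->Buffer transfers, of a
    simple output-stationary dataflow: keep a tile of t output partial
    sums in the Buffer, stream the n inputs once per tile, and read
    each weight exactly once. Illustrative sketch only."""
    assert B >= 2, "Buffer must hold at least one input and one output"
    t = B - 1                  # output slots; one slot streams the inputs
    tiles = math.ceil(m / t)   # number of output tiles
    reads_weights = n * m      # every weight crosses DRAM->Buffer once
    reads_inputs = n * tiles   # inputs are re-streamed for each tile
    writes_outputs = m         # finished outputs go back to DRAM
    return reads_weights + reads_inputs + writes_outputs

# The n*m weight traffic alone already forces Theta(n*m) transfers,
# which matches the quadratic energy complexity established in the paper:
print(fc_energy_output_tiled(n=4096, m=4096, B=1024))

A larger Buffer reduces only the input re-streaming term n*ceil(m/(B-1)); the dominant n*m weight term is unavoidable, which is the intuition behind the quadratic lower bound.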
No file deposited

Dates and versions

hal-04623751, version 1 (25-06-2024)

Identifiers

Cite

Jiří Šíma, Petra Vidnerová, Jérémie Cabessa. On energy complexity of fully-connected layers. Neural Networks, 2024, 178, ⟨10.1016/j.neunet.2024.106419⟩. ⟨hal-04623751⟩