Energy Complexity of Fully-Connected Layers
Abstract
The energy efficiency of processing convolutional neural networks (CNNs) is crucial for their deployment on low-power mobile devices. In our previous work, we introduced a simplified theoretical hardware-independent model of energy complexity for CNNs. This model was experimentally shown to asymptotically fit the power consumption estimates of CNN hardware implementations on different platforms. Here, we pursue the study of this model from a theoretical perspective in the context of fully-connected layers. We present two dataflows and compute their associated energy costs to obtain upper bounds on the optimal energy. Using the weak duality theorem, we further prove a matching lower bound when the buffer memory is divided into two fixed parts for inputs and outputs. The optimal energy complexity of fully-connected layers in the case of a partitioned buffer ensues. We intend to generalize these results to the case of convolutional layers.
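To make the notion of a dataflow's energy cost concrete, the sketch below counts DRAM transfers for a simple output-stationary dataflow of a fully-connected layer under a partitioned buffer. It is an illustrative toy model, not the paper's formal one: the function name, the uniform per-access cost, and the specific dataflow (weights read once, inputs re-streamed per output tile) are all assumptions made for exposition.

```python
import math

def fc_dataflow_energy(n, m, b_in, b_out):
    """Toy estimate of DRAM accesses for a fully-connected layer with
    n inputs and m outputs, using a buffer partitioned into b_in slots
    for inputs and b_out slots for outputs (hypothetical model).

    Dataflow: for each tile of b_out outputs held in the buffer, stream
    all n inputs through the b_in input slots; every weight is read
    exactly once from DRAM, and every output is written back once.
    """
    output_tiles = math.ceil(m / b_out)
    weight_reads = n * m                 # each of the n*m weights read once
    input_reads = n * output_tiles       # inputs re-read for every output tile
    output_writes = m                    # each output written back once
    return weight_reads + input_reads + output_writes

# Example: a 4096-input, 1024-output layer with a 512-slot buffer split evenly.
print(fc_dataflow_energy(n=4096, m=1024, b_in=256, b_out=256))
```

In this toy accounting the n*m weight transfers dominate, while the input term grows as the output partition shrinks; the paper's actual model, dataflows, and matching bounds may of course differ in their constants and assumptions.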