Modeling Energy Consumption in Deep Learning Architectures Using Power Laws
Abstract
Modern Deep Learning architectures such as LSTMs, GRUs, and Transformers achieve remarkable performance on a wide range of sequence processing tasks. Yet their high computational cost and energy consumption have raised concerns about their environmental impact and the sustainability of Deep Learning. In this paper, we present an empirical study assessing the energy efficiency of training LSTM, GRU, and Transformer models on a GPU. By evaluating these models under various configurations, we characterize the relationship between energy consumption and readily available quantities such as hardware efficiency and the number of floating-point operations (FLOPs) required for inference. We show that it is possible to derive scaling laws that make energy consumption predictable, given an architecture and a GPU model.
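As a minimal sketch of the kind of scaling-law fit the abstract describes, the snippet below fits a power law of the form E = a · FLOPs^b by ordinary least squares in log space. The functional form, variable names, and sample measurements are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical measurements: inference FLOP counts for several model
# configurations and the corresponding measured training energy (joules).
flops = np.array([1e9, 5e9, 2e10, 8e10, 3e11])
energy_joules = np.array([1.2e4, 4.9e4, 1.6e5, 5.1e5, 1.7e6])

# A power law E = a * F^b is linear in log space:
#   log E = log a + b * log F,
# so a degree-1 least-squares fit on the logs recovers the exponent b
# and the prefactor a.
b, log_a = np.polyfit(np.log(flops), np.log(energy_joules), deg=1)
a = np.exp(log_a)

print(f"Fitted scaling law: E ~ {a:.3e} * FLOPs^{b:.3f}")

# Once fitted for a given architecture and GPU model, the law can be used
# to predict energy for an unseen configuration.
predicted = a * (1e12 ** b)
print(f"Predicted energy at 1e12 FLOPs: {predicted:.3e} J")
```

Fitting in log space is the standard approach for power laws, since multiplicative noise in the raw measurements becomes additive on the log scale.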