Grow, prune or select data: which technique allows the most energy-efficient neural network training?
Abstract
The training energy efficiency of deep neural networks has become an extensively studied research topic in recent years. Some of the existing approaches seek to reduce the size of the architecture, either by starting the training with a large network and pruning it, or by beginning with a seed architecture and then growing it. Instead of compressing the architecture, other approaches aim to reduce the number of training examples through data selection.
While various approaches belonging to these two categories have been proposed, only a few works actually conduct energy measurements. Others merely mention potential gains in efficiency or rely on alternative evaluation metrics such as FLOPs. In this paper, we conduct a series of experiments both on a synthetic dataset and on image classification benchmarks in order to compare the impact of pruning, architecture growing and data selection on training energy consumption and prediction quality.
Our results show that growing maintains high prediction quality but brings limited energy gains when the resulting architecture is large. Pruning can offer large energy gains but also reduces accuracy, making it better suited to large models. Data selection provides energy gains that correlate with the selectivity rate, but at a cost in accuracy. We find that the effectiveness of each technique depends on its hyperparameters and on the architecture size.
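To make the compared families of techniques concrete, the following is a minimal NumPy sketch of two of them: magnitude-based weight pruning and loss-based data selection. It is an illustration only; the function names, the choice of magnitude pruning, and the highest-loss selection heuristic are assumptions for exposition and are not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Pruning (illustrative): zero out the smallest-magnitude weights of a layer ---
def prune_by_magnitude(weights, sparsity):
    """Set the `sparsity` fraction of smallest-magnitude weights to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# --- Data selection (illustrative): keep only a fraction of the training examples ---
def select_by_loss(per_example_loss, keep_fraction):
    """Return indices of the highest-loss examples (one common selection heuristic)."""
    k = max(1, int(keep_fraction * per_example_loss.size))
    return np.argsort(per_example_loss)[-k:]

# Toy usage: prune a random weight matrix and select 30% of a synthetic dataset.
weights = rng.normal(size=(64, 32))
pruned = prune_by_magnitude(weights, sparsity=0.5)
print("non-zero weights:", np.count_nonzero(pruned), "/", weights.size)

losses = rng.exponential(size=1000)
selected = select_by_loss(losses, keep_fraction=0.3)
print("selected examples:", selected.size, "of", losses.size)
```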
Domains
Computer Science [cs]

Origin: Files produced by the author(s)