Bayesian optimization for NAS with pretrained deep ensembles
Abstract
Neural architecture search (NAS) is the use of a search algorithm to find, according to one or more criteria, the most suitable neural architecture for a given task within a search space of neural network architectures. Among the search methods prominent in the NAS literature are reinforcement learning (RL) [1], evolutionary algorithms (EAs) [2, 3], and Bayesian optimization (BO) [4, 5]. In this paper, we present a BO-based method in which, instead of the Gaussian processes (GPs) usually associated with BO, deep ensembles serve as performance predictors for candidate neural networks. Specifically, we explore pretraining the ensemble networks as a way to mitigate their greater need for data compared to GPs. The idea is to use pretraining to accelerate the training of the deep ensemble, so that it reaches good predictive performance early in the optimization process. In NAS, we have access to zero-cost metrics, which can be computed quickly and without training the candidate networks, training being by far the most costly part of evaluating them. Using existing and widely used NAS benchmarks, we show the improvement that pretraining brings to the deep-ensemble-based method, thereby accelerating the search process.
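To make the idea concrete, below is a minimal, illustrative sketch of a BO loop step with a deep-ensemble surrogate that is first pretrained on a zero-cost metric and then fine-tuned on the few architectures evaluated so far. This is not the authors' implementation: the class and function names (`ArchPredictor`, `pretrain_ensemble`, `finetune_ensemble`), the MLP surrogate, the UCB acquisition, and the stand-in zero-cost and accuracy values are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class ArchPredictor(nn.Module):
    """Small MLP mapping an architecture encoding to a scalar score (hypothetical surrogate)."""

    def __init__(self, encoding_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoding_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def fit(model, x, y, epochs=200, lr=1e-3):
    """Plain full-batch regression fit; ensemble diversity comes from random initialization."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()


def pretrain_ensemble(ensemble, encodings, zero_cost_scores):
    # Pretraining phase: zero-cost metrics are cheap to compute for every
    # candidate, so each ensemble member is first fitted to them.
    for member in ensemble:
        fit(member, encodings, zero_cost_scores)


def finetune_ensemble(ensemble, encodings, accuracies):
    # Fine-tuning phase: the few architectures actually trained so far
    # provide true accuracies; each member is refined on this small dataset.
    for member in ensemble:
        fit(member, encodings, accuracies, epochs=100)


def predict(ensemble, encodings):
    # Ensemble mean acts as the surrogate prediction; the spread across
    # members serves as an uncertainty estimate for the acquisition function.
    with torch.no_grad():
        preds = torch.stack([m(encodings) for m in ensemble])
    return preds.mean(0), preds.std(0)


if __name__ == "__main__":
    torch.manual_seed(0)
    dim, n_pool = 16, 256
    pool = torch.rand(n_pool, dim)                 # candidate architecture encodings
    zero_cost = pool.sum(dim=1) / dim              # stand-in zero-cost metric
    true_acc = 0.6 + 0.3 * torch.sin(pool.sum(1))  # stand-in validation accuracy

    ensemble = [ArchPredictor(dim) for _ in range(5)]
    pretrain_ensemble(ensemble, pool, zero_cost)

    observed = torch.randperm(n_pool)[:8]          # architectures trained so far
    finetune_ensemble(ensemble, pool[observed], true_acc[observed])

    mean, std = predict(ensemble, pool)
    ucb = mean + 1.0 * std                         # acquisition: upper confidence bound
    print("next architecture to train:", int(ucb.argmax()))
```

In this sketch, pretraining on the zero-cost metric gives every ensemble member a reasonable prior over the search space before any candidate network has been trained, which is the mechanism the abstract describes for obtaining good predictions early in the search.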
Origin | Files produced by the author(s)
---|---
License | Copyright (All rights reserved)