Scalable Meta-Bayesian-Based Hyperparameter Optimization for Machine Learning
Abstract
Selecting one or more well-performing algorithms and configuring their key hyperparameters is a major challenge in advanced data analytics with Machine Learning (ML), yet it is an essential step in applying ML-based solutions to real-world problems. Bayesian Optimization (BO) is a popular method for optimizing black-box functions, but it scales poorly to large problems because it fails to leverage knowledge from previous applications: BO wastes function evaluations on poor design choices (such as badly configured ML hyperparameters). To address this issue, we propose Meta-Guided Bayesian Optimization (MGBO), which integrates meta-guidance into BO and thereby reuses knowledge from previous optimization cycles on similar tasks. This knowledge serves as a prior for deciding which parts of the input space to evaluate next; concretely, we guide BO with a functional ANOVA of configurations suggested by a meta-learning process. On a large collection of hyperparameter optimization benchmark problems, we demonstrate that MGBO is about 3 times faster than vanilla Bayesian optimization, and that it achieves new state-of-the-art performance on 9 classification datasets.
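To make the idea of importance-guided BO concrete, the following is a minimal sketch, not the authors' MGBO implementation: it uses a textbook Gaussian-process surrogate with expected improvement, and the names `meta_importance` and `meta_defaults` are hypothetical stand-ins for fANOVA-derived hyperparameter importances and defaults that meta-learning would supply from prior tasks. The guidance is illustrated by sampling candidates broadly along important dimensions while pinning unimportant ones near their meta-learned defaults.

```python
# Sketch of importance-guided Bayesian optimization (minimization).
# Assumptions: `meta_importance` / `meta_defaults` mimic fANOVA output
# from a meta-learning phase; the GP/EI machinery is generic, not MGBO.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def rbf_kernel(A, B, ls=0.3):
    # Squared-exponential kernel between row vectors of A and B.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Posterior mean and std of a zero-mean GP at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v**2).sum(0), 1e-12, None)
    return Ks @ alpha, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected drop below the incumbent `best`.
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def sample_candidates(n, dim, importance, defaults):
    # Meta-guidance: explore important dimensions over the full range,
    # keep unimportant ones near meta-learned defaults (assumed threshold).
    X = rng.uniform(0, 1, size=(n, dim))
    for d in range(dim):
        if importance[d] < 0.2:
            X[:, d] = np.clip(defaults[d] + 0.05 * rng.standard_normal(n), 0, 1)
    return X

def objective(x):
    # Stand-in for an expensive hyperparameter evaluation.
    return ((x - 0.3) ** 2).sum()

dim = 4
meta_importance = np.array([0.7, 0.6, 0.05, 0.05])  # assumed fANOVA importances
meta_defaults = np.array([0.5, 0.5, 0.3, 0.3])      # assumed prior defaults

X = sample_candidates(5, dim, meta_importance, meta_defaults)
y = np.array([objective(x) for x in X])
for _ in range(20):
    cand = sample_candidates(256, dim, meta_importance, meta_defaults)
    mu, sigma = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
print("best value found:", y.min())
```

In this sketch the meta-guidance effectively shrinks the search space before acquisition is evaluated, which is one plausible reading of how reused knowledge reduces wasted function evaluations; the paper's actual mechanism is the fANOVA-driven guidance described above.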