Conference paper, Year: 2023

Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?

Abstract

Hyperparameter optimization (HPO) is crucial for fine-tuning machine learning models but can be computationally expensive. To reduce costs, Multi-fidelity HPO (MF-HPO) leverages intermediate accuracy levels in the learning process and discards low-performing models early on. We compared various representative MF-HPO methods against a simple baseline on classical benchmark data. The baseline involved discarding all models except the Top-K after training for only one epoch, followed by further training to select the best model. Surprisingly, this baseline achieved similar results to its counterparts, while requiring an order of magnitude less computation. Upon analyzing the learning curves of the benchmark data, we observed a few dominant learning curves, which explained the success of our baseline. This suggests that researchers should (1) always use the suggested baseline in benchmarks and (2) broaden the diversity of MF-HPO benchmarks to include more complex cases.
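
The Top-K baseline described in the abstract is simple to express in code. The following is a minimal, self-contained Python sketch of the procedure: train every sampled configuration for a single epoch, keep only the K best by validation score, then train those survivors to the full budget and return the winner. The candidate sampling and training functions (`sample_candidates`, `train_for`) and the default parameters are illustrative placeholders, not the authors' implementation.

```python
import random

# Sketch of the one-epoch Top-K baseline from the abstract.
# `sample_candidates` and `train_for` are illustrative stand-ins.

def sample_candidates(n):
    """Sample n hyperparameter configurations (here: a dummy learning-rate search space)."""
    return [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(n)]

def train_for(config, epochs):
    """Return a validation score after training `config` for `epochs` epochs.
    Replaced here by a synthetic, monotonically improving learning curve."""
    base = 1.0 - abs(config["lr"] - 0.01)  # pretend lr=0.01 is optimal
    return base * (1 - 0.5 ** epochs)      # score improves with more epochs

def one_epoch_topk_baseline(n_candidates=100, k=5, full_epochs=50, seed=0):
    random.seed(seed)
    candidates = sample_candidates(n_candidates)

    # Step 1: train every candidate for a single epoch (the cheap fidelity).
    scores_1ep = [(train_for(c, epochs=1), c) for c in candidates]

    # Step 2: keep only the Top-K candidates, discard the rest.
    top_k = [c for _, c in sorted(scores_1ep, key=lambda t: t[0], reverse=True)[:k]]

    # Step 3: train the survivors to the full budget and return the best one.
    final = [(train_for(c, epochs=full_epochs), c) for c in top_k]
    return max(final, key=lambda t: t[0])

if __name__ == "__main__":
    best_score, best_config = one_epoch_topk_baseline()
    print(f"best config: {best_config}, score: {best_score:.3f}")
```

Under this scheme the total cost is roughly n_candidates one-epoch runs plus K full-budget runs, which is where the order-of-magnitude savings reported in the abstract come from when K is small relative to the number of candidates.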

Dates and versions

hal-04219301, version 1 (27-09-2023)

Identifiers

Cite

Romain Egele, Isabelle Guyon, Yixuan Sun, Prasanna Balaprakash. Is One Epoch All You Need For Multi-Fidelity Hyperparameter Optimization?. ESANN 2023 - 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Oct 2023, Bruges / Hybrid, Belgium. ⟨hal-04219301⟩