Which Gaussian Process for Bayesian Optimization? - Archive ouverte HAL
Conference paper, Year: 2022

Which Gaussian Process for Bayesian Optimization?

Abstract

Bayesian Optimization (BO) is a popular approach to the global optimization of costly non-convex functions in moderate dimension. BO is based on Gaussian processes that are iteratively learned and serve as a model to control the exploration-exploitation trade-off through an acquisition criterion. Much of the recent research on BO has focused on new acquisition criteria and on specialization to specific problems (e.g., uncertain or multi-fidelity contexts). In this talk, on the contrary, we consider a standard black-box single-objective problem and a standard acquisition criterion, the expected improvement. The focus is on the Gaussian process (GP) and how it can be modified to improve the optimization. Three directions for progress are discussed: the trend of the GP, the adaptation to higher dimensions through a linear embedding, and the collaboration between a global and a local GP through a trust-region mechanism.
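To make the standard setting of the abstract concrete, the sketch below shows an EI-based BO loop with an iteratively re-learned GP on a cheap stand-in function. It is only an illustration, not the authors' implementation: it assumes scikit-learn's GaussianProcessRegressor as the GP model, and the test function, kernel, and budget are arbitrary choices.

```python
# Minimal Bayesian optimization loop with expected improvement (EI),
# assuming scikit-learn's GaussianProcessRegressor as the GP model.
# Illustrative sketch only; not the implementation discussed in the talk.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def objective(x):
    # Cheap stand-in for the costly black-box function (1-D for readability).
    return np.sin(3.0 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
lb, ub = -2.0, 2.0

# Initial design of experiments.
X = rng.uniform(lb, ub, size=(5, 1))
y = objective(X).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)

for _ in range(15):                           # BO iterations (evaluation budget)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, y)                              # re-learn the GP at each iteration

    # Evaluate EI on a dense grid of candidates (fine in 1-D; in practice
    # the acquisition is maximized with a dedicated optimizer).
    Xc = np.linspace(lb, ub, 1000).reshape(-1, 1)
    mu, sigma = gp.predict(Xc, return_std=True)
    y_best = y.min()
    imp = y_best - mu                         # improvement over current best (minimization)
    z = np.where(sigma > 0, imp / sigma, 0.0)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    ei[sigma == 0] = 0.0

    x_next = Xc[np.argmax(ei)].reshape(1, 1)  # candidate with highest expected improvement
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)], "best f:", y.min())
```

The talk's three directions (GP trend, linear embedding for higher dimensions, global/local GP with a trust region) would modify the GP model used inside this loop rather than the EI acquisition itself.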
talk_jopt22_r_le_riche.pdf (3.61 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03679142, version 1 (25-05-2022)

Identifiers

  • HAL Id: hal-03679142, version 1

Cite

Rodolphe Le Riche, David Gaudrie, Victor Picheny, Youssef Diouane, Adrien Spagnol, et al.. Which Gaussian Process for Bayesian Optimization ?. 2022 Optimization Days, GERAD, May 2022, Montreal, Canada. ⟨hal-03679142⟩