A supermartingale approach to Gaussian process based sequential design of experiments
Abstract
Gaussian process (GP) models have become a well-established framework
for the adaptive design of costly experiments, and notably of computer
experiments. GP-based sequential designs have been found practically
efficient for various objectives, such as global optimization
(estimating the global maximum or maximizer(s) of a function),
reliability analysis (estimating a probability of failure) or the
estimation of level sets and excursion sets. In this paper, we deal
with convergence properties of an important class of sequential design
approaches, known as stepwise uncertainty reduction (SUR) strategies.
Our approach relies on the key observation that the sequence of
residual uncertainty measures, in SUR strategies, is generally a
supermartingale with respect to the filtration generated by the
observations. We study the existence of SUR strategies and establish
generic convergence results for a broad class thereof. We also
introduce a special class of uncertainty measures defined in terms of
regular loss functions, which makes it easier to check that our
convergence results apply in particular cases. Applications of the
latter include proofs of convergence for the two main SUR strategies
proposed by Bect, Ginsbourger, Li, Picheny and Vazquez (Stat. Comp.,
2012). To the best of our knowledge, these are the first convergence
proofs for GP-based sequential design algorithms dedicated to the
estimation of excursion sets and their measure. Turning to global
optimization algorithms, we also show that the knowledge gradient
strategy can be cast in the SUR framework with an uncertainty
functional stemming from a regular loss, resulting in further
convergence results. We finally establish a new proof of convergence
for the expected improvement algorithm, which is the first proof for
this algorithm that applies to any GP with continuous sample paths.
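To make the key observation above concrete, here is a minimal sketch of the SUR setting (the notation is ours and may differ from the paper's). After $n$ evaluations, let $H_n$ denote a measure of residual uncertainty computed from the posterior distribution of the GP given the $\sigma$-algebra $\mathcal{F}_n$ generated by the observations. A SUR strategy selects the next evaluation point by minimizing the expected uncertainty one step ahead,
\[
X_{n+1} \in \operatorname*{arg\,min}_{x \in \mathbb{X}} \; \mathbb{E}\!\left[\, H_{n+1} \;\middle|\; \mathcal{F}_n,\; X_{n+1} = x \,\right],
\]
and the supermartingale property invoked in the paper reads
\[
\mathbb{E}\!\left[\, H_{n+1} \;\middle|\; \mathcal{F}_n \,\right] \;\le\; H_n \qquad \text{almost surely, for all } n \ge 0.
\]
For instance, in the excursion-set setting of Bect et al. (2012), one common choice is $H_n = \int_{\mathbb{X}} p_n(x)\,\bigl(1 - p_n(x)\bigr)\,\mathrm{d}x$, where $p_n(x)$ denotes the posterior probability that the response at $x$ exceeds the threshold of interest.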
Origin: Files produced by the author(s)