Gaussian random field models for function approximation under structural priors and adaptive design of experiments
Abstract
Whether in the natural sciences, in economics, or in industry, many modelling problems involve functions that are expensive to evaluate. This includes, for instance, flow simulations used in hydrogeology, where time-consuming solvers compute a misfit between observed and theoretical tracer concentrations. In such a case, the solver can be seen as a ``black box'' deterministic function returning the misfit as a function of parameter fields describing the subsurface physics and geology. Given observations, a typical hydrogeological problem is to ``invert'' that function, i.e. to uncover parameter fields that lead to an outcome close to the observations. This also includes Monte Carlo simulations, used in a broad class of applications ranging from finance to neutronics via climate science, where expectations are approximated by averages over a large number of particles. Such simulations may be used, e.g., to determine the best design in engineering, the most profitable product under legal constraints in insurance, or the worst-case scenario for a power plant subject to climatic hazards. All three examples boil down to solving an optimization problem, be it constrained or unconstrained, deterministic or noisy, a minimization or a maximization problem. What varies drastically is the nature of the objective function.

The contributions presented here essentially focus on stochastic methods for approximating, optimizing, and more generally studying multivariate functions under a very limited evaluation budget. In mathematical terms, we typically consider a function $f: D\subset \mathbb{R}^{d}\rightarrow \mathbb{R}$ and try to approximate it over $D$, locate its points of optima, and/or determine regions where $f$ takes a prescribed range of values, relying on a finite number of evaluation results $\{(\x_{i}, f(\x_i)), 1\leq i \leq n\}$. Obviously, further hypotheses on $f$ are needed in order to say something meaningful about its values outside of $\{\x_{1}, \dots, \x_{n}\}$. Now, approximating functions based on a finite number of point-wise evaluations is a problem that has been studied intensively throughout mathematics, and a plethora of interpolation and approximation techniques exists to reconstruct $f$ from available information.

A particularity of the approaches considered here is that, even though there is nothing intrinsically random about the objective function, probabilistic and statistical concepts are used to model it. More precisely, $f$ is typically seen as one realization of a Gaussian random field (GRF), a mathematical object coming from spatial statistics. While appealing to such an approach may seem unnatural at first, it has recently become commonplace in several application domains such as those cited above. In fact, approximation methods and sequential strategies based on Gaussian random field models have proved efficient for a variety of problems and goals, including global optimization and probability of failure estimation.
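To fix ideas on the machinery common to all of these methods, the standard Gaussian conditioning (simple kriging) formulas are recalled below; the notation ($Z$, $k$, $m_{n}$, $k_{n}$) is introduced here purely as a reminder and does not appear elsewhere in this summary. Modelling $f$ as a realization of a centred GRF $Z$ with covariance kernel $k$, the conditional distribution of $Z$ given $Z(\x_{1})=f(\x_{1}),\dots,Z(\x_{n})=f(\x_{n})$ is Gaussian, with mean and covariance
\begin{align*}
m_{n}(\x) &= \mathbf{k}_{n}(\x)^{\top}\,\mathbf{K}_{n}^{-1}\,\mathbf{f}_{n},\\
k_{n}(\x,\x') &= k(\x,\x') - \mathbf{k}_{n}(\x)^{\top}\,\mathbf{K}_{n}^{-1}\,\mathbf{k}_{n}(\x'),
\end{align*}
where $\mathbf{f}_{n}=(f(\x_{1}),\dots,f(\x_{n}))^{\top}$, $\mathbf{k}_{n}(\x)=(k(\x,\x_{1}),\dots,k(\x,\x_{n}))^{\top}$, and $\mathbf{K}_{n}=(k(\x_{i},\x_{j}))_{1\leq i,j\leq n}$ is assumed invertible. Structural priors act through the choice of $k$, while sequential evaluation strategies are driven by functionals of $m_{n}$ and $k_{n}$.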
The work presented in this habilitation thesis deals with three questions related to GRF models and their use in sequential strategies for optimizing and inverting deterministic functions under a limited evaluation budget. The first question, discussed in Chapter $1$, concerns the incorporation of prior information on $f$ through the type of Gaussian random field model considered. Since centred Gaussian random fields are characterized by their covariance, several kinds of so-called ``structural'' priors that can be imposed through the kernel are studied. It is shown that various properties, including (but not restricted to) symmetries and additivity, may be embedded within GRF modelling and that, when applicable, such prescribed features may partly compensate for the scarcity of observations.

Chapter $2$ focuses on sequential evaluation strategies dedicated to global optimization, and to further goals such as probability of excursion and excursion set estimation. A state-of-the-art criterion, the \textit{Expected Improvement}, is studied in finite-time settings and adapted to a noisy optimization problem with tunable noise variance. The latter makes it possible, in particular, to reduce computation time when optimizing a function $f$ whose evaluations are performed through Monte Carlo simulations. Strategies for learning regions of the parameter space where $f$ exceeds a given threshold are also studied, and a criterion for quantifying the uncertainty about such regions is introduced.

Finally, Chapter $3$ contains a discussion of various aspects of interdisciplinary research projects, and of applications and implementations of the methods presented in the two previous chapters. In particular, some results obtained on a stochastic hydrogeology test case are discussed, and open-source software implementations (R packages) covering part of the methods presented so far are described; a minimal usage sketch is given below. Each chapter is complemented by several publications (four for Chapter $1$, four for Chapter $2$, and two for Chapter $3$), and the Appendix contains a few additional proofs as well as a CV, a publication list, and a teaching statement of the author.
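To make the above concrete, here is a minimal sketch of the kind of GRF-based workflow such R packages support, combining a kriging fit with an Expected Improvement step. It uses the DiceKriging package and a toy one-dimensional function; the package choice, the toy function, and the hand-coded criterion are illustrative assumptions, not a verbatim excerpt from the software discussed in Chapter $3$.
\begin{verbatim}
## Minimal illustrative sketch (assumes the DiceKriging package is installed)
library(DiceKriging)

f <- function(x) sin(10 * x) + x                 # toy stand-in for an expensive black box
X <- data.frame(x = seq(0, 1, length.out = 7))   # n = 7 evaluation points
y <- apply(X, 1, f)

## Fit a GRF (kriging) model with constant trend and a Matern 5/2 kernel;
## covariance parameters are estimated by maximum likelihood.
model <- km(design = X, response = y, covtype = "matern5_2")

## Conditional (kriging) mean and standard deviation on a fine grid
grid <- data.frame(x = seq(0, 1, length.out = 201))
pred <- predict(model, newdata = grid, type = "UK")

## Expected Improvement for minimization, hand-coded from the textbook formula
ei <- function(m, s, ymin) {
  u <- (ymin - m) / s
  ifelse(s > 1e-12, (ymin - m) * pnorm(u) + s * dnorm(u), 0)
}
crit <- ei(pred$mean, pred$sd, min(y))
grid$x[which.max(crit)]                          # suggested next evaluation point
\end{verbatim}
Sequential strategies of the kind studied in Chapter $2$ iterate this last step: evaluate $f$ at the suggested point, update the model, and recompute the criterion.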