An Empirical Case of Gaussian Processes Learning in High Dimension: the Likelihood versus Leave-One-Out Rivalry
Conference communication, 2024

Abstract

Gaussian Processes (GPs) are semi-parametric models commonly employed in applications such as statistical modeling, sensitivity analysis and Bayesian optimization. GPs are particularly useful in small-data settings. However, they suffer from the curse of dimensionality: at a fixed number of data points, their predictive capability may degrade dramatically beyond 40 dimensions. In this talk, we investigate this phenomenon in detail. We illustrate the loss of performance with increasing dimension on simple quadratic functions and analyze its underlying symptoms, in particular a tendency of the prediction to become constant away from the data points. We show that the fundamental problem is one of learning and not one of representation capacity: maximum likelihood, the dominant loss function for such models, can miss regions of optimality of the GP hyperparameters. The failure of maximum likelihood is related to statistical model inadequacy: a model with a constant trend is sensitive to dimensionality when fitting quadratic functions, while it handles dimension growth much better for linear functions or Gaussian trajectories generated with the right covariance. Our experiments also show that the leave-one-out loss function is less prone to the curse of dimensionality, even for inadequate statistical models. A first step towards analyzing the curse of dimensionality in this context is taken, assuming a uniform sampling of the data points. As dimension increases, the cross-covariance terms concentrate around a mean value. This mean value is calculated and defines a limiting iso-covariance. The iso-covariance GP model has closed-form expressions for its prediction, likelihood and leave-one-out error, which explain why the a priori mean must increase with dimension.
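
As a rough illustration of the likelihood versus leave-one-out comparison discussed above, the following Python sketch (not the authors' code) selects the lengthscale of a simple GP either by maximizing the concentrated marginal likelihood or by minimizing the standard closed-form leave-one-out squared error of GP regression. It assumes an isotropic squared-exponential kernel, a uniform design in [0,1]^d, the quadratic test function sum_i (x_i - 0.5)^2, centred observations as a crude stand-in for a constant trend, and a grid search over lengthscales in place of a proper optimizer:

import numpy as np
from scipy.spatial.distance import cdist

def correlation(X1, X2, lengthscale):
    # isotropic squared-exponential correlation matrix
    return np.exp(-0.5 * cdist(X1, X2, "sqeuclidean") / lengthscale**2)

def neg_log_likelihood(y, R):
    # concentrated negative log marginal likelihood: the process variance is
    # profiled out and additive constants are dropped
    n = len(y)
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    sigma2 = (y @ alpha) / n
    return 0.5 * n * np.log(sigma2) + np.sum(np.log(np.diag(L)))

def loo_squared_error(y, R):
    # closed-form leave-one-out residuals: e_i = [R^{-1} y]_i / [R^{-1}]_{ii}
    R_inv = np.linalg.inv(R)
    residuals = (R_inv @ y) / np.diag(R_inv)
    return np.mean(residuals**2)

rng = np.random.default_rng(0)
n, nugget = 100, 1e-6
lengthscales = np.logspace(-1, 1.5, 40)
for d in (5, 20, 50):
    X = rng.uniform(size=(n, d))              # uniform design in [0, 1]^d
    y = np.sum((X - 0.5)**2, axis=1)          # simple quadratic test function
    y = y - y.mean()                          # crude constant-trend removal
    nll, loo = [], []
    for ls in lengthscales:
        R = correlation(X, X, ls) + nugget * np.eye(n)
        nll.append(neg_log_likelihood(y, R))
        loo.append(loo_squared_error(y, R))
    print(f"d = {d:3d}: lengthscale chosen by likelihood = "
          f"{lengthscales[np.argmin(nll)]:.2f}, by leave-one-out = "
          f"{lengthscales[np.argmin(loo)]:.2f}")

Because the leave-one-out residuals have a closed form, scanning hyperparameters with the leave-one-out criterion costs essentially the same as with the likelihood; whether the two criteria select similar lengthscales, and how this evolves with dimension, is the kind of behaviour examined in the talk.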
talk_SIAMUQ24_LeRiche_Gaudrie_FV.pdf (655.61 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04484082, version 1 (04-03-2024)

Identifiers

  • HAL Id: hal-04484082, version 1

Cite

David Gaudrie, Rodolphe Le Riche, Tanguy Appriou. An Empirical Case of Gaussian Processes Learning in High Dimension: the Likelihood versus Leave-One-Out Rivalry. SIAM Conference on Uncertainty Quantification (UQ24), Feb 2024, Trieste, Italy. ⟨hal-04484082⟩