Polynomial meta-models with canonical low-rank approximations: numerical insights and comparison to sparse polynomial chaos expansions
Abstract
The growing need for uncertainty analysis of complex computational models has led to
an expanding use of meta-models across engineering and the sciences. The efficiency of meta-modeling
techniques relies on their ability to provide statistically equivalent analytical representations
based on relatively few evaluations of the original model. Polynomial chaos
expansions (PCE) have proven a powerful tool for developing meta-models in a wide range
of applications; the key idea thereof is to expand the model response onto a basis made of
multivariate polynomials obtained as tensor products of appropriate univariate polynomials.
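For concreteness, a truncated PCE of a model response $Y = \mathcal{M}(X_1, \dots, X_M)$ with independent inputs can be written (the notation is generic and not necessarily that of the paper) as

    $Y \approx \sum_{\boldsymbol{\alpha} \in \mathcal{A}} y_{\boldsymbol{\alpha}} \, \Psi_{\boldsymbol{\alpha}}(X_1, \dots, X_M), \qquad \Psi_{\boldsymbol{\alpha}}(x_1, \dots, x_M) = \prod_{i=1}^{M} \phi_{\alpha_i}^{(i)}(x_i),$

where $\phi_k^{(i)}$ is the degree-$k$ member of a family of univariate polynomials orthonormal with respect to the distribution of the $i$-th input, $\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_M)$ is a multi-index of degrees collected in the truncation set $\mathcal{A}$, and the $y_{\boldsymbol{\alpha}}$ are deterministic coefficients estimated from evaluations of the original model.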
The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the
exponential increase of the basis size with increasing input dimension. To address this limitation,
the sparse PCE technique has been proposed, in which the expansion is carried out
on only a few relevant basis terms that are automatically selected by a suitable algorithm.
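As an order-of-magnitude illustration (the numbers are generic, not taken from the paper): the full basis of all multivariate polynomials of total degree at most $p$ in $M$ inputs contains $\binom{M+p}{p}$ terms, e.g. 10,626 terms for $M = 20$ and $p = 4$, whereas a sparse PCE retains only a small subset $\mathcal{A} \subset \mathbb{N}^M$ with $|\mathcal{A}| \ll \binom{M+p}{p}$.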
An alternative for developing meta-models with polynomial functions in high-dimensional
problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting
the tensor-product structure of the multivariate basis, LRA can provide polynomial
representations in highly compressed formats. Through extensive numerical investigations,
we herein first shed light on issues relating to the construction of canonical LRA with a
particular greedy algorithm involving a sequential updating of the polynomial coefficients
along separate dimensions. Specifically, we examine the selection of the optimal rank, the stopping
criteria for updating the polynomial coefficients, and error estimation. We then compare
canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications
based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse
PCE when the number of available model evaluations is small relative to the
input dimension, a situation that is often encountered in real-life problems. By introducing
the conditional generalization error, we further demonstrate that canonical LRA tend to
outperform sparse PCE in the prediction of extreme model responses, which is critical in
reliability analysis.
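For readers unfamiliar with the canonical format, a rank-$R$ representation of a model with $M$ inputs can be written (in generic notation that does not necessarily match the paper's) as

    $\widehat{Y} = \sum_{l=1}^{R} b_l \prod_{i=1}^{M} v_l^{(i)}(X_i), \qquad v_l^{(i)}(X_i) = \sum_{k=0}^{p} z_{k,l}^{(i)} P_k^{(i)}(X_i),$

where the $P_k^{(i)}$ are univariate polynomials in the $i$-th input, so that the number of unknown coefficients grows only linearly with $M$. The following Python sketch illustrates one possible greedy construction in this spirit: rank-one terms are fitted to the current residual, and within each term the polynomial coefficients are updated by least squares along one dimension at a time. It is a minimal illustration resting on assumptions not stated in the abstract (Legendre polynomials with inputs scaled to [-1, 1], a fixed rank and degree, a simple relative-improvement stopping rule, no normalization or weight-updating step); it is not the authors' implementation.

    # Illustrative sketch of a greedy canonical-LRA construction by sequential
    # least-squares updates along separate dimensions. Rank, degree, basis and
    # stopping rule are assumptions made here for illustration only.
    import numpy as np
    from numpy.polynomial import legendre


    def legendre_basis(x, degree):
        """Univariate Legendre design matrix with columns P_0..P_degree, x in [-1, 1]."""
        return np.column_stack([legendre.legval(x, [0] * k + [1]) for k in range(degree + 1)])


    def fit_canonical_lra(X, y, rank=3, degree=3, n_sweeps=10, tol=1e-8):
        """Fit rank-one corrections greedily; each correction is refined by updating
        the polynomial coefficients of one dimension at a time, the others fixed."""
        n, M = X.shape
        bases = [legendre_basis(X[:, i], degree) for i in range(M)]  # n x (degree+1) each
        terms = []                     # one list of coefficient vectors z^(i) per rank-one term
        residual = y.copy()

        for _ in range(rank):
            # start the new rank-one term from constant univariate factors
            z = [np.zeros(degree + 1) for _ in range(M)]
            for zi in z:
                zi[0] = 1.0
            prev_err = np.inf
            for _ in range(n_sweeps):                        # sweeps over the dimensions
                for i in range(M):                           # update dimension i, others fixed
                    others = np.ones(n)
                    for j in range(M):
                        if j != i:
                            others *= bases[j] @ z[j]
                    A = bases[i] * others[:, None]           # design matrix for dimension i
                    z[i], *_ = np.linalg.lstsq(A, residual, rcond=None)
                fit = np.ones(n)
                for j in range(M):
                    fit *= bases[j] @ z[j]
                err = np.mean((residual - fit) ** 2)
                if prev_err - err < tol * prev_err:          # assumed stopping criterion
                    break
                prev_err = err
            terms.append(z)
            residual = residual - fit                        # next term is fitted to the residual

        def predict(Xnew):
            out = np.zeros(Xnew.shape[0])
            for z in terms:
                prod = np.ones(Xnew.shape[0])
                for i in range(M):
                    prod *= legendre_basis(Xnew[:, i], degree) @ z[i]
                out += prod
            return out

        return predict

A call such as predict = fit_canonical_lra(X_train, y_train, rank=3, degree=3) followed by y_hat = predict(X_test) yields surrogate predictions; in practice the rank and degree would be selected with the help of suitable error estimates, which is among the issues examined in the paper.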