Regression: warped bases and model selection by penalization and Lepski's method
Abstract
This paper deals with the problem of estimating a regression function $f$ in a random design framework. We build and study two adaptive estimators based on model selection, applied with warped bases. We start with a collection of finite-dimensional linear spaces spanned by orthonormal bases. Instead of expanding the target function $f$ directly on these bases, we consider the expansion of $h=f\circ G^{-1}$, where $G$ is the cumulative distribution function of the design, following Kerkyacharian and Picard (2004). The data-driven selection of the (best) space is carried out with two strategies: a penalized version of a "warped contrast", and a model selection device in the spirit of Goldenshluger and Lepski (2011). These methods yield two functions, $\hat{h}_l$ ($l=1,2$), which are easier to compute than least-squares estimators. We establish nonasymptotic mean-squared integrated risk bounds for the resulting estimators, $\hat{f}_l=\hat{h}_l\circ G$ if $G$ is known, or $\hat{f}_l=\hat{h}_l\circ\hat{G}$ ($l=1,2$) otherwise, where $\hat{G}$ is the empirical distribution function. We also study adaptivity when the regression function belongs to a Besov or Sobolev space, and compare the theoretical and practical performances of the two selection rules.
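To make the warped-basis strategy concrete, the sketch below builds a penalized estimator of the form $\hat{f}=\hat{h}\circ\hat{G}$ on a trigonometric basis, selecting the dimension by minimizing a penalized warped contrast. This is a minimal illustration under stated assumptions, not the authors' implementation: the basis choice, the penalty constant `kappa`, the crude variance proxy, and the helper names (`trig_basis`, `warped_penalized_estimator`) are illustrative.

```python
import numpy as np

def trig_basis(u, D):
    """First D functions of the orthonormal trigonometric basis on [0, 1]."""
    u = np.asarray(u, dtype=float)
    phi = np.empty((D, u.size))
    phi[0] = 1.0
    for j in range(1, D):
        k = (j + 1) // 2
        if j % 2 == 1:
            phi[j] = np.sqrt(2.0) * np.cos(2 * np.pi * k * u)
        else:
            phi[j] = np.sqrt(2.0) * np.sin(2 * np.pi * k * u)
    return phi  # shape (D, len(u))

def warped_penalized_estimator(X, Y, D_max=None, kappa=2.0):
    """Sketch of a penalized warped-basis estimator f_hat = h_hat o G_hat."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    n = len(Y)
    if D_max is None:
        D_max = max(1, int(np.sqrt(n)))
    # Empirical distribution function of the design, evaluated at the sample.
    order = np.argsort(X)
    G_hat = np.empty(n)
    G_hat[order] = np.arange(1, n + 1) / n
    # Coefficients a_hat_j = (1/n) * sum_i Y_i * phi_j(G_hat(X_i)).
    phi = trig_basis(G_hat, D_max)            # (D_max, n)
    a_hat = phi @ Y / n                        # (D_max,)
    # Penalized warped contrast: -sum_{j<=D} a_hat_j^2 + kappa * sigma2 * D / n.
    sigma2 = np.var(Y)                         # crude variance proxy (assumption)
    crit = [-np.sum(a_hat[:D] ** 2) + kappa * sigma2 * D / n
            for D in range(1, D_max + 1)]
    D_sel = int(np.argmin(crit)) + 1
    X_sorted = np.sort(X)

    def f_hat(x):
        # Warp new points through the empirical CDF, then evaluate h_hat.
        u = np.searchsorted(X_sorted, np.atleast_1d(x), side="right") / n
        return trig_basis(u, D_sel).T @ a_hat[:D_sel]

    return f_hat, D_sel
```

A usage example under the same assumptions: simulate `X = rng.uniform(size=500)`, `Y = np.sin(2 * np.pi * X) + 0.3 * rng.normal(size=500)`, call `f_hat, D_sel = warped_penalized_estimator(X, Y)`, and compare `f_hat(grid)` with the true regression function on a grid. The Goldenshluger-Lepski variant studied in the paper replaces the additive penalty with a comparison of estimators across pairs of dimensions; it is not sketched here.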
Origin: Files produced by the author(s)