Penalization versus Goldenshluger-Lepski strategies in warped bases regression
Abstract
This paper deals with the problem of estimating a regression function f, in a random design framework. We build and study two adaptive estimators based on model selection, applied with warped bases. We start with a collection of finite dimensional linear spaces, spanned by orthonormal bases. Instead of expanding the target function f directly on these bases, we rather consider the expansion of an intermediate function, the composition of f with the inverse of the cumulative distribution function of the design, following Kerkyacharian and Picard (2004). The data-driven selection of the (best) space is done with two strategies: a penalized version of a "warped contrast", and a model selection device in the spirit of Goldenshluger and Lepski (2011). Both methods yield estimators that are easier to compute than least-squares estimators. We establish nonasymptotic mean-squared integrated risk bounds for the resulting estimators. We also study adaptivity, in the case where the regression function belongs to a Besov or Sobolev space, and compare the theoretical and practical performances of the two selection rules.
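To make the construction concrete, here is a minimal sketch (not the paper's code) of the warped-basis approach with the two data-driven dimension choices described above. The trigonometric basis, the empirical c.d.f. used as a plug-in for the design distribution, the variance proxy, and the constant `kappa` are all illustrative assumptions; the paper's actual penalties and comparison terms are calibrated more carefully.

```python
import numpy as np

def phi(j, t):
    """Orthonormal trigonometric basis on [0, 1] (illustrative choice)."""
    if j == 0:
        return np.ones_like(t)
    k = (j + 1) // 2
    return (np.sqrt(2.0) * np.cos(2 * np.pi * k * t) if j % 2
            else np.sqrt(2.0) * np.sin(2 * np.pi * k * t))

def warped_coeffs(x, y, max_dim):
    """Coefficients of the warped function h = f o G^{-1}: plain empirical
    means, so no least-squares system has to be inverted."""
    xs = np.sort(x)
    u = np.searchsorted(xs, x, side="right") / len(x)  # empirical c.d.f. warp
    return np.array([np.mean(y * phi(j, u)) for j in range(max_dim)]), xs

def penalized_dim(a, n, sigma2, kappa=2.0):
    """Penalization: minimize the warped contrast plus a penalty ~ m/n."""
    crit = [-np.sum(a[:m] ** 2) + kappa * sigma2 * m / n
            for m in range(1, len(a) + 1)]
    return 1 + int(np.argmin(crit))

def gl_dim(a, n, sigma2, kappa=2.0):
    """Goldenshluger-Lepski-type device: minimize A(m) + V(m), where A(m)
    compares nested projection estimators of the warped function."""
    max_dim = len(a)
    V = lambda m: kappa * sigma2 * m / n
    def A(m):
        # Nested spaces: ||h_m' - h_{min(m, m')}||^2 = sum_{j=m}^{m'-1} a_j^2.
        return max(max(np.sum(a[m:mp] ** 2) - V(mp), 0.0)
                   for mp in range(m, max_dim + 1))
    return 1 + int(np.argmin([A(m) + V(m) for m in range(1, max_dim + 1)]))

def estimate(x, y, x_eval, max_dim=25, rule="pen"):
    """Warped estimator of f at the points x_eval."""
    a, xs = warped_coeffs(x, y, max_dim)
    sigma2 = np.var(y)  # rough proxy for the noise variance
    m = (penalized_dim if rule == "pen" else gl_dim)(a, len(x), sigma2)
    u_eval = np.searchsorted(xs, x_eval, side="right") / len(x)
    return sum(a[j] * phi(j, u_eval) for j in range(m))

# Toy usage on a non-uniform random design.
rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=500)
y = np.sin(4 * np.pi * x) + 0.3 * rng.standard_normal(500)
grid = np.linspace(0.01, 0.9, 50)
f_pen = estimate(x, y, grid, rule="pen")
f_gl = estimate(x, y, grid, rule="gl")
```

The sketch illustrates the computational point made in the abstract: since the basis is applied to the warped design points, each coefficient is a simple empirical mean, and both selection rules reduce to scanning partial sums of squared coefficients.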
Domains
Statistics [math.ST]

Origin: Files produced by the author(s)