Journal article in ESAIM: Probability and Statistics, 2013

Penalization versus Goldenshluger–Lepski strategies in warped bases regression

Gaëlle Chagny

Abstract

This paper deals with the problem of estimating a regression function f in a random design framework. We build and study two adaptive estimators based on model selection, applied with warped bases. We start with a collection of finite-dimensional linear spaces, spanned by orthonormal bases. Instead of expanding the target function f directly on these bases, we consider the expansion of an intermediate function: the composition of f with the inverse of the cumulative distribution function of the design, following Kerkyacharian and Picard (2004). The data-driven selection of the (best) space is done with two strategies: a penalized version of a "warped contrast", and a model selection device in the spirit of Goldenshluger and Lepski (2011). These methods yield two estimators that are easier to compute than least-squares estimators. We establish nonasymptotic integrated mean-squared risk bounds for the resulting estimators. We also study adaptivity when the regression function belongs to a Besov or Sobolev space, and compare the theoretical and practical performances of the two selection rules.
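To make the warped expansion and the two selection rules concrete, here is a minimal Python sketch, not the paper's exact procedure. It assumes a trigonometric basis on [0, 1], replaces the design distribution by the rank transform (empirical CDF), and uses placeholder penalty constants kappa and sigma2 where the paper works with calibrated constants; the function names (trig_basis, warped_coeffs, select_penalized, select_gl) are illustrative.

import numpy as np

def trig_basis(u, D):
    """First D functions of the trigonometric basis on [0, 1]."""
    # phi_0 = 1, phi_{2k-1}(u) = sqrt(2) cos(2 pi k u), phi_{2k}(u) = sqrt(2) sin(2 pi k u)
    out = np.empty((D, len(u)))
    out[0] = 1.0
    for j in range(1, D):
        k = (j + 1) // 2
        angle = 2.0 * np.pi * k * u
        out[j] = np.sqrt(2.0) * (np.cos(angle) if j % 2 == 1 else np.sin(angle))
    return out

def warped_coeffs(X, Y, D_max):
    """Warped empirical coefficients a_j = (1/n) sum_i Y_i phi_j(F_n(X_i))."""
    n = len(X)
    U = (np.argsort(np.argsort(X)) + 1) / n   # rank transform = empirical CDF at the data
    return trig_basis(U, D_max) @ Y / n       # plain averages: no matrix inversion

def select_penalized(a_hat, dims, n, kappa, sigma2):
    """Penalized warped contrast: minimize -||h_hat_m||^2 + pen(m), with pen(m) = kappa sigma2 D_m / n."""
    crit = [-np.sum(a_hat[:D] ** 2) + kappa * sigma2 * D / n for D in dims]
    return dims[int(np.argmin(crit))]

def select_gl(a_hat, dims, n, kappa, sigma2):
    """Goldenshluger-Lepski device: minimize A(m) + V(m) over the collection."""
    V = {D: kappa * sigma2 * D / n for D in dims}
    def A(D):
        # With nested spaces and an orthonormal basis, ||h_hat_{m'} - h_hat_{m}||^2
        # for a larger model m' is a tail sum of squared coefficients.
        return max((max(np.sum(a_hat[D:Dp] ** 2) - V[Dp], 0.0)
                    for Dp in dims if Dp > D), default=0.0)
    return min(dims, key=lambda D: A(D) + V[D])

# Toy usage: f(x) = sin(pi x), uniform design, noise s.d. 0.3.
rng = np.random.default_rng(0)
n = 500
X = rng.uniform(size=n)
Y = np.sin(np.pi * X) + 0.3 * rng.normal(size=n)
dims = list(range(1, 26))
a_hat = warped_coeffs(X, Y, max(dims))
D_pen = select_penalized(a_hat, dims, n, kappa=2.0, sigma2=0.09)
D_gl = select_gl(a_hat, dims, n, kappa=2.0, sigma2=0.09)
# Final estimator: f_hat(x) = sum_{j < D} a_hat[j] * phi_j(F_n(x)).

Because the warped coefficients are simple empirical means and the model comparison in the Goldenshluger-Lepski step reduces to tail sums of squared coefficients, no Gram matrix has to be inverted; this is the sense in which these estimators are easier to compute than least-squares estimators.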
Main file
ArticlRegRevision.pdf (510.3 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-02132877, version 1 (17-05-2019)

Identifiers

Cite

Gaëlle Chagny. Penalization versus Goldenshluger–Lepski strategies in warped bases regression. ESAIM: Probability and Statistics, 2013, 17, pp. 328-358. ⟨10.1051/ps/2011165⟩. ⟨hal-02132877⟩