A fast adaptive strategy for the estimation of a conditional density
Abstract
We consider the estimation of the conditional density $\pi$ of a response vector $Y$ given a continuous predictor $X$. We provide an adaptive nonparametric strategy based on model selection. Beginning with a collection of finite-dimensional product spaces spanned by orthonormal bases, we consider the expansion of $h(x,y)=\pi(F_X^{-1}(x),y)$, where $F_X$ is the cumulative distribution function of the variable $X$. Through this 'warping' of the bases, we propose a family of projection estimators that are easier to compute than estimators resulting from the minimization of a regression-type contrast. The selection of the best estimator $\hat{h}$ of the function $h$ is done with a device inspired by Goldenshluger and Lepski (2011). The estimator is $\hat{\pi}(x,y)=\hat{h}(\hat{F}(x),y)$, where $\hat{F}$ is the empirical distribution function of $X$. It realizes a global squared-bias/variance compromise over anisotropic function classes: we establish non-asymptotic bounds on the mean integrated squared risk and convergence rates for this risk. Simulation experiments illustrate the method.
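To make the 'warping' idea concrete, here is a minimal Python sketch (not the authors' code) of such a projection estimator with a trigonometric product basis: after plugging the empirical distribution function into the first coordinate, the projection coefficients reduce to plain empirical means of $\varphi_j(\hat{F}(X_i))\psi_k(Y_i)$, which is what makes these estimators cheaper to compute than contrast-minimization ones. The basis choice, the rescaling of $Y$ to a compact interval, and all function names are illustrative assumptions; the Goldenshluger-Lepski selection of the dimensions $(m_1, m_2)$ is not shown.

```python
import numpy as np

def warped_projection_estimator(X, Y, m1, m2, y_min, y_max):
    """Illustrative warped-basis projection estimator of h(u, y) = pi(F_X^{-1}(u), y).

    Assumes Y is supported in [y_min, y_max]; uses an orthonormal trigonometric
    basis on [0, 1] in each direction, with m1 (resp. m2) basis functions.
    """
    n = len(X)
    X_sorted = np.sort(X)
    # Empirical distribution function of X, evaluated at the sample points.
    F_hat = np.searchsorted(X_sorted, X, side="right") / n
    # Rescale Y to [0, 1] so the same basis can be reused in the y-direction.
    Y_scaled = (Y - y_min) / (y_max - y_min)

    def trig_basis(u, m):
        # Orthonormal trigonometric basis on [0, 1]:
        # 1, sqrt(2)cos(2*pi*j*u), sqrt(2)sin(2*pi*j*u), ...
        funcs = [np.ones_like(u)]
        j = 1
        while len(funcs) < m:
            funcs.append(np.sqrt(2) * np.cos(2 * np.pi * j * u))
            if len(funcs) < m:
                funcs.append(np.sqrt(2) * np.sin(2 * np.pi * j * u))
            j += 1
        return np.vstack(funcs)               # shape (m, len(u))

    Phi = trig_basis(F_hat, m1)               # basis evaluated at warped predictors
    Psi = trig_basis(Y_scaled, m2)             # basis evaluated at rescaled responses

    # Projection coefficients are empirical means: no matrix inversion needed.
    A = Phi @ Psi.T / n                        # shape (m1, m2)

    def pi_hat(x, y):
        # hat{pi}(x, y) = hat{h}(hat{F}(x), y): plug in the empirical CDF of X,
        # then undo the rescaling of Y (Jacobian factor 1 / (y_max - y_min)).
        u = np.atleast_1d(np.searchsorted(X_sorted, x, side="right") / n)
        y_s = np.atleast_1d((np.asarray(y) - y_min) / (y_max - y_min))
        h_val = trig_basis(u, m1).T @ A @ trig_basis(y_s, m2)
        return h_val / (y_max - y_min)         # grid of values, shape (len(x), len(y))

    return pi_hat
```

As a usage sketch, with simulated data such as $X \sim \mathcal{U}[0,1]$ and $Y \mid X = x \sim \mathcal{N}(x, 0.1^2)$, calling `warped_projection_estimator(X, Y, 8, 8, -1.0, 2.0)` returns a function that evaluates the estimated conditional density on a grid of $(x, y)$ values, for one fixed pair of dimensions $(m_1, m_2)$.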