Adaptive functional linear regression
Abstract
We consider the estimation of the slope function in functional linear regression, where scalar responses are modeled as depending on random functions. Cardot and Johannes (2010) have shown that a thresholded projection estimator can attain minimax rates of convergence, up to a constant, in a general framework that covers the prediction problem with respect to the mean squared prediction error as well as the estimation of the slope function and of its derivatives. This estimation procedure, however, requires an optimal choice of a tuning parameter with regard to certain characteristics of the slope function and of the covariance operator associated with the functional regressor. As this information is usually inaccessible in practice, we investigate a fully data-driven choice of the tuning parameter that combines model selection and Lepski's method, inspired by the recent work of Goldenshluger and Lepski (2011). The tuning parameter is selected as the minimizer of a stochastic penalized contrast function imitating Lepski's method, over a random collection of admissible values. This choice of the tuning parameter depends only on the data, and we show that, within the general framework, the resulting data-driven thresholded projection estimator attains minimax rates up to a constant over a variety of classes of slope functions and covariance operators. The results are illustrated for different configurations covering, in particular, the prediction problem as well as the estimation of the slope and of its derivatives.
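To fix ideas, a Goldenshluger–Lepski type selection rule of the kind described above can be sketched as follows; the notation ($\widehat\beta_m$, $\widehat{\mathcal M}$, $\operatorname{pen}$) is an illustrative placeholder and is not taken from the paper:
\[
\widehat m \;=\; \arg\min_{m \in \widehat{\mathcal M}} \Big\{ \max_{m' \in \widehat{\mathcal M}} \big[ \|\widehat\beta_{m'} - \widehat\beta_{m \wedge m'}\|^2 - \operatorname{pen}(m') \big]_+ \;+\; \operatorname{pen}(m) \Big\},
\]
where $\widehat\beta_m$ stands for a thresholded projection estimator with tuning parameter $m$, $\widehat{\mathcal M}$ for a random collection of admissible values, and $\operatorname{pen}(\cdot)$ for a stochastic penalty term.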