Nonparametric Estimation of the Regression Function in an Errors-in-Variables Model
Abstract
We consider the errors-in-variables regression model in which we observe $n$ i.i.d. copies of $(Y,Z)$ satisfying $Y=f(X)+\xi, \; Z=X+\sigma\varepsilon$, involving independent and unobserved random variables $X,\xi,\varepsilon$. The density $g$ of $X$ is unknown, whereas the density of $\sigma\varepsilon$ is completely known. Using the observations $(Y_i, Z_i)$, $i=1,\dots,n$, we propose an estimator of the regression function $f$, built as the ratio of two penalized minimum contrast estimators of $\ell=fg$ and $g$, without any prior knowledge of their smoothness. We prove that its $\mathbb{L}_2$-risk on a compact set is bounded by the sum of the two $\mathbb{L}_2(\mathbb{R})$-risks of the estimators of $\ell$ and $g$, and we give the rates of convergence of these estimators over various smoothness classes for $\ell$ and $g$, when the errors $\varepsilon$ are either ordinary smooth or super smooth. The resulting rate is optimal in the minimax sense in all cases where lower bounds are available.
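As a brief sketch of why $g$ and $\ell=fg$ are estimable from the observed pairs even though $X$ is hidden, one may note the standard deconvolution identities (not spelled out in this abstract, and additionally assuming the usual centering $\mathbb{E}[\xi]=0$); writing $u^*(t)=\int e^{itx}u(x)\,dx$ for the Fourier transform and $f_{\sigma\varepsilon}$ for the known density of $\sigma\varepsilon$, independence gives
$$\mathbb{E}\big[e^{itZ}\big]=g^*(t)\,f_{\sigma\varepsilon}^*(t), \qquad \mathbb{E}\big[Y\,e^{itZ}\big]=\ell^*(t)\,f_{\sigma\varepsilon}^*(t).$$
Hence $g^*$ and $\ell^*$ are identified wherever $f_{\sigma\varepsilon}^*$ does not vanish, and the attainable rates are governed by the decay of $f_{\sigma\varepsilon}^*(t)$ as $|t|\to\infty$: polynomial decay for ordinary smooth errors, exponential decay for super smooth errors.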