Noisy Quantization: theory and practice - HAL Open Archive
Preprint, Working Paper, Year: 2014

Noisy Quantization: theory and practice

Abstract

The effect of errors-in-variables in quantization is investigated. Given a noisy sample $Z_i = X_i + \epsilon_i$, $i=1,\ldots,n$, where $(X_i)_{i=1,\ldots,n}$ are i.i.d. with law $P$, we want to find the best approximation of the probability distribution $P$ with $k\geq 1$ points called codepoints. We prove general excess risk bounds with fast rates for an empirical minimization based on a deconvolution kernel estimator. These rates depend on the behaviour of the density of $P$ and on the asymptotic behaviour of the characteristic function of the noise $\epsilon$. This general study can be applied to the problem of $k$-means clustering with noisy data. For this purpose, we introduce a deconvolution $k$-means stochastic minimization which reaches fast rates of convergence under Pollard's standard regularity assumptions. We also introduce a new algorithm to deal with $k$-means clustering with errors in variables. Following the theoretical study, the algorithm mixes tools from the inverse problem literature and the machine learning community. Roughly speaking, it is based on a two-step procedure: (1) a deconvolution step to deal with noisy inputs, and (2) Newton-type iterations as in the popular $k$-means algorithm.
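To make the two-step procedure concrete, here is a minimal 1-D sketch in Python of a "deconvolution, then weighted Lloyd iterations" scheme. It assumes Gaussian noise with known standard deviation and a sinc (Fourier-cutoff) deconvolution kernel; the helper names (deconvolution_kde, weighted_kmeans) and all parameter choices are illustrative only and do not reproduce the authors' implementation from the paper.

import numpy as np

def deconvolution_kde(z, sigma_noise, bandwidth, grid):
    """Deconvolution kernel density estimate of the signal density on `grid`.

    Uses the sinc kernel, whose Fourier transform is 1 on [-1, 1], so the
    estimate is the inverse Fourier transform of the empirical characteristic
    function of Z divided by the (assumed known) noise characteristic function.
    """
    h = bandwidth
    t = np.linspace(-1.0, 1.0, 401)                 # Fourier support of the sinc kernel
    freq = t / h                                    # effective frequencies
    phi_eps = np.exp(-0.5 * (sigma_noise * freq) ** 2)   # Gaussian noise char. function
    ecf = np.mean(np.exp(1j * np.outer(freq, z)), axis=1)  # empirical char. function of Z
    integrand = ecf / phi_eps
    # Inverse Fourier transform on the grid (trapezoidal quadrature).
    est = np.trapz(np.exp(-1j * np.outer(grid, freq)) * integrand[None, :],
                   freq, axis=1).real / (2 * np.pi)
    return np.clip(est, 0.0, None)                  # clip small negative values

def weighted_kmeans(grid, weights, k, n_iter=100, seed=0):
    """Lloyd-style iterations on grid points weighted by the estimated density."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(grid, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(grid[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            mask = labels == j
            if weights[mask].sum() > 0:
                centers[j] = np.average(grid[mask], weights=weights[mask])
    return np.sort(centers)

# Toy usage (illustrative data): a two-component signal observed with Gaussian noise.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.3, 500), rng.normal(2, 0.3, 500)])
z = x + rng.normal(0, 0.5, size=x.shape)            # noisy observations Z = X + eps
grid = np.linspace(-5, 5, 400)
f_hat = deconvolution_kde(z, sigma_noise=0.5, bandwidth=0.4, grid=grid)
print(weighted_kmeans(grid, f_hat, k=2))            # expected: codepoints near -2 and 2

The design mirrors the abstract's decomposition: the deconvolution step replaces the noisy empirical measure by an estimate of the signal density, and the clustering step then runs standard k-means-type updates against that estimate rather than against the raw noisy points.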
Main file: jmva.pdf (655.44 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01060380, version 1 (10-09-2014)

Identifiers

  • HAL Id: hal-01060380, version 1

Cite

Camille Brunet, Sébastien Loustau. Noisy Quantization: theory and practice. 2014. ⟨hal-01060380⟩
