Distance Measure Machines
Abstract
This paper presents a distance-based discriminative framework for learning with probability distributions. Instead of using kernel
mean embeddings or generalized radial basis kernels, we introduce embeddings based on the dissimilarity of distributions to a set of
reference distributions denoted as templates. Our framework extends the theory of similarity of \citet{balcan2008theory} to the
population distribution case, and we show that, for some learning problems, certain dissimilarity measures on distributions achieve
low-error linear decision functions with high probability. Our key result is to prove that the theory also holds for empirical
distributions. Algorithmically, the proposed approach consists of computing an embedding based on pairwise dissimilarities, on which
learning a linear decision function is straightforward. Our experimental results show that the Wasserstein distance embedding performs
better than kernel mean embeddings, and that computing Wasserstein distances is far more tractable than estimating pairwise
Kullback-Leibler divergences between empirical distributions.
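
To illustrate the pipeline the abstract describes, here is a minimal sketch of a Wasserstein distance embedding followed by a linear classifier. This is not the authors' implementation: the use of the POT library, the uniform sample weights, the squared Euclidean ground cost, and all names (`wasserstein_embedding`, the toy bags) are assumptions made for this example.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)
from sklearn.linear_model import LogisticRegression

def wasserstein_embedding(bags, templates):
    """Embed each empirical distribution (a bag of samples) as the vector
    of its optimal transport costs to a set of template distributions."""
    Z = np.empty((len(bags), len(templates)))
    for i, X in enumerate(bags):
        a = np.full(len(X), 1.0 / len(X))        # uniform weights on samples
        for j, T in enumerate(templates):
            b = np.full(len(T), 1.0 / len(T))
            M = ot.dist(X, T)                    # squared Euclidean ground cost
            Z[i, j] = ot.emd2(a, b, M)           # exact optimal transport cost
    return Z

# Hypothetical usage: two classes of 2-D point clouds, with templates
# taken from the training bags as the reference distributions.
rng = np.random.default_rng(0)
bags = [rng.normal(loc=c, size=(60, 2)) for c in (0, 0, 0, 3, 3, 3)]
labels = np.array([0, 0, 0, 1, 1, 1])
templates = [bags[0], bags[3]]                   # reference distributions
Z = wasserstein_embedding(bags, templates)       # n_bags x n_templates features
clf = LogisticRegression().fit(Z, labels)        # linear decision function
```

The embedding matrix `Z` plays the role of the pairwise-dissimilarity mapping: each distribution becomes a fixed-length vector of distances to the templates, so any standard linear learner can be applied on top of it.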