Theoretical Analysis of Domain Adaptation with Optimal Transport
Abstract
Domain adaptation (DA) is an important and emerging field of machine learning that tackles the problem that occurs when the distributions of the training (source domain) and test (target domain) data are similar but different. This learning paradigm is of vital importance for future advances, as it allows a learner to generalize knowledge across different tasks. Current theoretical results show that the efficiency of DA algorithms depends on their ability to minimize the divergence between the source and target probability distributions. In this paper, we provide a theoretical study of the advantages that concepts borrowed from optimal transportation theory [17] can bring to DA. In particular, we show that the Wasserstein metric can be used as a divergence measure between distributions to obtain generalization guarantees for three different learning settings: (i) classic DA with unsupervised target data; (ii) DA combining source and target labeled data; (iii) multiple-source DA. Based on the obtained results, we motivate the use of regularized optimal transport and provide some algorithmic insights for multi-source domain adaptation. We also show when this theoretical analysis leads to tighter inequalities than those of other existing frameworks. We believe that these results open the door to novel ideas and directions for DA.
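As a minimal illustration of the quantities the abstract refers to (not the paper's own code), the sketch below estimates both the exact and the entropy-regularized optimal transport cost between empirical source and target samples. It assumes the POT library (`pip install pot`); the sample data and regularization value are arbitrary choices for demonstration.

```python
# Hedged sketch: empirical Wasserstein divergence between source and
# target distributions, using the POT library (an assumption here).
import numpy as np
import ot  # Python Optimal Transport, https://pythonot.github.io/

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(100, 2))  # source-domain samples
xt = rng.normal(0.5, 1.0, size=(100, 2))  # shifted target-domain samples

a = np.full(100, 1.0 / 100)  # uniform weights on source samples
b = np.full(100, 1.0 / 100)  # uniform weights on target samples

M = ot.dist(xs, xt)  # pairwise squared-Euclidean cost matrix

w_exact = ot.emd2(a, b, M)              # exact OT cost (squared W2 estimate)
w_reg = ot.sinkhorn2(a, b, M, reg=0.1)  # entropy-regularized OT cost

print(f"exact OT cost: {w_exact:.4f}, regularized: {w_reg:.4f}")
```

The regularized variant (Sinkhorn) is the computationally cheaper surrogate whose use the paper's bounds motivate; the regularization strength `reg=0.1` is a placeholder, not a value from the paper.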
Domains
Machine Learning [cs.LG]

Origin: Files produced by the author(s)