Journal article in Neurocomputing, Year: 2022

Interpretable Domain Adaptation Using Unsupervised Feature Selection on Pre-trained Source Models

Abstract

We study a realistic domain adaptation setting in which one has access to an already existing "black-box" machine learning model. Indeed, in real-life scenarios, an efficient pre-trained source-domain predictive model is often available and must be preserved. Our work extends a recently proposed method that tackles this specific problem while providing an interpretable target-to-source transformation, obtained by seeking a coordinate-wise adaptation of the feature space. However, that method requires partially labeled target data to select the features to be adapted. In contrast, we address the more challenging unsupervised version of this domain adaptation scenario. We propose a new pseudo-label estimator over unlabeled target examples, based on rank stability with respect to the source model predictions. These estimated "labels" are then used in a feature selection process to assess whether each feature needs to be transformed to achieve adaptation. We provide theoretical foundations of our method as well as an efficient implementation. Numerical experiments on real datasets show particularly encouraging results, since they approach the supervised case, where one has access to labeled target samples.
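The abstract describes a pipeline shape: pseudo-label unlabeled target points with a frozen source model, select the features whose distributions have shifted, and adapt only those features coordinate-wise. The Python sketch below illustrates that overall shape under stated assumptions only; the paper's rank-stability pseudo-label estimator is replaced here by a plain plug-in prediction, and the selection rule (a Kolmogorov-Smirnov test) and the quantile-mapping transformation are illustrative stand-ins, not the authors' method. The `source_model` object is assumed to expose a scikit-learn-style `predict`.

```python
# Minimal sketch of coordinate-wise target-to-source adaptation around a
# frozen ("black-box") source model. All design choices below are
# illustrative assumptions, not the estimator proposed in the paper.
import numpy as np
from scipy.stats import ks_2samp


def quantile_map(x_target, x_source):
    """Map one target feature onto the source marginal via quantile matching."""
    ranks = np.searchsorted(np.sort(x_target), x_target, side="right") / len(x_target)
    return np.quantile(x_source, np.clip(ranks, 0.0, 1.0))


def adapt_target(source_model, X_source, X_target, alpha=0.05):
    """Adapt selected target features coordinate-wise, then pseudo-label them."""
    X_adapted = X_target.copy().astype(float)
    for j in range(X_target.shape[1]):
        # Feature selection stand-in: adapt feature j only if its marginal
        # differs between source and target (two-sample KS test).
        if ks_2samp(X_source[:, j], X_target[:, j]).pvalue < alpha:
            X_adapted[:, j] = quantile_map(X_target[:, j], X_source[:, j])
    # Pseudo-labels: predictions of the untouched source model on adapted data
    # (a crude surrogate for the paper's rank-stability estimator).
    return X_adapted, source_model.predict(X_adapted)
```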
Main file: preprint.pdf (1.39 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03325509 , version 1 (24-08-2021)
hal-03325509 , version 2 (20-09-2022)

Identifiers

  • HAL Id : hal-03325509 , version 2

Cite

Luxin Zhang, Pascal Germain, Yacine Kessaci, Christophe Biernacki. Interpretable Domain Adaptation Using Unsupervised Feature Selection on Pre-trained Source Models. Neurocomputing, 2022, 511, pp.319-336. ⟨hal-03325509v2⟩
