CLIP-QDA: An Explainable Concept Bottleneck Model - Archive ouverte HAL
Journal article, Transactions on Machine Learning Research Journal, Year: 2024

CLIP-QDA: An Explainable Concept Bottleneck Model

Abstract

In this paper, we introduce an explainable algorithm built on a multi-modal foundation model that performs fast and explainable image classification. Drawing inspiration from CLIP-based Concept Bottleneck Models (CBMs), our method creates a latent space where each neuron is linked to a specific word. Observing that this latent space can be modeled with simple distributions, we use a Mixture of Gaussians (MoG) formalism to enhance its interpretability. We then introduce CLIP-QDA, a classifier that uses only statistical values to infer labels from the concepts. In addition, this formalism allows for both local and global explanations. Because these explanations come from the inner design of our architecture, our work is part of a new family of greybox models, combining the performance of opaque foundation models with the interpretability of transparent models. Our empirical findings show that in instances where the MoG assumption holds, CLIP-QDA achieves accuracy similar to that of state-of-the-art CBMs. Our explanations compete with existing XAI methods while being faster to compute.
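As a rough illustration of the pipeline the abstract describes (CLIP-derived concept scores followed by a Gaussian-per-class classifier), the sketch below fits scikit-learn's QuadraticDiscriminantAnalysis on simulated concept activations. The random data, dimensions, and variable names are placeholders standing in for real CLIP image-text similarities, not the authors' code or experimental setup.

```python
# Minimal sketch (not the authors' implementation): a concept-bottleneck
# classifier using Quadratic Discriminant Analysis, in the spirit of CLIP-QDA.
# In practice, concept_scores[i, j] would be the cosine similarity between the
# CLIP embedding of image i and the CLIP text embedding of concept word j;
# here they are simulated so the example runs stand-alone.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n_train, n_concepts, n_classes = 500, 32, 3

# Placeholder for CLIP-derived concept activations and class labels.
concept_scores = rng.normal(size=(n_train, n_concepts))
labels = rng.integers(0, n_classes, size=n_train)

# QDA fits one Gaussian (mean and full covariance) per class, which matches
# the Mixture-of-Gaussians view of the concept latent space.
qda = QuadraticDiscriminantAnalysis(store_covariance=True)
qda.fit(concept_scores, labels)

# Inference relies only on per-class statistics (means, covariances, priors).
test_scores = rng.normal(size=(5, n_concepts))
print(qda.predict(test_scores))        # predicted labels
print(qda.predict_proba(test_scores))  # class posteriors
```

In this view, local explanations can be read from how each concept dimension shifts a sample toward one class Gaussian or another, and global explanations from the fitted per-class means and covariances.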
Main file
Modeling_CLIP_latent_space_TMLR_v2__Version_TMLR_.pdf (16.3 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04605490, version 1 (07-06-2024)

Identifiers

  • HAL Id: hal-04605490, version 1

Cite

Rémi Kazmierczak, Eloïse Berthier, Goran Frehse, Gianni Franchi. CLIP-QDA: An Explainable Concept Bottleneck Model. Transactions on Machine Learning Research Journal, 2024. ⟨hal-04605490⟩

Collections

GENCI IP_PARIS
