Multi-Channel Causal Variational Autoencoder
Abstract
The multimodal nature of clinical assessment and decision-making, together with the high rate of healthcare data generation, motivates the development of approaches specifically adapted to the analysis of these complex and potentially high-dimensional multimodal datasets. This poses both technical and conceptual problems: how can such heterogeneous data be analyzed jointly? How can modality-specific information be distinguished from shared information? Variational autoencoders (VAEs) offer a robust framework for learning latent representations of complex data distributions, are flexible enough to accommodate different data types and structures, and have already been applied successfully to the latent disentanglement of multimodal data. Identifying causal relationships between the available modalities, beyond simple statistical associations, could provide valuable and actionable insights, but conventional causal discovery techniques suffer from the curse of dimensionality. To address these issues, we propose the Multi-Channel Causal VAE (MC²VAE), a causal disentanglement approach for multi-channel data that jointly learns modality-specific latent representations from a multi-channel dataset and identifies a linear causal structure between the latent variables. Each modality is projected into its own latent space, where a causal discovery step is integrated to learn the hidden causal graph. Finally, the decoder takes the discovered graph into account to reconstruct the data. We formally derive MC²VAE and the optimization strategy for its parameters. Experiments on synthetically generated datasets demonstrate the ability of our model to recover ground-truth hidden causal relationships, opening a viable avenue for actionable interventions on multi-channel systems.
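As a concrete illustration of the pipeline the abstract describes, the sketch below implements its three steps in PyTorch: per-modality encoders, a linear causal layer over the stacked latents, and per-modality decoders. All names and design choices here are assumptions made for illustration (the class MC2VAESketch, the MLP sizes, the linear-SEM parameterization z = (I - Aᵀ)⁻¹ε, and the NOTEARS-style acyclicity penalty); it is a minimal sketch, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class MC2VAESketch(nn.Module):
    """Illustrative multi-channel VAE with a linear causal layer over the
    per-modality latents. The architecture is an assumption for the sake
    of the example, not the paper's implementation."""

    def __init__(self, channel_dims, latent_dim=4, hidden=64):
        super().__init__()
        self.n_channels = len(channel_dims)
        self.latent_dim = latent_dim
        # One encoder and one decoder per modality (channel).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * latent_dim))
            for d in channel_dims
        )
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, d))
            for d in channel_dims
        )
        # Learnable weighted adjacency over the stacked channel latents.
        k = self.n_channels * latent_dim
        self.A = nn.Parameter(torch.zeros(k, k))

    def causal_layer(self, eps):
        # Linear SEM: z = A^T z + eps, i.e. z = (I - A^T)^{-1} eps.
        k = eps.shape[-1]
        eye = torch.eye(k, device=eps.device)
        return torch.linalg.solve(eye - self.A.T, eps.T).T

    def forward(self, xs):
        # Encode each channel into its own Gaussian posterior.
        stats = [enc(x).chunk(2, dim=-1) for x, enc in zip(xs, self.encoders)]
        mu = torch.cat([m for m, _ in stats], dim=-1)
        logvar = torch.cat([lv for _, lv in stats], dim=-1)
        eps = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z = self.causal_layer(eps)             # propagate through the graph
        zs = z.chunk(self.n_channels, dim=-1)  # split back per channel
        recons = [dec(zc) for dec, zc in zip(self.decoders, zs)]
        return recons, mu, logvar


def acyclicity_penalty(A):
    # NOTEARS-style constraint: h(A) = tr(exp(A * A)) - k is zero iff A is a DAG.
    return torch.trace(torch.linalg.matrix_exp(A * A)) - A.shape[0]
```

Under these assumptions, training would minimize the usual ELBO terms (per-channel reconstruction loss plus KL divergence) together with a weighted acyclicity_penalty(model.A), for instance via an augmented-Lagrangian schedule as in NOTEARS, so that the learned adjacency converges to a directed acyclic graph.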