Conference paper · Year: 2023

Bringing Explainability to Autoencoding Neural Networks Encoding Aircraft Trajectories

Abstract

Autoencoders, a class of neural networks, have emerged as a valuable tool for anomaly detection and trajectory clustering: they produce a compressed latent space that captures the essential features of the data. However, their lack of interpretability poses challenges in the context of air traffic management (ATM), where clear-cut explanations are crucial. In this paper, we investigate this issue by exploring visual methods that enhance the interpretability of autoencoders applied to aircraft trajectory data. We propose techniques to extract meaningful information from the structure of the latent space and to promote a better understanding of the behaviour of generative models. We present insights from two datasets, one simplified and one real-world, and evaluate the structure of the latent spaces of the corresponding autoencoders. Furthermore, we introduce suggestions for more realistic trajectory generation based on Variational Autoencoders (VAEs). This study offers valuable recommendations to developers in the field of ATM, fostering improved interpretability, and thus safety, for generative AI in air traffic management.
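
To make the setup described in the abstract concrete, below is a minimal, illustrative sketch of a variational autoencoder over fixed-length aircraft trajectories, written in Python with PyTorch. It is not the paper's implementation: the class name, the architecture, the feature choice (latitude, longitude, altitude, ground speed) and all hyperparameters are assumptions made for demonstration only.

    # Illustrative sketch only: a minimal VAE over fixed-length aircraft
    # trajectories, flattened to vectors. All names and hyperparameters are
    # assumptions for demonstration; they are not taken from the paper.
    import torch
    import torch.nn as nn

    class TrajectoryVAE(nn.Module):
        def __init__(self, seq_len=64, n_features=4, latent_dim=8):
            super().__init__()
            # e.g. n_features = (latitude, longitude, altitude, ground speed)
            input_dim = seq_len * n_features
            self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
            self.to_mu = nn.Linear(128, latent_dim)
            self.to_logvar = nn.Linear(128, latent_dim)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim),
            )

        def forward(self, x):
            # x: (batch, seq_len, n_features)
            h = self.encoder(x.flatten(1))
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            # Reparameterisation trick: sample z while keeping gradients.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(z).view_as(x), mu, logvar

    def vae_loss(x, x_hat, mu, logvar):
        # Reconstruction error plus KL divergence to the standard normal prior.
        recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    # Usage with synthetic data:
    x = torch.randn(32, 64, 4)
    model = TrajectoryVAE()
    x_hat, mu, logvar = model(x)
    loss = vae_loss(x, x_hat, mu, logvar)

Inspecting mu for a set of trajectories is the usual entry point for the kind of latent-space visualisation the paper discusses; sampling z from the prior and decoding it yields generated trajectories.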
Main file
SIDs_2023_paper_39-final.pdf (1.37 MB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-04633736, version 1 (03-07-2024)

Identifiers

  • HAL Id: hal-04633736, version 1

Cite

Zakaria Ezzahed, Antoine Chevrot, Christophe Hurter, Xavier Olive. Bringing Explainability to Autoencoding Neural Networks Encoding Aircraft Trajectories. 13th SESAR Innovation Days (SIDS 2023), Nov 2023, Seville, Spain. ISSN 0770-1268. ⟨hal-04633736⟩