Conference Poster, Year: 2020

Abstract interpretation for neural networks verification

Abstract

In this poster we present a theoretical formalization of the abstract operator of the Leaky ReLU function used as an activation function in a deep neural network. This work is part of the verification of the robustness of neural networks by abstract interpretation. To validate our formulation, we present our implementation of this operator within one of the most powerful tools currently available for neural network verification, the ETH Robustness Analyzer for Neural Networks (ERAN). This experimental validation was performed on a sample of the well-known MNIST database, a set of images of handwritten digits.
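As a purely illustrative sketch (an assumption, not the formulation presented in the poster, which is not reproduced here), the code below shows an abstract transformer for Leaky ReLU over the simple interval domain: because Leaky ReLU is monotonically increasing for any slope alpha > 0, an input interval [l, u] is soundly mapped to [LeakyReLU(l), LeakyReLU(u)].

```python
# Illustrative sketch only (interval domain); not the abstract operator
# defined in the poster, whose formulation is not deposited with this record.

def leaky_relu(x: float, alpha: float = 0.01) -> float:
    """Concrete Leaky ReLU activation."""
    return x if x >= 0.0 else alpha * x

def abstract_leaky_relu(l: float, u: float, alpha: float = 0.01) -> tuple[float, float]:
    """Interval abstract transformer: maps pre-activation bounds [l, u]
    to sound post-activation bounds, using the monotonicity of Leaky ReLU."""
    assert l <= u, "lower bound must not exceed upper bound"
    return leaky_relu(l, alpha), leaky_relu(u, alpha)

# Example: a neuron whose pre-activation is known to lie in [-2.0, 3.0]
print(abstract_leaky_relu(-2.0, 3.0))  # -> (-0.02, 3.0)
```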
No file deposited

Dates and versions

hal-03878080, version 1 (29-11-2022)

Identifiers

Cite

Omar El Mellouki, Mohamed Ibn Khedher, Mounim El Yacoubi. Abstract interpretation for neural networks verification. DataIA Workshop « Safety & AI » (2020), Sep 2020, Gif-sur-Yvette, France. 2020, ⟨10.13140/RG.2.2.33268.78720⟩. ⟨hal-03878080⟩