Abstract interpretation for neural network verification
Abstract
In this poster we present a theoretical formalization of the abstract operator for the Leaky ReLU activation function in deep neural networks. This work is part of a broader effort to verify the robustness of neural networks through abstract interpretation. To validate our formulation, we implemented this operator in the ETH Robustness Analyzer for Neural Networks (ERAN), a state-of-the-art tool for neural network verification. The experimental validation was performed on a sample of the widely used MNIST dataset, a collection of images of handwritten digits.
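As a minimal illustration of what such an abstract operator involves, consider the simple interval domain; this is only a sketch under that simplifying assumption, not the formalization presented in the poster, which targets ERAN's more precise abstract domains. For a leakage coefficient $0 < \alpha < 1$ (a symbol introduced here for illustration; $\alpha = 0.01$ is a common choice), the concrete function is
\[
\mathrm{LeakyReLU}_\alpha(x) =
\begin{cases}
x & \text{if } x \ge 0,\\
\alpha x & \text{if } x < 0,
\end{cases}
\]
and a sound interval transformer on an input range $[l, u]$ is
\[
\mathrm{LeakyReLU}_\alpha^{\#}([l, u]) =
\begin{cases}
[l, u] & \text{if } l \ge 0,\\
[\alpha l, \alpha u] & \text{if } u \le 0,\\
[\alpha l, u] & \text{if } l < 0 < u.
\end{cases}
\]
In the mixed case $l < 0 < u$, the lower bound $\alpha l$ and upper bound $u$ follow from the monotonicity of the function on each side of zero; relational domains refine this case further by tracking linear constraints between neurons.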