Deep networks with ReLU activation functions can be smooth statistical models
Abstract
Most deep neural networks use ReLU activation functions. Since these functions are not differentiable at 0, one might expect such models to behave irregularly. In this paper, we show that the issue lies more in the data than in the model: if the data are "smooth", the model is differentiable in a suitable sense. We give a striking illustration of this fact with the example of adversarial attacks.