Conference Papers, Year: 2023

On the complexity of nonsmooth automatic differentiation

Abstract

We provide a simple model to estimate the computational costs of the backward and forward modes of algorithmic differentiation for a wide class of nonsmooth programs. Prominent examples are the famous relu and convolutional neural networks together with their standard loss functions. Using the recent notion of conservative gradients, we then establish a "nonsmooth cheap gradient principle" for backpropagation encompassing most concrete applications. Nonsmooth backpropagation's cheapness contrasts with concurrent forward approaches, which, to date, have dimension-dependent worst-case estimates. To understand this class of methods, we relate the complexity of computing a large number of directional derivatives to that of matrix multiplication. This shows a fundamental limitation on improving forward AD for that task. Finally, while the fastest algorithms for computing a Clarke subgradient are linear in the dimension, it appears that computing two distinct Clarke (resp. lexicographic) subgradients for simple neural networks is NP-hard.
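The abstract contrasts the cost of backward (reverse) and forward modes of AD on relu programs. The minimal JAX sketch below is not from the paper; the network, weights, and shapes are purely illustrative. It shows the contrast concretely: jax.grad returns the full gradient in a single backward pass (the "cheap gradient"), whereas forward mode delivers one directional derivative per call, so recovering the full gradient costs one pass per input coordinate.

import jax
import jax.numpy as jnp

# Illustrative weights for a tiny relu network (hypothetical, fixed for the sketch).
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
W1 = jax.random.normal(k1, (8, 16))
W2 = jax.random.normal(k2, (1, 8))

def relu_net_loss(x):
    # Tiny relu network followed by a squared loss.
    h = jax.nn.relu(W1 @ x)
    return jnp.sum((W2 @ h) ** 2)

x = jnp.linspace(-1.0, 1.0, 16)

# Reverse mode (backpropagation): one backward pass yields the whole gradient,
# at a cost proportional to a few function evaluations.
g_backward = jax.grad(relu_net_loss)(x)

# Forward mode: each jvp gives one directional derivative, so the full gradient
# requires dim(x) = 16 passes -- the dimension-dependent cost mentioned above.
basis = jnp.eye(x.shape[0])
g_forward = jnp.stack([jax.jvp(relu_net_loss, (x,), (e,))[1] for e in basis])

print(jnp.allclose(g_backward, g_forward))  # agrees wherever the loss is differentiable

At points of nondifferentiability (relu kinks), the two modes may return different elements of a conservative gradient, which is precisely the setting the paper's conservative-gradient analysis addresses.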
Main file: main.pdf (432.75 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03683640 , version 1 (31-05-2022)
hal-03683640 , version 2 (30-01-2023)
hal-03683640 , version 3 (01-02-2023)

Identifiers

  • HAL Id: hal-03683640, version 3

Cite

Jérôme Bolte, Ryan Boustany, Edouard Pauwels, Béatrice Pesquet-Popescu. On the complexity of nonsmooth automatic differentiation. International Conference on Learning Representations (ICLR 2023), May 2023, Kigali, Rwanda. ⟨hal-03683640v3⟩
