Deep Sturm–Liouville: Learnable orthogonal basis functions parameterized by neural networks
Abstract
We introduce $\textit{Deep Sturm-Liouville}$ (DSL), a novel function approximator obtained by integrating the Sturm-Liouville theorem (SLT) into the deep learning framework. The Sturm-Liouville theorem concerns a class of eigenvalue problems with a wide range of applications in physics, which motivates us to explore its use in machine learning tasks. The core idea of our work is to learn a vector field crossing the input space $\Omega \subset \mathbb{R}^n$ such that the machine learning task becomes easier to solve along each of its field lines, owing to the regularity of the problem restricted to these lines. A Sturm-Liouville problem is solved along each field line to obtain orthogonal basis functions that, combined linearly, form the DSL function approximator. The vector field and the functions appearing in the SLT are parameterized by neural networks and are learned simultaneously. We also show that the DSL formulation arises naturally when solving a Rank-1 Parabolic Eigenvalue Problem. DSL is trained end-to-end by stochastic gradient descent via implicit differentiation, achieving performance comparable to neural networks on several multivariate datasets and the $\texttt{MNIST}$ dataset.
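For readers unfamiliar with the underlying eigenvalue problem: a one-dimensional Sturm-Liouville problem asks for pairs $(\lambda, y)$ satisfying $-(p\,y')' + q\,y = \lambda\, w\, y$ with suitable boundary conditions, and its eigenfunctions form an orthogonal basis in the $w$-weighted inner product. The sketch below is only an illustrative toy, not the authors' method: it solves such a problem numerically with fixed placeholder coefficients $p$, $q$, $w$ (which stand in for the neural-network parameterizations described above) and checks the orthogonality of the resulting basis.

```python
# Illustrative sketch (assumed setup, not the paper's implementation):
# discretize  -(p y')' + q y = lambda * w * y  on [0, 1] with Dirichlet
# boundary conditions and use the first K eigenfunctions as an orthogonal basis.
import numpy as np
from scipy.linalg import eigh

N, K = 200, 5                          # interior grid points, basis size
t = np.linspace(0.0, 1.0, N + 2)       # grid including boundary points
h = t[1] - t[0]

# Placeholder coefficient functions (smooth, p > 0, w > 0); in DSL these
# roles are played by learned neural networks.
p = lambda s: 1.0 + 0.5 * s
q = lambda s: np.cos(np.pi * s)
w = lambda s: np.ones_like(s)

# Finite-difference assembly on interior points:
# stiffness matrix A for -(p y')' + q y, diagonal mass matrix B for w.
ti = t[1:-1]
p_half = p(0.5 * (t[:-1] + t[1:]))                 # p at half-grid points
main = (p_half[:-1] + p_half[1:]) / h**2 + q(ti)   # diagonal of A
off = -p_half[1:-1] / h**2                         # off-diagonals of A
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
B = np.diag(w(ti))

# Generalized symmetric eigenproblem A y = lambda B y.
lam, Y = eigh(A, B)
basis = Y[:, :K]                       # first K eigenfunctions (columns)

# Eigenfunctions are orthonormal in the w-weighted inner product,
# so this Gram matrix is (approximately) the identity.
print(np.round(basis.T @ B @ basis, 3))
```

In DSL, a linear combination of such eigenfunctions, computed along each learned field line, yields the final prediction; the toy above only shows the basis-construction step with hand-picked coefficients.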