Spectral Embedding to Compress Neural Architectures Without Performance Loss
Abstract
This work investigates the impact of spectral-based dimensionality reduction on the performance of deep neural networks. We propose a method to construct low-dimensional, data-driven input representations by extracting dominant eigenvectors from a square matrix representation of the dataset. The resulting spectral embeddings are used to initialize compressed input layers of neural architectures. A central element of this approach is MIRAMns, an efficient and scalable eigensolver based on nested Krylov subspaces, designed for high-performance distributed environments. Through empirical evaluation on both numerical and image datasets, we assess how the embedding dimension affects model accuracy, convergence speed, and parameter count. Results show that substantial reductions in input dimensionality can be achieved without significant loss of accuracy, demonstrating that spectral embeddings preserve the essential information. Moreover, because the inputs are simplified, smaller models can be used while maintaining equivalent accuracy, thereby reducing training time.
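As a rough illustration of the pipeline sketched above, the snippet below builds a spectral embedding with a generic eigensolver and uses it to shrink a network's input layer. The choice of square matrix (a feature Gram matrix here), the embedding dimension `k`, and the use of `scipy.sparse.linalg.eigsh` as a stand-in for the MIRAMns eigensolver are all illustrative assumptions, not the exact setup of the paper.

```python
# Minimal sketch of a spectral input embedding, assuming a Gram-matrix
# representation of the data; scipy's eigsh stands in for MIRAMns,
# which targets large-scale distributed environments.
import numpy as np
from scipy.sparse.linalg import eigsh


def spectral_embedding(X: np.ndarray, k: int) -> np.ndarray:
    """Project samples of X (n_samples x n_features) onto the k
    dominant eigenvectors of a square matrix built from the data."""
    # One possible square matrix representation: the feature Gram matrix.
    G = X.T @ X  # (n_features x n_features), symmetric PSD
    # Dominant eigenpairs; at scale a nested-Krylov solver such as
    # MIRAMns would replace this call.
    _, V = eigsh(G, k=k, which="LM")  # V: (n_features x k)
    return X @ V  # low-dimensional embedding, (n_samples x k)


# Usage: embed a toy dataset, then feed Z to a network whose input
# layer has k units instead of n_features, reducing parameter count.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))  # e.g., flattened 28x28 images
Z = spectral_embedding(X, k=64)
print(Z.shape)  # (1000, 64)
```

The projection matrix `V` can equivalently serve as the (frozen or trainable) weights of a compressed input layer, which is one way to read the initialization step described in the abstract.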