Exploiting regularity in sparse Generalized Linear Models - Archive ouverte HAL
Conference Papers Year : 2019

Exploiting regularity in sparse Generalized Linear Models

Abstract

Generalized Linear Models (GLM) are a wide class of regression and classification models, where the predicted variable is obtained from a linear combination of the input variables. For statistical inference in high dimensions, sparsity-inducing regularization has proven useful while offering statistical guarantees. However, solving the resulting optimization problems can be challenging: even for popular iterative algorithms such as coordinate descent, one needs to loop over a large number of variables. To mitigate this, techniques known as screening rules and working sets diminish the size of the optimization problem at hand, either by progressively removing variables, or by solving a growing sequence of smaller problems. For both of these techniques, significant variables are identified by convex duality. In this paper, we show that the dual iterates of a GLM exhibit a Vector AutoRegressive (VAR) behavior after sign identification, when the primal problem is solved with proximal gradient descent or cyclic coordinate descent. Exploiting this regularity, one can construct dual points that offer tighter control of optimality, enhancing the performance of screening rules and helping to design a competitive working set algorithm.
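To make the mechanism concrete in the simplest sparse GLM, the Lasso: the dual point attached to a primal iterate is a rescaled residual, and since the abstract states that these iterates follow a VAR after sign identification, the last few residuals can be extrapolated before rescaling to obtain a tighter duality gap. The Python sketch below illustrates this idea only; the function names, the Anderson-style weight computation, and the chosen dual scaling convention are assumptions for illustration, not the authors' exact implementation.

    import numpy as np

    def rescale_to_dual_feasible(X, r, lmbda):
        # One common Lasso dual scaling: with the constraint ||X^T theta||_inf <= 1,
        # take theta = r / max(lambda, ||X^T r||_inf), where r = y - X w is the residual.
        return r / max(lmbda, np.max(np.abs(X.T @ r)))

    def extrapolated_dual_point(X, residual_history, lmbda, eps=1e-10):
        # Sketch: combine the K last residuals (K >= 2), which approximately follow a VAR
        # after sign identification, into an extrapolated candidate, then rescale it.
        R = np.column_stack(residual_history)      # shape (n_samples, K)
        U = np.diff(R, axis=1)                     # successive differences, (n_samples, K-1)
        try:
            # Small regularized (K-1) x (K-1) system giving Anderson-style weights.
            z = np.linalg.solve(U.T @ U + eps * np.eye(U.shape[1]),
                                np.ones(U.shape[1]))
            if abs(z.sum()) < eps:
                raise np.linalg.LinAlgError
            c = z / z.sum()
            r_acc = R[:, 1:] @ c                   # extrapolated residual
        except np.linalg.LinAlgError:
            r_acc = R[:, -1]                       # fall back to the last residual
        return rescale_to_dual_feasible(X, r_acc, lmbda)

In a coordinate descent solver one would store, say, the five most recent residuals y - X w_t and call extrapolated_dual_point every few epochs; the resulting dual point feeds the duality gap used by screening rules or by a working set policy.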
Main file: spars2019.pdf (1.18 MB). Origin: files produced by the author(s).

Dates and versions

hal-02288859, version 1 (13-10-2019)

Identifiers

  • HAL Id: hal-02288859, version 1

Cite

Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon. Exploiting regularity in sparse Generalized Linear Models. SPARS 2019 - Signal Processing with Adaptive Sparse Structured Representations, Jul 2019, Toulouse, France. ⟨hal-02288859⟩