Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence.
In: 37th Annual Conference on Learning Theory, 30 June - 3 July 2024, Edmonton. 2024, pp. 3823-3872 (Proceedings of Machine Learning Research; 247)
Many algorithms for high-dimensional regression problems require the calibration of regularization hyperparameters. This, in turn, often requires knowledge of the unknown noise variance in order to produce meaningful solutions. Recent works show, however, that there exist certain pivotal estimators, i.e., estimators whose regularization parameter does not depend on the noise level; the most remarkable example is the square-root lasso. Such estimators have also been shown to exhibit strong connections to distributionally robust optimization. Despite this progress in the design of pivotal estimators, the resulting minimization problem is challenging, as both the loss function and the regularization term are non-smooth. To date, the design of fast, robust, and scalable algorithms with strong convergence rate guarantees remains an open problem. This work addresses this problem by showing that an iteratively reweighted least squares (IRLS) algorithm exhibits global linear convergence under the weakest assumption available in the literature. We expect our findings will also have implications for multi-task learning and distributionally robust optimization.
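For illustration, the following is a minimal sketch of an iteratively reweighted least squares (IRLS) loop for the square-root lasso objective ||y - Xb||_2 + lam*||b||_1, as described in the abstract. It is a generic majorization-minimization reconstruction, not the algorithm analysed in the paper; the function name, the smoothing constant eps, the ridge initialization, and the stopping rule are illustrative assumptions.

```python
import numpy as np

def sqrt_lasso_irls(X, y, lam, n_iter=200, eps=1e-6):
    """Generic IRLS sketch for min_b ||y - X b||_2 + lam * ||b||_1.
    Assumption: this is not the paper's exact algorithm; each step solves a
    weighted ridge system obtained by majorizing both non-smooth terms."""
    n, p = X.shape
    # Ridge initialization so the l1 weights start away from zero (assumption).
    b = np.linalg.solve(X.T @ X + np.eye(p), X.T @ y)
    for _ in range(n_iter):
        r = y - X @ b
        sigma = max(np.linalg.norm(r), eps)   # majorizer weight for the sqrt loss
        w = np.maximum(np.abs(b), eps)        # majorizer weights for the l1 term
        # Minimize ||y - Xb||^2 / (2 sigma) + lam * sum_j b_j^2 / (2 w_j), i.e.
        # solve (X^T X + lam * sigma * diag(1/w)) b = X^T y.
        A = X.T @ X + lam * sigma * np.diag(1.0 / w)
        b_new = np.linalg.solve(A, X.T @ y)
        if np.linalg.norm(b_new - b) <= 1e-10 * (1.0 + np.linalg.norm(b)):
            b = b_new
            break
        b = b_new
    return b

# Toy example with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
b_true = np.zeros(50)
b_true[:3] = [2.0, -1.5, 1.0]
y = X @ b_true + 0.1 * rng.standard_normal(100)
print(np.round(sqrt_lasso_irls(X, y, lam=0.3)[:5], 3))
```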
Publication type
Article: Conference paper
Keywords
Convex Optimization ; Global Convergence ; Iteratively Reweighted Least Squares ; Linear Convergence Rate ; Majorization-minimization ; Sparse Regression ; Square-root Lasso
Conference title
37th Annual Conference on Learning Theory
Conference date
30 June-3 July 2024
Conference location
Edmonton
Source details
Volume: 247
Pages: 3823-3872
Non-patent literature
Publications
Institute(s)
Institute of Computational Biology (ICB)