Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence.
In: 37th Annual Conference on Learning Theory, 30 June-3 July 2024, Edmonton. Proceedings of Machine Learning Research 247 (2024), pp. 3823-3872
Many algorithms for high-dimensional regression problems require the calibration of regularization hyperparameters. This, in turn, often requires knowledge of the unknown noise variance in order to produce meaningful solutions. Recent works show, however, that certain estimators are pivotal, i.e., their regularization parameter does not depend on the noise level; the most remarkable example is the square-root lasso. Such estimators have also been shown to exhibit strong connections to distributionally robust optimization. Despite the progress in the design of pivotal estimators, the resulting minimization problem is challenging because both the loss function and the regularization term are non-smooth. To date, the design of fast, robust, and scalable algorithms with strong convergence-rate guarantees remains an open problem. This work addresses that problem by showing that an iteratively reweighted least squares (IRLS) algorithm exhibits global linear convergence under the weakest assumption available in the literature. We expect our findings will also have implications for multi-task learning and distributionally robust optimization.
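For readers unfamiliar with the objects named in the abstract, the sketch below illustrates the square-root lasso objective min_beta ||y - X beta||_2 + lam * ||beta||_1 and a generic IRLS / majorization-minimization loop for it, in which each iteration replaces both non-smooth terms by quadratic surrogates and solves a weighted ridge-type linear system in closed form. This is an illustrative sketch of the general technique only, not the algorithm analyzed in the paper; the function and parameter names (sqrt_lasso_irls, lam, n_iter, eps) and the ridge warm start are assumptions made here for the example.

import numpy as np

def sqrt_lasso_irls(X, y, lam, n_iter=200, eps=1e-6):
    # Illustrative IRLS sketch for the square-root lasso
    #   min_beta ||y - X beta||_2 + lam * ||beta||_1
    # (hypothetical helper, not the authors' exact algorithm).
    n, p = X.shape
    # Ridge warm start so the coordinate weights are not numerically zero.
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    for _ in range(n_iter):
        sigma = max(np.linalg.norm(y - X @ beta), eps)  # residual norm: weight of the data-fit term
        w = np.abs(beta) + eps                          # coordinate weights for the l1 term
        # Each step minimizes the quadratic surrogate
        #   ||y - X b||^2 / (2 sigma) + lam * sum_j b_j^2 / (2 w_j),
        # a weighted ridge problem with a closed-form solution.
        A = X.T @ X / sigma + lam * np.diag(1.0 / w)
        beta = np.linalg.solve(A, X.T @ y / sigma)
    return beta

# Tiny usage example on synthetic sparse data.
rng = np.random.default_rng(0)
n, p = 50, 100
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:5] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
# Pivotal regularization level: depends only on (n, p), not on the noise variance.
beta_hat = sqrt_lasso_irls(X, y, lam=2.0 * np.sqrt(np.log(p) / n))

The choice of lam above reflects the pivotal property highlighted in the abstract: for the square-root lasso, a regularization level of order sqrt(log(p) / n) can be used without knowing the noise level.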
Publication type
Article: Conference contribution
Keywords
Convex Optimization ; Global Convergence ; Iteratively Reweighted Least Squares ; Linear Convergence Rate ; Majorization-minimization ; Sparse Regression ; Square-root Lasso
Conference Title
37th Annual Conference on Learning Theory
Conference Date
30 June-3 July 2024
Conference Location
Edmonton
Citation details
Volume: 247
Pages: 3823-3872
Institute(s)
Institute of Computational Biology (ICB)