## The homotopy method revisited: Computing solution paths of $\ell _1$-regularized problems

- by Bjoern Bringmann, Daniel Cremers, Felix Krahmer and Michael Moeller
- Math. Comp. **87** (2018), 2343-2364

## Abstract:

$\ell _1$-regularized linear inverse problems frequently arise in signal processing, image analysis, and statistics. The correct choice of the regularization parameter $t \in \mathbb {R}_{\geq 0}$ is a delicate issue. Instead of solving the variational problem for a fixed parameter, the idea of the homotopy method is to compute a complete solution path $u(t)$ as a function of $t$. In their celebrated paper *A new approach to variable selection in least squares problems* [IMA J. Numer. Anal. **20** (2000), no. 3, 389–403], Osborne, Presnell, and Turlach showed that the computational cost of this approach is often comparable to the cost of solving the corresponding least squares problem. Their analysis relies on the one-at-a-time condition, which requires that different indices enter or leave the support of the solution at distinct regularization parameters. In this paper, we introduce a generalized homotopy algorithm based on a nonnegative least squares problem, which does not require such a condition, and prove its termination after finitely many steps. At every point of the path, we give a full characterization of all possible directions. To illustrate our results, we discuss examples in which the standard homotopy method either fails or becomes infeasible. To the best of our knowledge, our algorithm is the first to provably compute a full piecewise linear and continuous solution path for an arbitrary combination of a measurement matrix and a data vector.
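The piecewise linear structure of the solution path is easiest to see in the special case of an orthonormal measurement matrix $A$, where the minimizer of $\frac{1}{2}\|Au - f\|_2^2 + t\|u\|_1$ has the closed form $u(t) = S_t(A^\top f)$ with $S_t$ the soft-thresholding operator. The following sketch illustrates only this closed-form special case, not the paper's generalized homotopy algorithm for arbitrary matrices:

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding S_t(z) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Orthonormal measurement matrix (identity, for simplicity) and data vector.
A = np.eye(4)
f = np.array([3.0, -1.5, 0.5, 2.0])

# Closed-form solution path u(t) = S_t(A^T f), evaluated on a grid of t.
ts = np.linspace(0.0, 3.5, 8)
path = np.array([soft_threshold(A.T @ f, t) for t in ts])

# Each coordinate u_i(t) is piecewise linear in t: it decreases with
# slope -sign((A^T f)_i) until it hits zero at t = |(A^T f)_i|, then stays 0.
# E.g., the first coordinate follows u_1(t) = 3 - t on [0, 3].
```

For a general (non-orthonormal) matrix the path is still piecewise linear and continuous, but the kinks occur where indices enter or leave the support, which is exactly what the homotopy method tracks.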

## References

- Arjan Berkelaar, Kees Roos, and Tamás Terlaky, *The optimal set and optimal partition approach to linear and quadratic programming*, Springer, 1996, pp. 159–202.
- Michael J. Best, *An algorithm for the solution of the parametric quadratic programming problem*, Applied mathematics and parallel computing, Physica, Heidelberg, 1996, pp. 57–76. MR **1469261**
- Martin Burger, Guy Gilboa, Michael Moeller, Lina Eckardt, and Daniel Cremers, *Spectral decompositions using one-homogeneous functionals*, SIAM J. Imaging Sci. **9** (2016), no. 3, 1374–1408. MR **3544851**, DOI 10.1137/15M1054687
- Martin Burger, Michael Möller, Martin Benning, and Stanley Osher, *An adaptive inverse scale space method for compressed sensing*, Math. Comp. **82** (2013), no. 281, 269–299. MR **2983025**, DOI 10.1090/S0025-5718-2012-02599-3
- Emmanuel J. Candès, Justin Romberg, and Terence Tao, *Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information*, IEEE Trans. Inform. Theory **52** (2006), no. 2, 489–509. MR **2236170**, DOI 10.1109/TIT.2005.862083
- David Donoho, Iddo Drori, Victoria Stodden, Yaakov Tsaig, and Morteza Shahram, *SparseLab*, https://sparselab.stanford.edu/, 2007.
- David L. Donoho and Yaakov Tsaig, *Fast solution of $l_1$-norm minimization problems when the solution may be sparse*, IEEE Trans. Inform. Theory **54** (2008), no. 11, 4789–4812. MR **2589865**, DOI 10.1109/TIT.2008.929958
- Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani, *Least angle regression*, Ann. Statist. **32** (2004), no. 2, 407–499. With discussion, and a rejoinder by the authors. MR **2060166**, DOI 10.1214/009053604000000067
- Simon Foucart and Holger Rauhut, *A mathematical introduction to compressive sensing*, Applied and Numerical Harmonic Analysis, Birkhäuser/Springer, New York, 2013. MR **3100033**, DOI 10.1007/978-0-8176-4948-7
- Jean-Jacques Fuchs, *Fast implementation of a $\ell _1$-$\ell _1$ regularized sparse representations algorithm*, 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 3329–3332.
- Guy Gilboa, *A total variation spectral framework for scale and texture analysis*, SIAM J. Imaging Sci. **7** (2014), no. 4, 1937–1961. MR **3267157**, DOI 10.1137/130930704
- Tom Goldstein, Min Li, Xiaoming Yuan, Ernie Esser, and Richard Baraniuk, *Adaptive primal-dual hybrid gradient methods for saddle-point problems*, arXiv:1305.0546 (2013).
- Charles L. Lawson and Richard J. Hanson, *Solving least squares problems*, Prentice-Hall Series in Automatic Computation, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1974. MR **0366019**
- Ignace Loris, *L1Packv2: a Mathematica package in minimizing an $\ell _1$-penalized functional*, Comput. Phys. Comm. **179** (2008), no. 12, 895–902. MR **2670260**, DOI 10.1016/j.cpc.2008.07.010
- Vahid Nassiri and Ignace Loris, *An efficient algorithm for structured sparse quantile regression*, Comput. Statist. **29** (2014), no. 5, 1321–1343. MR **3266061**, DOI 10.1007/s00180-014-0494-1
- M. R. Osborne, Brett Presnell, and B. A. Turlach, *A new approach to variable selection in least squares problems*, IMA J. Numer. Anal. **20** (2000), no. 3, 389–403. MR **1773265**, DOI 10.1093/imanum/20.3.389
- R. Tyrrell Rockafellar, *Convex analysis*, Princeton Mathematical Series, No. 28, Princeton University Press, Princeton, N.J., 1970. MR **0274683**
- Leonid I. Rudin, Stanley Osher, and Emad Fatemi, *Nonlinear total variation based noise removal algorithms*, Phys. D **60** (1992), no. 1-4, 259–268. Experimental mathematics: computational issues in nonlinear science (Los Alamos, NM, 1991). MR **3363401**, DOI 10.1016/0167-2789(92)90242-F
- Robert Tibshirani, *Regression shrinkage and selection via the lasso*, J. Roy. Statist. Soc. Ser. B **58** (1996), no. 1, 267–288. MR **1379242**
- Ryan J. Tibshirani, *The lasso problem and uniqueness*, Electron. J. Stat. **7** (2013), 1456–1490. MR **3066375**, DOI 10.1214/13-EJS815
- Ryan J. Tibshirani and Jonathan Taylor, *The solution path of the generalized lasso*, Ann. Statist. **39** (2011), no. 3, 1335–1371. MR **2850205**, DOI 10.1214/11-AOS878
- Hui Zhang, Ming Yan, and Wotao Yin, *One condition for solution uniqueness and robustness of both $\ell _1$-synthesis and $\ell _1$-analysis minimizations*, Adv. Comput. Math. **42** (2016), no. 6, 1381–1399. MR **3571210**, DOI 10.1007/s10444-016-9467-y
- Hui Zhang, Wotao Yin, and Lizhi Cheng, *Necessary and sufficient conditions of solution uniqueness in 1-norm minimization*, J. Optim. Theory Appl. **164** (2015), no. 1, 109–122. MR **3296287**, DOI 10.1007/s10957-014-0581-z

## Additional Information

**Bjoern Bringmann**
- Affiliation: Department of Mathematics, University of California Los Angeles, 520 Portola Plaza, Los Angeles, California 90095
- Email: bringmann@math.ucla.edu

**Daniel Cremers**
- Affiliation: Department of Computer Science, Technische Universität München, Informatik 9, Boltzmannstr. 3, D-85748 Garching bei München
- MR Author ID: 644986
- Email: cremers@tum.de

**Felix Krahmer**
- Affiliation: Department of Mathematics, Technische Universität München, Boltzmannstr. 3, D-85748 Garching bei München
- MR Author ID: 845768
- Email: felix.krahmer@tum.de

**Michael Moeller**
- Affiliation: Department of Electrical Engineering and Computer Science, Universität Siegen, Hölderlinstr. 3, D-57076 Siegen
- MR Author ID: 974311
- Email: michael.moeller@uni-siegen.de
- Received by editor(s): May 30, 2016
- Received by editor(s) in revised form: April 6, 2017
- Published electronically: December 14, 2017
- Additional Notes: The second and fourth authors were supported by the ERC Starting Grant “ConvexVision”. The third author’s contribution was supported by the German Science Foundation DFG in the context of the Emmy Noether junior research group KR 4512/1-1 (RaSenQuaSI).
- © Copyright 2017 American Mathematical Society
- Journal: Math. Comp. **87** (2018), 2343-2364
- MSC (2010): Primary 49N45, 90C25, 62J07
- DOI: https://doi.org/10.1090/mcom/3287
- MathSciNet review: 3802438