An algorithm solving compressive sensing problem based on maximal monotone operators. (English) Zbl 1481.65096

Summary: The need to solve \(\ell^1\)-regularized linear problems is motivated by various compressive sensing and sparsity-related techniques for data analysis and signal or image processing. These problems lead to nonsmooth convex optimization in high dimensions. Theoretical works predict a sharp phase transition for the exact recovery of compressive sensing problems. Our numerical experiments show that state-of-the-art algorithms are not effective enough to observe this phase transition accurately. This paper proposes a simple formalism that enables us to produce an algorithm computing an \(\ell^1\) minimizer under the constraint \(A\boldsymbol{u}=\boldsymbol{b}\) up to machine precision. In addition, a numerical comparison with standard algorithms available in the literature is presented. The comparison shows that our algorithm compares favorably with other state-of-the-art methods, both in terms of accuracy and efficiency. With our algorithm, the aforementioned phase transition is observed at high precision.
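For readers unfamiliar with the problem class, the constrained \(\ell^1\) problem mentioned above is the basis pursuit program \(\min_{\boldsymbol{u}} \|\boldsymbol{u}\|_1\) subject to \(A\boldsymbol{u}=\boldsymbol{b}\), which can be recast as a linear program by splitting \(\boldsymbol{u}\) into nonnegative parts. The Python sketch below only illustrates this standard baseline formulation; it is not the maximal-monotone-operator algorithm proposed in the paper, and the problem sizes, sparsity level, and solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative basis pursuit instance: min ||u||_1  s.t.  A u = b.
# Sizes, sparsity level, and seed are arbitrary choices for this sketch,
# not values taken from the paper.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                     # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
u_true[support] = rng.standard_normal(k)
b = A @ u_true

# Split u = u_plus - u_minus with u_plus, u_minus >= 0, so that
# ||u||_1 = sum(u_plus) + sum(u_minus) and the problem becomes an LP.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
u_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(u_hat - u_true))
```

In the regime where theory predicts exact recovery (random Gaussian measurements and sufficiently sparse \(\boldsymbol{u}\)), the LP solution coincides with \(\boldsymbol{u}\) up to the solver's tolerance; the paper's contribution is an algorithm that reaches machine precision on the same constrained \(\ell^1\) problem.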

MSC:

65K15 Numerical methods for variational inequalities and related problems
34A60 Ordinary differential inclusions
49M29 Numerical methods involving duality
90C06 Large-scale problems in mathematical programming
90C25 Convex programming
Full Text: DOI
