Harmonic mean iteratively reweighted least squares for low-rank matrix recovery. (English) Zbl 1512.62057

Summary: We propose a new iteratively reweighted least squares (IRLS) algorithm for the recovery of a matrix \(X \in \mathbb{C}^{d_1\times d_2}\) of rank \(r \ll \min(d_1,d_2)\) from incomplete linear observations, solving a sequence of low-complexity linear problems. The easily implementable algorithm, which we call harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-\(p\) quasi-norm penalization to promote low-rankness and has three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm’s iterates to the low-rank matrix in relevant, interesting cases for which every other state-of-the-art optimization approach fails to recover the matrix. Second, HM-IRLS exhibits an empirical recovery probability close to \(1\) even for a number of measurements very close to the theoretical lower bound \(r(d_1+d_2-r)\), i.e., the number of degrees of freedom of a rank-\(r\) matrix, and thus for significantly fewer linear observations than any other tractable approach in the literature requires. Third, HM-IRLS exhibits a locally superlinear rate of convergence (of order \(2-p\)) if the linear observations fulfill a suitable null space property. While for the first two properties we so far have only strong empirical evidence, we prove the third as our main theoretical result.
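To make the IRLS template behind this scheme concrete, the following is a minimal sketch of a Schatten-\(p\) IRLS iteration for matrix completion: each step rebuilds a spectral weight matrix from the smoothed singular values of the current iterate and then re-solves a weighted least squares problem with the observed entries held fixed. It deliberately uses the classical one-sided weight \(W = (XX^* + \varepsilon^2 I)^{p/2-1}\) (in the spirit of Mohan and Fazel's IRLS-p) rather than the harmonic mean of left and right weight operators that defines HM-IRLS; the function name, initialization, and \(\varepsilon\)-update rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def irls_schatten_p_completion(M, mask, p=0.5, r=None, n_iter=100, tol=1e-9):
    """Sketch of a Schatten-p IRLS scheme for matrix completion (illustrative).

    Minimizes a smoothed Schatten-p quasi-norm of X while keeping the observed
    entries X[mask] == M[mask] fixed. Uses the classical one-sided weight
    W = (X X^T + eps^2 I)^(p/2 - 1); HM-IRLS instead couples the left and
    right weight operators through their harmonic mean.
    """
    d1, d2 = M.shape
    X = np.where(mask, M, 0.0)              # start from the observed entries
    eps = np.linalg.norm(X, 2)              # smoothing level, decreased below
    for _ in range(n_iter):
        # Spectral weight built from the current iterate's singular values.
        U, s, _ = np.linalg.svd(X, full_matrices=True)
        s2 = np.zeros(d1)
        s2[: s.size] = s**2
        W = (U * (s2 + eps**2) ** (p / 2.0 - 1.0)) @ U.T

        # Weighted least squares step, column by column: minimize x^T W x
        # over the unobserved entries, with the observed entries held fixed.
        X_new = X.copy()
        for j in range(d2):
            known = mask[:, j]
            free = ~known
            if free.any():
                rhs = -W[np.ix_(free, known)] @ M[known, j]
                X_new[free, j] = np.linalg.solve(W[np.ix_(free, free)], rhs)

        # A common smoothing rule: track the (r+1)-st singular value,
        # where r is the target rank (kept away from exact zero).
        if r is not None:
            sv = np.linalg.svd(X_new, compute_uv=False)
            eps = max(min(eps, sv[r]), 1e-12)
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X), 1.0):
            X = X_new
            break
        X = X_new
    return X
```

A quick synthetic check, with hypothetical dimensions and sampling rate:

```python
rng = np.random.default_rng(0)
d1, d2, r = 40, 40, 2
M = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
mask = rng.random((d1, d2)) < 0.35          # roughly 35% of entries observed
X = irls_schatten_p_completion(M, mask, p=0.5, r=r)
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # relative recovery error
```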

MSC:

62H12 Estimation in multivariate analysis
90C26 Nonconvex programming, global optimization
