
Null space conditions and thresholds for rank minimization. (English) Zbl 1211.90172

Summary: Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in machine learning, control theory, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic replaces the rank function with the nuclear norm of the decision variable, equal to the sum of its singular values, and has been shown to provide the optimal low-rank solution in a variety of scenarios. In this paper, we assess the practical performance of this heuristic for finding the minimum-rank matrix subject to linear equality constraints. We characterize properties of the null space of the linear operator defining the constraint set that are necessary and sufficient for the heuristic to succeed. We then analyze linear constraints sampled uniformly at random and obtain dimension-free bounds under which our null space properties hold almost surely as the matrix dimensions tend to infinity. Finally, we provide empirical evidence that these probabilistic bounds accurately predict the heuristic’s performance in non-asymptotic scenarios.
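For orientation, the heuristic discussed in the summary replaces the intractable problem min rank(X) subject to A(X) = b with the convex program min ‖X‖_* subject to A(X) = b, where ‖·‖_* denotes the nuclear norm. The sketch below is a minimal numerical illustration of this relaxation; it is not the authors' code, and the use of the cvxpy modeling package and a Gaussian measurement ensemble are assumptions made only for the demonstration.

```python
# Illustrative sketch (not the authors' code) of the nuclear norm heuristic:
# recover a low-rank matrix X0 from random linear measurements by solving
#   minimize ||X||_*  subject to  <A_i, X> = b_i,  i = 1, ..., m.
# cvxpy and the Gaussian measurement ensemble are assumptions for this demo.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n1, n2, r, m = 10, 10, 2, 80          # matrix size, true rank, number of measurements

# Ground-truth rank-r matrix and a random linear operator A: R^{n1 x n2} -> R^m
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A_mats = rng.standard_normal((m, n1, n2))
b = np.array([np.sum(A_mats[i] * X0) for i in range(m)])

# Nuclear norm relaxation of the rank minimization problem
X = cp.Variable((n1, n2))
constraints = [cp.sum(cp.multiply(A_mats[i], X)) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints)
prob.solve()

rel_err = np.linalg.norm(X.value - X0, "fro") / np.linalg.norm(X0, "fro")
print(f"relative recovery error: {rel_err:.2e}")
```

When the number of measurements m comfortably exceeds the r(n1 + n2 − r) degrees of freedom of a rank-r matrix, the relative error is typically near solver tolerance; reducing m eventually makes recovery fail, consistent with the kind of threshold behavior the summary describes.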

MSC:

90C25 Convex programming
90C59 Approximation methods and heuristics in mathematical programming
15B52 Random matrices (algebraic aspects)

Software:

SeDuMi
Full Text: DOI

References:

[1] Ames, B.P.W., Vavasis, S.A.: Nuclear norm minimization for the planted clique and biclique problems (2009). Submitted to Mathematical Programming. Preprint available at http://arxiv.org/abs/0901.3348v1 · Zbl 1271.90056
[2] Amit, Y., Fink, M., Srebro, N., Ullman, S.: Uncovering shared structures in multiclass classification. In: Proceedings of the International Conference of Machine Learning (2007)
[3] Argyriou, A., Micchelli, C.A., Pontil, M.: Convex multi-task feature learning. Machine Learning (2008). Published online first at http://www.springerlink.com/ · Zbl 1137.68517
[4] Bai Z.D.: Methodologies in spectral analysis of large dimensional random matrices. Statistica Sinica 9(3), 611–661 (1999) · Zbl 0949.60077
[5] Baraniuk, R., Davenport, M., DeVore, R., Wakin, M.: A simple proof of the restricted isometry property for random matrices. Constructive Approximation (2008). To Appear. Preprint available at http://dsp.rice.edu/cs/jlcs-v03.pdf · Zbl 1177.15015
[6] Beck, C., D’Andrea, R.: Computational study and comparisons of LFT reducibility methods. In: Proceedings of the American Control Conference (1998)
[7] Cai J.F., Candès E.J., Shen Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20(4), 1956–1982 (2010) · Zbl 1201.90155 · doi:10.1137/080738970
[8] Candès E., Recht B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9(6), 717–772 (2009) · Zbl 1219.90124 · doi:10.1007/s10208-009-9045-5
[9] Candès E.J., Romberg J., Tao T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006) · Zbl 1231.94017 · doi:10.1109/TIT.2005.862083
[10] Candès E.J., Tao T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005) · Zbl 1264.94121 · doi:10.1109/TIT.2005.858979
[11] Donoho D.: High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Discret. Comput. Geom. 35(4), 617–652 (2006) · Zbl 1095.52500 · doi:10.1007/s00454-005-1220-0
[12] Donoho D., Huo X.: Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory 47(7), 2845–2862 (2001) · Zbl 1019.94503 · doi:10.1109/18.959265
[13] Donoho D.L., Tanner J.: Neighborliness of randomly projected simplices in high dimensions. Proc. Natl. Acad. Sci. USA 102(27), 9452–9457 (2005) · Zbl 1135.60300 · doi:10.1073/pnas.0502258102
[14] Donoho D.L., Tanner J.: Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci. USA 102(27), 9446–9451 (2005) · Zbl 1135.90368 · doi:10.1073/pnas.0502269102
[15] Fazel, M.: Matrix Rank Minimization with Applications. Ph.D. thesis, Stanford University (2002)
[16] Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference (2001)
[17] El Ghaoui, L., Gahinet, P.: Rank minimization under LMI constraints: a framework for output feedback problems. In: Proceedings of the European Control Conference (1993)
[18] Gordon Y.: Some inequalities for Gaussian processes and applications. Israel J. Math. 50, 265–289 (1985) · Zbl 0663.60034 · doi:10.1007/BF02759761
[19] Gordon Y.: Gaussian processes and almost spherical sections of convex bodies. Ann. Probab. 16, 180–188 (1988) · Zbl 0639.60046 · doi:10.1214/aop/1176991893
[20] Ledoux M., Talagrand M.: Probability in Banach Spaces. Springer, Berlin (1991) · Zbl 0748.60004
[21] Lee, K., Bresler, Y.: Efficient and guaranteed rank minimization by atomic decomposition. In: IEEE International Symposium on Information Theory (2009)
[22] Linial N., London E., Rabinovich Y.: The geometry of graphs and some of its algorithmic applications. Combinatorica 15, 215–245 (1995) · Zbl 0827.05021 · doi:10.1007/BF01200757
[23] Liu Z., Vandenberghe L.: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31(3), 1235–1256 (2009) · Zbl 1201.90151 · doi:10.1137/090755436
[24] Ma, S., Goldfarb, D., Chen, L.: Fixed point and Bregman iterative methods for matrix rank minimization (2008). Preprint available at http://www.optimization-online.org/DB_HTML/2008/11/2151.html
[25] Marčenko V.A., Pastur L.A.: Distributions of eigenvalues for some sets of random matrices. Math. USSR-Sbornik 1, 457–483 (1967) · Zbl 0162.22501 · doi:10.1070/SM1967v001n04ABEH001994
[26] Meka, R., Jain, P., Caramanis, C., Dhillon, I.S.: Rank minimization via online learning. In: Proceedings of the International Conference on Machine Learning (2008)
[27] Mesbahi M., Papavassilopoulos G.P.: On the rank minimization problem over a positive semidefinite linear matrix inequality. IEEE Trans. Autom. Control 42(2), 239–243 (1997) · Zbl 0872.93029 · doi:10.1109/9.554402
[28] Parrilo P.A., Khatri S.: On cone-invariant linear matrix inequalities. IEEE Trans. Automat. Control 45(8), 1558–1563 (2000) · Zbl 0988.93029 · doi:10.1109/9.871772
[29] Recht B., Fazel M., Parrilo P.: Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Rev. 52(3), 471–501 (2010) · Zbl 1198.90321 · doi:10.1137/070697835
[30] Recht, B., Xu, W., Hassibi, B.: Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization. In: Proceedings of the 47th IEEE Conference on Decision and Control (2008)
[31] Rennie, J.D.M., Srebro, N.: Fast maximum margin matrix factorization for collaborative prediction. In: Proceedings of the International Conference of Machine Learning (2005)
[32] Slepian D.: The one-sided barrier problem for Gaussian noise. Bell Syst. Tech. J. 41, 463–501 (1962)
[33] Stojnic, M., Xu, W., Hassibi, B.: Compressed sensing - probabilistic analysis of a null-space characterization. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2008)
[34] Sturm J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11–12, 625–653 (1999) · Zbl 0973.90526 · doi:10.1080/10556789908805766
[35] Szarek, S.J.: Metric entropy of homogeneous spaces. In: Quantum probability (Gdańsk, 1997), Banach Center Publ., vol. 43, pp. 395–410. Polish Acad. Sci., Warsaw (1998). Preprint available at arXiv:math/9701213v1 · Zbl 0927.46047
[36] Weinberger K.Q., Saul L.K.: Unsupervised learning of image manifolds by semidefinite programming. Int. J. Comput. Vis. 70(1), 77–90 (2006) · doi:10.1007/s11263-005-4939-z
[37] Yuan M., Ekici A., Lu Z., Monteiro R.: Dimension reduction and coefficient estimation in multivariate linear regression. J. Roy. Stat. Soc. Ser. B 69, 329–346 (2007) · doi:10.1111/j.1467-9868.2007.00591.x
[38] Zhang, Y.: A simple proof for recoverability of ℓ1 minimization: go over or under? Tech. Rep. TR05-09, Rice CAAM Department (2005)