
An adaptation for iterative structured matrix completion. (English) Zbl 1483.65066

Summary: The task of predicting the missing entries of a matrix from a subset of known entries is known as matrix completion. In today’s data-driven world, data completion is essential whether it is the main goal or a pre-processing step. Structured matrix completion includes any setting in which data is not missing uniformly at random. In recent work, a modification to the standard nuclear norm minimization (NNM) for matrix completion has been developed to take into account sparsity-based structure in the missing entries. This notion of structure is motivated in many settings, including recommender systems, where the probability that an entry is observed depends on the value of the entry. We propose adjusting an Iteratively Reweighted Least Squares (IRLS) algorithm for low-rank matrix completion to take into account sparsity-based structure in the missing entries. We also present an iterative gradient-projection-based implementation of the algorithm that can handle large-scale matrices. Finally, we present an extensive set of numerical experiments on matrices of varying sizes, ranks, and levels of structure. We show that our proposed method is comparable to the adjusted NNM on small matrices, and often outperforms the standard IRLS algorithm in structured settings on matrices up to size \(1000 \times 1000\).
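The adjusted NNM referenced above augments the nuclear norm objective with an \(\ell_1\) penalty on the unobserved entries, \(\min_X \|X\|_* + \alpha \|X_{\Omega^c}\|_1\) subject to \(X_\Omega = M_\Omega\), and the proposed method carries the same sparsity-promoting idea into the IRLS framework by reweighting both terms. The sketch below is a minimal illustration of such a scheme, not the authors' reference implementation (for that, see [1]): it assumes the standard smoothed IRLS-0 low-rank weighting together with a reweighted-\(\ell_1\) term on the missing entries, and the parameters \(\alpha\), \(\varepsilon\), and the step size are illustrative choices.

```python
# A minimal sketch of structured IRLS for matrix completion; the low-rank
# weighting and the reweighted-l1 term follow the standard IRLS surrogates,
# while alpha, eps, lr and the iteration counts are illustrative defaults.
import numpy as np

def structured_irls(M, mask, alpha=0.5, eps=1e-2, lr=1e-3,
                    n_outer=100, n_inner=10):
    """M: matrix holding the observed values; mask: boolean, True where observed."""
    X = np.where(mask, M, 0.0).astype(float)
    m, _ = X.shape
    for _ in range(n_outer):
        # Low-rank weight W = (X X^T + eps I)^(-1/2), the usual smoothed
        # IRLS surrogate for the nuclear-norm/rank objective.
        evals, evecs = np.linalg.eigh(X @ X.T + eps * np.eye(m))
        W = evecs @ np.diag(evals ** -0.5) @ evecs.T
        # Reweighted-l1 weights promoting sparsity on the unobserved entries.
        S = np.where(mask, 0.0, 1.0 / (np.abs(X) + eps))
        for _ in range(n_inner):
            # Gradient of tr(X^T W X) + alpha * sum over missing (i,j) of
            # S_ij * X_ij^2, with both weight matrices held fixed.
            grad = 2.0 * (W @ X) + 2.0 * alpha * S * X
            X = X - lr * grad
            # Projection step: re-impose the observed entries exactly.
            X[mask] = M[mask]
        # In practice eps is typically decreased across outer iterations
        # to tighten the smoothed surrogate.
    return X
```

Calling `structured_irls(M, mask)` with a boolean observation mask returns the completed matrix; the inner loop is the gradient-projection step that makes the method scale to large matrices, since it avoids solving an exact weighted least squares problem at every iteration.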

MSC:

65F55 Numerical methods for low-rank matrix approximation; matrix compression
65F50 Computational methods for sparse matrices
15A83 Matrix completion problems

Software:

ADMiRA; GitHub; CVX

References:

[1] H. Adams, L. Kassab and D. Needell, Structured IRLS code, https://github.com/lara-kassab/structured-matrix-completion-IRLS.git.
[2] R. M. Bell; Y. Koren, Lessons from the Netflix prize challenge, SIGKDD Explorations, 9, 75-79 (2007) · doi:10.1145/1345448.1345465
[3] J. Bennett and S. Lanning, The Netflix prize, in Proceedings of KDD Cup and Workshop, vol. 2007, New York, NY, USA, (2007), 35.
[4] P. Biswas; T.-C. Lian; T.-C. Wang; Y. Ye, Semidefinite programming based algorithms for sensor network localization, ACM Transactions on Sensor Networks (TOSN), 2, 188-220 (2006)
[5] J.-F. Cai; E. J. Candès; Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20, 1956-1982 (2010) · Zbl 1201.90155 · doi:10.1137/080738970
[6] E. J. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, J. ACM, 58 (2011), Art. 11, 37 pp. · Zbl 1327.62369
[7] E. J. Candès; Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98, 925-936 (2010)
[8] E. J. Candès; B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9, 717-772 (2009) · Zbl 1219.90124 · doi:10.1007/s10208-009-9045-5
[9] E. J. Candès; T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory, 51, 4203-4215 (2005) · Zbl 1264.94121 · doi:10.1109/TIT.2005.858979
[10] E. J. Candès; T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory, 56, 2053-2080 (2010) · Zbl 1366.15021 · doi:10.1109/TIT.2010.2044061
[11] V. Chandrasekaran; S. Sanghavi; P. A. Parrilo; A. S. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM J. Optim., 21, 572-596 (2011) · Zbl 1226.90067 · doi:10.1137/090761793
[12] Y. Chen, S. Bhojanapalli, S. Sanghavi and R. Ward, Coherent matrix completion, in International Conference on Machine Learning, PMLR, (2014), 674-682.
[13] Y. Chen; S. Bhojanapalli; S. Sanghavi; R. Ward, Completing any low-rank matrix, provably, J. Mach. Learn. Res., 16, 2999-3034 (2015) · Zbl 1351.62107
[14] Y. Chen; A. Jalali; S. Sanghavi; C. Caramanis, Low-rank matrix recovery from errors and erasures, IEEE Transactions on Information Theory, 59, 4324-4337 (2013)
[15] I. Daubechies; R. DeVore; M. Fornasier; C. S. Güntürk, Iteratively reweighted least squares minimization for sparse recovery, Comm. Pure Appl. Math., 63, 1-38 (2010) · Zbl 1202.65046 · doi:10.1002/cpa.20303
[16] D. L. Donoho; P. B. Stark, Uncertainty principles and signal recovery, SIAM J. Appl. Math., 49, 906-931 (1989) · Zbl 0689.42001 · doi:10.1137/0149053
[17] M. Fazel, Matrix Rank Minimization with Applications, PhD thesis, Stanford University, 2002.
[18] M. Fazel, H. Hindi and S. P. Boyd, Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices, in Proceedings of the 2003 American Control Conference, vol. 3, IEEE, (2003), 2156-2162.
[19] M. Fornasier; H. Rauhut; R. Ward, Low-rank matrix recovery via iteratively reweighted least squares minimization, SIAM J. Optim., 21, 1614-1640 (2011) · Zbl 1236.65044 · doi:10.1137/100811404
[20] D. Goldfarb; S. Ma, Convergence of fixed-point continuation algorithms for matrix rank minimization, Found. Comput. Math., 11, 183-210 (2011) · Zbl 1219.90195 · doi:10.1007/s10208-011-9084-6
[21] M. C. Grant and S. P. Boyd, Graph implementations for nonsmooth convex programs, in Recent Advances in Learning and Control (eds. V. Blondel, S. Boyd and H. Kimura), Lecture Notes in Control and Information Sciences, Springer-Verlag Limited, (2008), 95-110. · Zbl 1205.90223
[22] M. Grant and S. Boyd, CVX: Matlab Software for Disciplined Convex Programming, version 2.1, http://cvxr.com/cvx, 2014.
[23] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory, 57, 1548-1566 (2011) · Zbl 1366.94103 · doi:10.1109/TIT.2011.2104999
[24] D. Gross; Y.-K. Liu; S. T. Flammia; S. Becker; J. Eisert, Quantum state tomography via compressed sensing, Phys. Rev. Lett., 105, 150401 (2010) · doi:10.1103/PhysRevLett.105.150401
[25] N. Halko; P.-G. Martinsson; J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Rev., 53, 217-288 (2011) · Zbl 1269.65043 · doi:10.1137/090771806
[26] P. Jain, R. Meka and I. S. Dhillon, Guaranteed rank minimization via singular value projection, in Advances in Neural Information Processing Systems, (2010), 937-945.
[27] P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, in Proceedings of the 2013 ACM Symposium on Theory of Computing, (2013), 665-674. · Zbl 1293.65073
[28] R. H. Keshavan and S. Oh, A gradient descent algorithm on the Grassman manifold for matrix completion, arXiv preprint. arXiv: 0910.5260.
[29] Y. Koren, R. Bell and C. Volinsky, Matrix factorization techniques for recommender systems, Computer, 42, 30-37 (2009)
[30] C. Kümmerle; J. Sigl, Harmonic mean iteratively reweighted least squares for low-rank matrix recovery, J. Mach. Learn. Res., 19, 1815-1863 (2018) · Zbl 1512.62057
[31] K. Lee; Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Trans. Inform. Theory, 56, 4402-4416 (2010) · Zbl 1366.94112 · doi:10.1109/TIT.2010.2054251
[32] N. Linial; E. London; Y. Rabinovich, The geometry of graphs and some of its algorithmic applications, Combinatorica, 15, 215-245 (1995) · Zbl 0827.05021 · doi:10.1007/BF01200757
[33] Z. Liu; L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. Appl., 31, 1235-1256 (2009) · Zbl 1201.90151 · doi:10.1137/090755436
[34] S. Ma; D. Goldfarb; L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Math. Program., 128, 321-353 (2011) · Zbl 1221.65146 · doi:10.1007/s10107-009-0306-5
[35] K. Mohan and M. Fazel, IRLS code, https://faculty.washington.edu/mfazel/, [Online; accessed 01-Aug-2019].
[36] K. Mohan and M. Fazel, Reweighted nuclear norm minimization with application to system identification, in Proceedings of the 2010 American Control Conference, IEEE, (2010), 2953-2959.
[37] K. Mohan; M. Fazel, Iterative reweighted algorithms for matrix rank minimization, J. Mach. Learn. Res., 13, 3441-3473 (2012) · Zbl 1436.65055
[38] D. Molitor and D. Needell, Matrix completion for structured observations, in 2018 Information Theory and Applications Workshop (ITA), IEEE, (2018), 1-5.
[39] I. Razenshteyn, Z. Song and D. P. Woodruff, Weighted low rank approximations with provable guarantees, in Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, (2016), 250-263. · Zbl 1378.65093
[40] B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12, 3413-3430 (2011) · Zbl 1280.68141
[41] B. Recht; M. Fazel; P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review, 52, 471-501 (2010) · Zbl 1198.90321 · doi:10.1137/070697835
[42] G. Robin; O. Klopp; J. Josse; É. Moulines; R. Tibshirani, Main effects and interactions in mixed and incomplete data frames, J. Amer. Statist. Assoc., 115, 1292-1303 (2020) · Zbl 1441.62145 · doi:10.1080/01621459.2019.1623041
[43] A. A. Rontogiannis; P. V. Giampouras; K. D. Koutroumbas, Online reweighted least squares robust PCA, IEEE Signal Processing Letters, 27, 1340-1344 (2020)
[44] R. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Transactions on Antennas and Propagation, 34, 276-280 (1986) · doi:10.1109/TAP.1986.1143830
[45] T. Schnabel, A. Swaminathan, A. Singh, N. Chandak and T. Joachims, Recommendations as treatments: Debiasing learning and evaluation, arXiv preprint. arXiv: 1602.05352.
[46] A. Singer, A remark on global positioning from local distances, Proc. Natl. Acad. Sci. USA, 105, 9507-9511 (2008) · Zbl 1205.86043 · doi:10.1073/pnas.0709842104
[47] A. M.-C. So; Y. Ye, Theory of semidefinite programming for sensor network localization, Math. Program., 109, 367-384 (2007) · Zbl 1278.90482 · doi:10.1007/s10107-006-0040-1
[48] A. Sportisse; C. Boyer; J. Josse, Imputation and low-rank estimation with missing not at random data, Stat. Comput., 30, 1629-1643 (2020) · Zbl 1452.62174 · doi:10.1007/s11222-020-09963-5
[49] K.-C. Toh; S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems, Pac. J. Optim., 6, 615-640 (2010) · Zbl 1205.90218