
An efficient Gauss-Newton algorithm for symmetric low-rank product matrix approximations. (English) Zbl 1321.65060

Summary: We derive and study a Gauss-Newton method for computing a symmetric low-rank product \(XX^{{T}}\), where \(X \in{\mathbb{R}}^{n\times k}\) for \(k<n\), that is the closest to a given symmetric matrix \(A \in{\mathbb{R}}^{n\times n}\) in Frobenius norm. When \(A=B^{{T}} B\) (or \(BB^{{T}} \)), this problem essentially reduces to finding a truncated singular value decomposition of \(B\). Our Gauss-Newton method, which has a particularly simple form, shares the same order of iteration-complexity as a gradient method when \(k \ll n\), but can be significantly faster on a wide range of problems. In this paper, we prove global convergence and a \(Q\)-linear convergence rate for this algorithm and perform numerical experiments on various test problems, including those from recently active areas of matrix completion and robust principal component analysis. Numerical results show that the proposed algorithm is capable of providing considerable speed advantages over Krylov subspace methods on suitable application problems where high-accuracy solutions are not required. Moreover, the algorithm possesses a higher degree of concurrency than Krylov subspace methods, thus offering better scalability on modern multi-/many-core computers.
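The least-squares structure described above can be made concrete. Writing \(R = A - XX^{T}\) and \(G = (X^{T}X)^{-1}\), linearizing \((X+S)(X+S)^{T} \approx XX^{T} + XS^{T} + SX^{T}\) and solving the resulting normal equations \(XS^{T}X + SX^{T}X = RX\) yields the direction \(Z = RXG - \frac{1}{2}XGX^{T}RXG\), which involves only \(k \times k\) inverses when \(k \ll n\). The Python sketch below illustrates such an iteration; the function name `gauss_newton_lowrank`, the unit step size, and this particular closed form of the direction are our own illustrative assumptions and need not match the exact update or safeguards in the paper.

```python
import numpy as np

def gauss_newton_lowrank(A, k, iters=100, seed=0):
    """Approximate a symmetric n x n matrix A by X @ X.T with X of size n x k,
    by Gauss-Newton-type steps on min ||X X^T - A||_F (illustrative sketch:
    fixed unit step, no line search or safeguards)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.standard_normal((n, k))
    for _ in range(iters):
        R = A - X @ X.T                         # current residual (symmetric)
        G = np.linalg.inv(X.T @ X)              # k x k Gram inverse: cheap when k << n
        RXG = R @ X @ G
        # Direction solving the normal equations X S^T X + S X^T X = R X:
        Z = RXG - 0.5 * X @ (G @ (X.T @ RXG))
        X = X + Z
    return X
```

As a sanity check on the formula, in the scalar case \(n = k = 1\) the update reduces to \(x_{+} = (x^{2} + a)/(2x)\), i.e. Newton's iteration for \(\sqrt{a}\), which hints at the fast convergence the paper establishes more generally.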

MSC:

65F20 Numerical solutions to overdetermined systems, pseudoinverses
15A83 Matrix completion problems
65F10 Iterative numerical methods for linear systems
62H35 Image analysis in multivariate analysis
Full Text: DOI

References:

[1] J. Barzilai and J. M. Borwein, {\it Two-point step size gradient methods}, IMA J. Numer. Anal., 8 (1988), pp. 141-148. · Zbl 0638.65055
[2] A. Beck and M. Teboulle, {\it A fast iterative shrinkage-thresholding algorithm for linear inverse problems}, SIAM J. Imaging Sci., 2 (2009), pp. 183-202. · Zbl 1175.94009
[3] M. Bollhöfer and Y. Notay, {\it JADAMILU: A software code for computing selected eigenvalues of large sparse symmetric matrices}, Comput. Phys. Comm., 177 (2007), pp. 951-964. · Zbl 1196.65072
[4] E. J. Candès, X. Li, Y. Ma, and J. Wright, {\it Robust principal component analysis?}, J. ACM, 58 (2011), pp. 1-37. · Zbl 1327.62369
[5] E. J. Candès and B. Recht, {\it Exact matrix completion via convex optimization}, Found. Comput. Math., 9 (2009), pp. 717-772. · Zbl 1219.90124
[6] E. J. Candès and T. Tao, {\it The power of convex relaxation: Near-optimal matrix completion}, IEEE Trans. Inform. Theory, 56 (2010), pp. 2053-2080. · Zbl 1366.15021
[7] H.-R. Fang and Y. Saad, {\it A filtered Lanczos procedure for extreme and interior eigenvalue problems}, SIAM J. Sci. Comput., 34 (2012), pp. A2220-A2246. · Zbl 1253.65053
[8] R. Fletcher, {\it Practical Methods of Optimization}, 2nd ed., Wiley-Interscience, Chichester, UK, 1987. · Zbl 0905.65002
[9] N. Halko, P. G. Martinsson, and J. A. Tropp, {\it Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions}, SIAM Rev., 53 (2011), pp. 217-288. · Zbl 1269.65043
[10] H. O. Hartley, {\it The modified Gauss-Newton method for the fitting of nonlinear regression functions by least squares}, Technometrics, 3 (1961), pp. 269-280. · Zbl 0096.34603
[11] J.-F. Cai, E. J. Candès, and Z. Shen, {\it A singular value thresholding algorithm for matrix completion}, SIAM J. Optim., 20 (2010), pp. 1956-1982. · Zbl 1201.90155
[12] A. V. Knyazev, {\it Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method}, SIAM J. Sci. Comput., 23 (2001), pp. 517-541. · Zbl 0992.65028
[13] L. Kronik, A. Makmal, M. Tiago, M. M. G. Alemany, X. Huang, Y. Saad, and J. R. Chelikowsky, {\it PARSEC–the pseudopotential algorithm for real-space electronic structure calculations: Recent advances and novel applications to nanostructures}, Phys. Status Solidi (b), 243 (2006), pp. 1063-1079.
[14] R. M. Larsen, {\it Lanczos Bidiagonalization with Partial Reorthogonalization}, Technical Report DAIMI PB-357, Aarhus University, 1998.
[15] R. B. Lehoucq, {\it Implicitly restarted Arnoldi methods and subspace iteration}, SIAM J. Matrix Anal. Appl., 23 (2001), pp. 551-562. · Zbl 1004.65044
[16] R. B. Lehoucq, D. C. Sorensen, and C. Yang, {\it ARPACK Users’ Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods}, Software Environ. Tools 6, SIAM, Philadelphia, 1998. · Zbl 0901.65021
[17] K. Levenberg, {\it A method for the solution of certain non-linear problems in least squares}, Quart. Appl. Math., 2 (1944), pp. 164-168. · Zbl 0063.03501
[18] Z. Lin, M. Chen, L. Wu, and Y. Ma, {\it The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices}, UIUC Technical report UILU-ENG-09-2215 (2009).
[19] X. Liu, Z. Wen, and Y. Zhang, {\it Limited memory block Krylov subspace optimization for computing dominant singular value decompositions}, SIAM J. Sci. Comput., 35 (2013), pp. A1641-A1668. · Zbl 1278.65045
[20] D. W. Marquardt, {\it An algorithm for least-squares estimation of nonlinear parameters}, J. SIAM, 11 (1963), pp. 431-441. · Zbl 0112.10505
[21] B. Recht, M. Fazel, and P. A. Parrilo, {\it Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization}, SIAM Rev., 52 (2010), pp. 471-501. · Zbl 1198.90321
[22] H. Rutishauser, {\it Computational aspects of F. L. Bauer’s simultaneous iteration method}, Numer. Math., 13 (1969), pp. 4-13. · Zbl 0182.21304
[23] H. Rutishauser, {\it Simultaneous iteration method for symmetric matrices}, Numer. Math., 16 (1970), pp. 205-223. · Zbl 0239.65037
[24] Y. Saad, {\it Chebyshev acceleration techniques for solving nonsymmetric eigenvalue problems}, Math. Comp., 42 (1984), pp. 567-588. · Zbl 0539.65013
[25] D. C. Sorensen, {\it Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations}, in Parallel Numerical Algorithms (Hampton, VA, 1994), ICASE/LaRC Interdiscip. Ser. Sci. Eng. 4, Kluwer, Dordrecht, Netherlands, 1996, pp. 119-165. · Zbl 0865.65019
[26] A. Stathopoulos and C. F. Fischer, {\it A Davidson program for finding a few selected extreme eigenpairs of a large, sparse, real, symmetric matrix}, Comput. Phys. Comm., 79 (1994), pp. 268-290. · Zbl 0878.65029
[27] G. W. Stewart, {\it Simultaneous iteration for computing invariant subspaces of non-Hermitian matrices}, Numer. Math., 25 (1975/76), pp. 123-136. · Zbl 0328.65025
[28] W. J. Stewart and A. Jennings, {\it A simultaneous iteration algorithm for real matrices}, ACM Trans. Math. Software, 7 (1981), pp. 184-198. · Zbl 0455.65028
[29] P. T. P. Tang and E. Polizzi, {\it FEAST as a Subspace Iteration Eigensolver Accelerated by Approximate Spectral Projection}, arXiv:1302.0432, 2013. · Zbl 1303.65018
[30] K.-C. Toh and S. Yun, {\it An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems}, Pac. J. Optim., 6 (2010), pp. 615-640. · Zbl 1205.90218
[31] Z. Wen, C. Yang, X. Liu, and Y. Zhang, {\it Trace-penalty minimization for large-scale eigenspace computation}, J. Sci. Comput. (2015), DOI:10.1007/s10915-015-0061-0. · Zbl 1373.65026
[32] Z. Wen and Y. Zhang, {\it Block algorithms with augmented Rayleigh-Ritz projections for large-scale eigenpair computation}, preprint, arXiv:1507.06078, 2015. · Zbl 1463.65073
[33] X. Yuan and J. Yang, {\it Sparse and low-rank matrix decomposition via alternating direction methods}, Pac. J. Optim., 9 (2013), pp. 167-180. · Zbl 1269.90061
[34] Y. Zhou and Y. Saad, {\it A Chebyshev-Davidson algorithm for large symmetric eigenproblems}, SIAM J. Matrix Anal. Appl., 29 (2007), pp. 954-971. · Zbl 1151.65321