Sharp 2-norm error bounds for LSQR and the conjugate gradient method. (English) Zbl 1461.65037

Summary: We consider the iterative method LSQR for solving \(\min_x \|Ax-b\|_2\). LSQR is based on the Golub-Kahan bidiagonalization process and at every step produces an iterate that minimizes the norm of the residual vector over a Krylov subspace \(\mathcal{K}_k\). The 2-norm of the error is known to decrease monotonically, although it is not minimized over \(\mathcal{K}_k\). Given a lower bound on the smallest singular value of \(A\), we show that in exact arithmetic the solution lies in the interior of a certain ellipsoid and that the LSQR iterate lies on the boundary of this ellipsoid. We use this result to derive new 2-norm error bounds for LSQR. Although our bounds are not much smaller than the existing ones, we show that they are sharp in the following sense: if the only information we use is our lower bound on \(\sigma_{\min}(A)\) plus the information gained by running \(k\) steps of LSQR, then our bounds cannot be improved. We also show how to choose a point with an error bound smaller than our corresponding bound for the LSQR error, although its true error is not necessarily smaller than the true LSQR error. As LSQR is formally equivalent to the conjugate gradient (CG) method applied to the normal equations \(A^TAx = A^Tb\), we derive analogous error bounds for CG. Our bounds for CG apply to any system \(Ax=b\) where \(A\) is symmetric positive definite.
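Two facts from the summary can be checked numerically: the 2-norm error of LSQR decreases monotonically, and a lower bound on \(\sigma_{\min}(A)\) yields a computable error bound. The sketch below is not the paper's algorithm; it runs CGLS (CG applied to the normal equations without forming \(A^TA\), equivalent to LSQR in exact arithmetic) and checks the classical crude bound \(\|x_*-x_k\|_2 \le \|A^T(b-Ax_k)\|_2/\sigma_{\min}(A)^2\), which the paper's sharp bounds improve upon. All names and the test problem are illustrative assumptions.

```python
import numpy as np

def cgls(A, b, iters):
    """CGLS: CG on the normal equations A^T A x = A^T b, implemented
    without forming A^T A. Returns the iterates x_0, ..., x_iters."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x            # least-squares residual b - A x
    s = A.T @ r              # normal-equations residual A^T (b - A x)
    p = s.copy()
    gamma = s @ s
    xs = [x.copy()]
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        xs.append(x.copy())
    return xs

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))   # illustrative dense test problem
b = rng.standard_normal(50)
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
sigma_min = np.linalg.svd(A, compute_uv=False).min()

errors = []
for x in cgls(A, b, 20):
    err = np.linalg.norm(x - x_star)
    # crude bound: ||e_k|| <= ||A^T r_k|| / sigma_min(A)^2
    bound = np.linalg.norm(A.T @ (b - A @ x)) / sigma_min**2
    assert err <= bound + 1e-8
    errors.append(err)

# the 2-norm error decreases monotonically (up to roundoff)
assert all(errors[i + 1] <= errors[i] + 1e-12 for i in range(len(errors) - 1))
```

The paper's contribution is a bound that is provably sharp given only the lower bound on \(\sigma_{\min}(A)\) and the quantities produced by \(k\) LSQR steps; the ratio bound above merely illustrates the kind of a posteriori estimate being sharpened.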

MSC:

65F10 Iterative numerical methods for linear systems
65F20 Numerical solutions to overdetermined systems, pseudoinverses
65F50 Computational methods for sparse matrices

Software:

CRAIG; SparseMatrix; LSQR

References:

[1] T. Davis and Y. Hu, The University of Florida Sparse Matrix Collection, ACM Trans. Math. Software, 38 (2011), pp. 1:1-1:25. · Zbl 1365.65123
[2] R. Estrin, D. Orban, and M. Saunders, Euclidean-norm error bounds for SYMMLQ and CG, SIAM J. Matrix Anal. Appl., 40 (2019), pp. 235-253. · Zbl 1409.65015
[3] R. Estrin, D. Orban, and M. Saunders, LNLQ: An iterative method for least-norm problems with an error minimization property, SIAM J. Matrix Anal. Appl., 40 (2019), pp. 1102-1124. · Zbl 1435.65050
[4] R. Estrin, D. Orban, and M. Saunders, LSLQ: An iterative method for linear least-squares with an error minimization property, SIAM J. Matrix Anal. Appl., 40 (2019), pp. 254-275. · Zbl 1451.65028
[5] D. Faddeev and V. Faddeeva, Computational Methods of Linear Algebra, Freeman, San Francisco, 1963.
[6] D. Fong and M. Saunders, LSMR: An iterative algorithm for sparse least squares problems, SIAM J. Sci. Comput., 33 (2011), pp. 2950-2971. · Zbl 1232.65052
[7] A. Frommer, K. Kahl, T. Lippert, and H. Rittich, 2-norm error bounds and estimates for Lanczos approximations to linear systems and rational matrix functions, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 1046-1065. · Zbl 1283.65020
[8] G. Golub and W. Kahan, Calculating the singular values and pseudo-inverse of a matrix, SIAM J. Numer. Anal., 2 (1965), pp. 205-224. · Zbl 0194.18201
[9] G. Golub and G. Meurant, Matrices, moments and quadrature II: How to compute the norm of the error in iterative methods, BIT, 37 (1997), pp. 687-705. · Zbl 0888.65050
[10] E. Hallman, Error Estimates for Least-Squares Problems, Ph.D. thesis, University of California, Berkeley, 2019.
[11] E. Hallman and M. Gu, LSMB: Minimizing the backward error for least-squares problems, SIAM J. Matrix Anal. Appl., 39 (2018), pp. 1295-1317. · Zbl 1416.65102
[12] M. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand., 49 (1952), pp. 409-436. · Zbl 0048.09901
[13] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Natl. Bur. Stand., 45 (1950), pp. 255-282.
[14] G. Meurant, The computation of bounds for the norm of the error in the conjugate gradient algorithm, Numer. Algorithms, 16 (1997), pp. 77-87. · Zbl 0897.65026
[15] G. Meurant, Estimates of the l2 norm of the error in the conjugate gradient algorithm, Numer. Algorithms, 40 (2005), pp. 157-169. · Zbl 1082.65040
[16] G. Meurant and P. Tichý, On computing quadrature-based bounds for the A-norm of the error in conjugate gradients, Numer. Algorithms, 62 (2013), pp. 163-191. · Zbl 1261.65034
[17] G. Meurant and P. Tichý, Approximating the extreme Ritz values and upper bounds for the A-norm of the error in CG, Numer. Algorithms, 82 (2019), pp. 937-968. · Zbl 1436.65033
[18] C. Paige, Bidiagonalization of matrices and solution of linear equations, SIAM J. Numer. Anal., 11 (1974), pp. 197-209. · Zbl 0244.65023
[19] C. Paige and M. Saunders, Solution of sparse indefinite systems of linear equations, SIAM J. Numer. Anal., 12 (1975), pp. 617-629. · Zbl 0319.65025
[20] C. Paige and M. Saunders, A Bidiagonalization Algorithm for Sparse Linear Equations and Least-Squares Problems, Tech. Report SOL-78-19, Stanford University, Stanford, CA, 1978.
[21] C. Paige and M. Saunders, LSQR: An algorithm for sparse linear equations and sparse least squares, ACM Trans. Math. Software, 8 (1982), pp. 43-71. · Zbl 0478.65016
[22] M. Saunders, Solution of sparse rectangular systems using LSQR and CRAIG, BIT, 35 (1995), pp. 588-604. · Zbl 0844.65029
[23] P. Tichý, A New Algorithm for Computing Quadrature-Based Bounds in Conjugate Gradients, http://www.cs.cas.cz/tichy/download/present/2014Spa.pdf, 2014. · Zbl 1298.65062