
Iterative Krylov methods for large linear systems. (English) Zbl 1023.65027

Cambridge Monographs on Applied and Computational Mathematics. 13. Cambridge: Cambridge University Press. xiii, 221 p. (2003).
The conjugate gradient (cg) method and its variants for unsymmetric matrices are at the center of the book. Chapters 1-2 contain an introduction and the basic tools from linear algebra. Chapters 3-5 present the conjugate gradient algorithm after a more general discussion of Krylov space methods. Here there is already a clear departure from the standard introduction of the cg method: usually the discussion is based on the three-term recurrence relations for the vectors, while the specifications are given here in terms of matrices. If \(V_j\) denotes the matrix whose columns are the first \(j\) iterates, then \[ A V_{m-1} = V_m H_{m,m-1}, \] where \(H_{m,m-1}\) is upper Hessenberg. Similarly, \[ A U_m = U_m B_m + u_{m+1}e_m^T, \] where the columns of \(U_m\) are the vectors that generate the \(m\)-th Krylov space.
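For readers who want to see this matrix formulation in action, the following minimal sketch (in Python with NumPy; the function name arnoldi and all variable names are illustrative choices of this review, not notation from the book) builds an orthonormal basis \(V_m\) and the upper Hessenberg matrix \(H_{m,m-1}\) by the Arnoldi process, so that the relation \(A V_{m-1} = V_m H_{m,m-1}\) can be checked numerically, e.g. via np.allclose(A @ V[:, :-1], V @ H).

    import numpy as np

    def arnoldi(A, r0, m):
        # Orthonormal basis V of the Krylov space spanned by r0, A r0, ..., A^{m-1} r0,
        # together with the m x (m-1) upper Hessenberg matrix H, so that
        # A @ V[:, :m-1] = V @ H up to rounding (the relation A V_{m-1} = V_m H_{m,m-1}).
        n = len(r0)
        V = np.zeros((n, m))
        H = np.zeros((m, m - 1))
        V[:, 0] = r0 / np.linalg.norm(r0)
        for j in range(m - 1):
            w = A @ V[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt orthogonalisation
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] == 0.0:              # lucky breakdown: invariant subspace found
                return V[:, :j + 1], H[:j + 2, :j + 1]
            V[:, j + 1] = w / H[j + 1, j]
        return V, H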
Chapters 6-9 are concerned with the algorithms for unsymmetric matrices, such as GMRES, GMRES*, MINRES, QMR, CGS, Bi-CG, and Bi-CGSTAB. Chapters 10-12 deal with singular systems and miscellaneous aspects. The book concludes with a chapter on preconditioning, where the author focuses on ILU (incomplete LU decomposition) and SPAI (sparse approximate inverses). Moreover, it is explained that polynomial preconditioners only make sense if the computation of inner products is very expensive.
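To illustrate the last remark, here is a minimal sketch (again Python with NumPy; the name neumann_apply, the parameter degree, and the choice of a truncated Neumann series are illustrative assumptions of this review, not taken from the book) of applying a polynomial preconditioner to a vector: it uses only matrix-vector products and no inner products, which is precisely the regime in which such preconditioners can pay off.

    import numpy as np

    def neumann_apply(A, v, degree=3):
        # Truncated Neumann-series polynomial preconditioner:
        #   M^{-1} v  ~=  (I + N + N^2 + ... + N^degree) v,   with N = I - A.
        # Only matrix-vector products are required, no inner products.
        # Sketch only: assumes A has been scaled so that the series converges.
        z = v.copy()
        w = v.copy()
        for _ in range(degree):
            w = w - A @ w          # w <- (I - A) w
            z = z + w              # accumulate the partial Neumann sum
        return z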
The guiding thread of the book is formed by computational aspects that cannot be presented in terms of lemmas and theorems. When, for instance, the given matrix is skew-symmetric, it is clear why a breakdown can occur and how it can be avoided by letting the odd steps differ from the even steps. Obviously, there are no easy generalizations in other cases, and a possible breakdown must be overcome by appropriate features of the algorithms. Therefore, numerical examples and their consequences are discussed in several chapters.
The book is useful and a source of valuable information, in particular for those readers who already have some basic knowledge of the conjugate gradient method and who are going to tackle more involved problems with linear equations.

MSC:

65F10 Iterative numerical methods for linear systems
65-02 Research exposition (monographs, survey articles) pertaining to numerical analysis
65F35 Numerical computation of matrix norms, conditioning, scaling

Software:

Bi-CG; LSQR
Full Text: DOI