Matrices. Algebra, analysis and applications. (English) Zbl 1337.15002

Hackensack, NJ: World Scientific (ISBN 978-981-4667-96-8/hbk; 978-981-4667-98-2/ebook). xii, 582 p. (2016).
In its preface, the book is described as “a very personal selection of topics in matrix theory that the author was actively working on in the past 40 years.” As well as certain classical results, it includes material that was previously available only in the primary literature, much of it the author’s own work. The scope of the book is best conveyed by outlining a sample of the less familiar topics covered.
One theme concerns matrices whose entries are analytic functions of one or more complex variables. In the introductory chapter, the author defines classes of integral domains with desirable properties, including elementary divisor domains (EDDs). The defining property of an EDD is that every ideal generated by three elements, say \(a\), \(b\) and \(c\), can be generated by a single element of the form \((px)a+(py)b+(qy)c\) for suitable elements \(p\), \(q\), \(x\) and \(y\) of the domain. An EDD is clearly a Bézout domain (every finitely generated ideal is principal) but, contrary to a statement by the author, the converse is known to be false (see [L. Gillman and M. Henriksen, Trans. Am. Math. Soc. 82, 366–391 (1956; Zbl 0073.09201)]). An important example of an EDD is the ring \(H(\Omega)\) of analytic functions on an open connected subset \(\Omega\) of the complex plane. This means that the Smith normal form is available for matrices over \(H(\Omega)\), and two matrices over \(H(\Omega)\) of the same size are equivalent if and only if they have the same rank and the same invariant factors. Questions such as whether pointwise solvability of a system of linear equations with coefficients in \(H(\Omega)\) implies the existence of a solution over \(H(\Omega)\), or what conditions force two matrices over \(H(\Omega)\) to be similar, arise naturally.
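To make the invariant factors concrete, here is a minimal sketch using SymPy's smith_normal_form. It works over \(\mathbb{Z}\) (also an elementary divisor domain) rather than \(H(\Omega)\), simply because SymPy can compute the normal form there directly; the matrix is an arbitrary illustrative example, not one from the book.

```python
# Sketch: invariant factors via the Smith normal form over the EDD Z.
# (The review's setting is H(Omega); Z is used here only because SymPy
# computes the normal form over it directly.)
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[ 2, 4,  4],
            [-6, 6, 12],
            [10, 4, 16]])

# Result is diag(d1, d2, d3) with d1 | d2 | d3; here the invariant
# factors are 2, 2, 156, and any matrix equivalent to A has the same ones.
print(smith_normal_form(A, domain=ZZ))
```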
It is well known that, for matrices over a field, an equation of the type \(AX-XB=C\) (\(A\), \(B\) square matrices and \(X\), \(C\) rectangular) holds if and only if \((I\otimes A-B^{T}\otimes I)\mathrm{vec}(X)=\mathrm{vec}(C)\), where \(\mathrm{vec}(X)\) is a column vector obtained by stacking the columns of \(X\). It is less well known that if \(A\) and \(B\) are matrices of the same size, then \(A\) is similar to \(B\) if and only if the matrices \(I\otimes A-A^{T}\otimes I\), \(I\otimes A-B^{T}\otimes I\) and \(I\otimes B-B^{T}\otimes I\) all have the same rank. A rather subtle theorem of W. E. Roth [Proc. Am. Math. Soc. 3, 392–396 (1952; Zbl 0047.01901)] states that the matrices \[ \begin{pmatrix} A & C\\ 0 & B \end{pmatrix} \text{ and } \begin{pmatrix} A & 0\\ 0 & B \end{pmatrix} \] are similar if and only if \(AX-XB=C\) is solvable. Suppose that two \(n\times n\) matrices \(A(x)\), \(B(x)\) over \(H(\Omega)\) are similar: \(B(x)=P^{-1}(x)A(x)P(x)\) at each point \(x\in\Omega\). This can be pointwise (there is no analytic condition on \(P(x)\)), rational (when the entries of \(P(x)\) are meromorphic functions on \(\Omega\)) or analytic (the entries of \(P(x)\) are analytic). It is a natural problem to look for conditions under which pointwise \(\implies\) rational \(\implies\) analytic, and theorems of this type can be proved using the criteria above.
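Both criteria are easy to probe numerically. The following sketch (random matrices, NumPy/SciPy; nothing here is code from the book) checks the rank criterion on a pair of similar matrices, solves \(AX-XB=C\) with SciPy's solve_sylvester, and verifies the vec identity.

```python
# Sketch: the Kronecker rank criterion for similarity, Roth's theorem,
# and the vec identity, on random matrices (not examples from the book).
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 4
I = np.eye(n)

# Rank criterion: A ~ B iff the three Kronecker matrices have equal rank.
A = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))
B = np.linalg.inv(P) @ A @ P            # similar to A by construction
ranks = [np.linalg.matrix_rank(np.kron(I, X) - np.kron(Y.T, I))
         for X, Y in ((A, A), (A, B), (B, B))]
print(ranks)                            # three equal ranks

# Roth's theorem: AX - XB = C is solvable here, since the independent
# random A and B2 almost surely share no eigenvalues; hence the two
# block matrices in Roth's theorem are similar.
B2 = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
X = solve_sylvester(A, -B2, C)          # solves A@X + X@(-B2) = C
print(np.allclose(A @ X - X @ B2, C))   # True

# vec identity, with vec = column stacking (Fortran order).
lhs = (np.kron(I, A) - np.kron(B2.T, I)) @ X.flatten(order="F")
print(np.allclose(lhs, C.flatten(order="F")))  # True
```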
Consider a matrix pencil \(A(x_{0},x_{1})=A_{0}x_{0}+A_{1}x_{1}\), where \(A_{0}\), \(A_{1}\) are square complex matrices and \(x_{0}\), \(x_{1}\) are indeterminates. T. S. Motzkin and O. Taussky [Trans. Am. Math. Soc. 73, 108–114 (1952; Zbl 0048.00905)] were interested in the eigenvalues of sums of matrices and introduced what they called Property L. The pencil \(A(x_{0},x_{1})\) has Property L if its characteristic polynomial splits into linear factors over \(\mathbb{C}[x_{0},x_{1}]\). Using methods from analysis and algebraic geometry, they proved that if \(A(\xi_{0},\xi_{1})\) is diagonalizable for all \(\xi_{0},\xi_{1}\in\mathbb{C}\), then \(A(x_{0},x_{1})\) has Property L and \(A_{0}\) and \(A_{1}\) are simultaneously diagonalizable over \(\mathbb{C}\). This theorem and some related results are given in the book under review.
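The easy direction of this circle of ideas can be seen numerically: for a pencil built from simultaneously diagonalizable matrices, the eigenvalues of \(A(\xi_{0},\xi_{1})\) really are linear functions of \((\xi_{0},\xi_{1})\). A minimal sketch (the construction is ad hoc, not an example from the book):

```python
# Sketch: Property L for a pencil that is simultaneously diagonalizable
# by construction (ad hoc example, not from the book).
import numpy as np

rng = np.random.default_rng(1)
n = 3
lam = np.array([1.0, 2.0, 3.0])       # eigenvalues of A0
mu  = np.array([-1.0, 0.5, 4.0])      # eigenvalues of A1, paired with lam
P = rng.standard_normal((n, n))
Pinv = np.linalg.inv(P)
A0 = P @ np.diag(lam) @ Pinv
A1 = P @ np.diag(mu) @ Pinv           # A0 and A1 commute by construction

# Property L: the eigenvalues of xi0*A0 + xi1*A1 are the linear
# functions xi0*lam[i] + xi1*mu[i] for every choice of (xi0, xi1).
xi0, xi1 = 0.7, -2.3
got  = np.sort_complex(np.linalg.eigvals(xi0 * A0 + xi1 * A1))
want = np.sort_complex(xi0 * lam + xi1 * mu + 0j)
print(np.allclose(got, want))         # True
```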
The chapter on inner product spaces deals with familiar topics on Hermitian and symmetric matrices and their eigenvalues, the singular value decomposition and the Moore-Penrose inverse. This leads to consideration of inequalities between eigenvalues and singular values, the construction of low rank matrix approximations to a matrix, and how the eigenvalues of sums of Hermitian matrices relate to the eigenvalues of the individual matrices. This topic is revisited in the next chapter where the Golden-Thompson inequality \(\operatorname{tr}e^{A+B}\leq \operatorname{tr}(e^{A}e^{B})\) is proved for Hermitian matrices.
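The inequality is also pleasant to spot-check. A sketch that samples random Hermitian matrices and verifies \(\operatorname{tr}e^{A+B}\leq \operatorname{tr}(e^{A}e^{B})\) with SciPy's expm (both traces are real here, since \(e^{A}\) and \(e^{B}\) are positive definite):

```python
# Sketch: spot-checking the Golden-Thompson inequality
# tr e^{A+B} <= tr(e^A e^B) on random Hermitian matrices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 5

def random_hermitian():
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

for _ in range(200):
    A, B = random_hermitian(), random_hermitian()
    lhs = np.trace(expm(A + B)).real
    rhs = np.trace(expm(A) @ expm(B)).real
    assert lhs <= rhs + 1e-9          # held on every sample
print("Golden-Thompson held on all samples")
```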
The final two chapters consider non-negative matrices (the Perron-Frobenius theorem and finite Markov chains), norms of vectors and matrices, and numerical ranges. A section in the last chapter considers the inverse eigenvalue problem for non-negative matrices: When is a list \(\lambda_{1},\dots,\lambda_{n}\) of (not necessarily distinct) complex numbers the list of eigenvalues of some \(n\times n\) non-negative primitive matrix? M. Boyle and D. Handelman [Ann. Math. (2) 133, No. 2, 249–316 (1991; Zbl 0735.15005)] have conjectured a set of necessary and sufficient conditions on \(\lambda_{1},\dots,\lambda_{n}\), but the problem remains open. The solution presented in the book for the special case \(n=3\) illustrates some of the difficulties.
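On the Perron-Frobenius side, the basic phenomenon is easy to exhibit: for an entrywise positive (hence primitive) matrix, power iteration converges to a positive eigenvector whose eigenvalue is the spectral radius. A minimal sketch with a random matrix (not an example from the book):

```python
# Sketch: Perron-Frobenius via power iteration on an entrywise
# positive (hence primitive) random matrix.
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((4, 4)) + 0.1          # all entries positive

v = np.ones(4)
for _ in range(200):
    v = A @ v
    v /= np.linalg.norm(v)
rho = v @ A @ v / (v @ v)             # Rayleigh quotient at the limit

print(np.all(v > 0))                  # Perron vector is entrywise positive
print(np.isclose(rho, max(abs(np.linalg.eigvals(A)))))  # rho = spectral radius
```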
Each chapter of the book concludes with a brief section of bibliographic information and references to the original literature. Throughout the book there are numerous well-constructed exercises for the reader to solve. In spite of this, the reviewer finds the book difficult to read. The author does not explain what is of central importance and what is secondary, or why he chooses to include a particular topic. Even when there is an extensive literature on a topic, this is not reflected in the text or the bibliographic notes. There are pages of material on the exponential of a matrix, but an unsuspecting reader will not learn that there is a fundamental link with Lie theory or that the Golden-Thompson inequality is important in physics and random matrix theory [P. J. Forrester and C. J. Thompson, J. Math. Phys. 55, No. 2, 023503, 12 p. (2014; Zbl 1308.15018)]. Similarly, the extensive discussion of vector majorization does not mention its applications in statistics and other disciplines [A. W. Marshall and I. Olkin, Inequalities: theory of majorization and its applications. New York etc.: Academic Press (1979; Zbl 0437.26007)]. The book contains interesting material for a reader who is willing to dig for it, so it is a pity that it is not written in a more inviting style.

MSC:

15-02 Research exposition (monographs, survey articles) pertaining to linear algebra
15A21 Canonical forms, reductions, classification
15A42 Inequalities involving eigenvalues and eigenvectors
15A54 Matrices over function rings in one or more variables
15A75 Exterior algebra, Grassmann algebras
15A60 Norms of matrices, numerical range, applications of functional analysis to matrix theory
15A69 Multilinear algebra, tensor calculus
15B48 Positive matrices and their generalizations; cones of matrices
15A24 Matrix equations and identities
15A22 Matrix pencils
15A63 Quadratic and bilinear forms, inner products
15A09 Theory of matrix inversion and generalized inverses
15A16 Matrix exponential and similar functions of matrices