Correction to: Constr Approx (2012) 36:267–309 https://doi.org/10.1007/s00365-012-9155-1

1 Introduction

The article [4] had a few typos and mistakes that need to be addressed. The most essential one was pointed out to us by the authors of [2]: [4, Lemma 2.19] was incorrect as stated. Below we provide the corrected statement. The author is grateful to J. Geronimo and P. Iliev for spotting the error. We also fix other minor typos and inaccuracies in [4].

2 Correction to [4, Lemma 2.19]

Lemma 2.1

(Corrected version of [4, Lemma 2.19]) Let A and B be two \(l\times l\) matrices.

There exists a nonnegative definite \(l\times l\) matrix W satisfying

$$\begin{aligned}&WA = B \end{aligned}$$
(2.1)
$$\begin{aligned}&\textrm{Ran}\,W = \textrm{Ran}\,B \end{aligned}$$
(2.2)

if and only if the following three conditions hold:

$$\begin{aligned}&\ker A \subseteq \ker B , \end{aligned}$$
(2.3)
$$\begin{aligned}&A^* B = B^* A \ge 0, \end{aligned}$$
(2.4)
$$\begin{aligned}&\textrm{Ran}\,B \cap \ker (A^*) = \{0\}. \end{aligned}$$
(2.5)

Moreover, the solution is then unique and given by

$$\begin{aligned} W = B(B^* A)^+ B^*, \end{aligned}$$
(2.6)

where \(X^+\) stands for the Moore–Penrose inverse of X.

Remark 1

1. Recall that the Moore–Penrose inverse of X is the unique matrix \(X^+\) (of size \(n\times m\) if X is \(m\times n\)) such that

$$\begin{aligned}&(XX^+)^* = XX^+, \end{aligned}$$
(2.7)
$$\begin{aligned}&(X^+ X)^* = X^+ X, \end{aligned}$$
(2.8)
$$\begin{aligned}&XX^+ X = X, \end{aligned}$$
(2.9)
$$\begin{aligned}&X^+ X X^+ = X^+. \end{aligned}$$
(2.10)

It is uniquely defined for any matrix X, and it coincides with \(X^{-1}\) if X happens to be invertible.

2. Necessary and sufficient conditions (2.3), (2.4), (2.5) for the matrix linear equation (2.1) to have nonnegative definite solutions (without the extra requirement (2.2)) were established by Khatri–Mitra [3]. The only new result in this lemma compared to [3] is that condition (2.2) ensures uniqueness. We provide the full proof here for completeness, with some ideas borrowed from [1], which contains a nice review, generalizations, and further references.

3. Conditions (2.3) and (2.5) appear here in a different but equivalent form compared to the ones in [3]; see the next lemma.
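As a purely numerical illustration (not part of the original argument), the four Penrose conditions (2.7)–(2.10) can be checked for `numpy.linalg.pinv`, which computes the Moore–Penrose inverse; the matrix X below is an arbitrary example:

```python
import numpy as np

# Arbitrary (non-square) example matrix; for real matrices the adjoint is the transpose.
X = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]])
Xp = np.linalg.pinv(X)  # Moore-Penrose inverse via SVD

# The four Penrose conditions (2.7)-(2.10):
assert np.allclose((X @ Xp).T, X @ Xp)   # (2.7)  (X X^+)^* = X X^+
assert np.allclose((Xp @ X).T, Xp @ X)   # (2.8)  (X^+ X)^* = X^+ X
assert np.allclose(X @ Xp @ X, X)        # (2.9)  X X^+ X = X
assert np.allclose(Xp @ X @ Xp, Xp)      # (2.10) X^+ X X^+ = X^+

# For an invertible matrix, X^+ coincides with X^{-1}:
Y = np.array([[2.0, 1.0], [1.0, 1.0]])
assert np.allclose(np.linalg.pinv(Y), np.linalg.inv(Y))
```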

Lemma 2.2

Let A and B be two \(l\times l\) matrices.

(i) \(\ker A \subseteq \ker B\) if and only if \(BA^+ A= B\).

(ii) \(\textrm{Ran}\,B \cap \ker (A^*) = \{0\}\) if and only if \({\text {rank}}(A^* B) = {\text {rank}}B\).

Proof

(i) That \(BA^+ A = B\) implies \(\ker B = \ker [BA^+ A] \supseteq \ker A\) is trivial. Conversely, suppose \(\ker A \subseteq \ker B\). It is well known (see (2.8) and (2.10)) that \(A^+ A\) is the orthogonal projection onto \(\textrm{Ran}\,A^* = (\ker A)^\perp \). Therefore \(BA^+ A = B\) holds on \((\ker A)^\perp \). For \(v\in \ker A\) we also have \(v\in \ker B\), so that both \(BA^+ A v=0\) and \(Bv=0\). So indeed \(BA^+ A = B\).

(ii) Both conditions are equivalent to \(\ker (A^* B) = \ker B\). \(\square \)
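Both equivalences of Lemma 2.2 can be illustrated numerically; the singular A and compatible B below are arbitrary examples, not taken from [4]:

```python
import numpy as np

A = np.diag([1.0, 1.0, 0.0])  # singular: ker A = span(e_3)
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])  # B e_3 = 0, so ker A is contained in ker B

# (i): ker A contained in ker B forces B A^+ A = B.
assert np.allclose(B @ np.linalg.pinv(A) @ A, B)

# (ii): Ran B intersects ker A^* trivially here, equivalently rank(A^* B) = rank B.
assert np.linalg.matrix_rank(A.T @ B) == np.linalg.matrix_rank(B)
```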

Proof of Lemma 2.1

Suppose (2.1)–(2.2) has a nonnegative definite solution W. Then \(WAA^+A = BA^+A\). Combined with (2.1) and (2.9), this gives \(B = BA^+A\), which is (2.3) by Lemma 2.2(i). Further, \(W\ge 0\) implies that \(A^* W A = A^* B\) is also nonnegative definite, which is (2.4). Finally, (2.2) implies that \({\text {rank}}(A^*B)={\text {rank}}(A^*WA) = {\text {rank}}(A^*W)={\text {rank}}\,B^* = {\text {rank}}\, B\), which is (2.5) by Lemma 2.2(ii). Here we used that \({\text {rank}}(A^*WA) = {\text {rank}}(A^*W)\), which is equivalent to \(\ker (A^*WA) = \ker (WA)\), and this follows from

$$\begin{aligned} v\in \ker (A^*WA) \Rightarrow A^*WA v =0 \Rightarrow ||W^{1/2} A v ||=0 \Rightarrow W^{1/2}W^{1/2} A v =0 \Rightarrow v\in \ker (WA). \end{aligned}$$

Conversely, suppose (2.3), (2.4), (2.5) hold. Define W as in (2.6). It is nonnegative definite by (2.4). Let us show that it solves (2.1)–(2.2).

By (2.5) and Lemma 2.2(ii) we get \(\ker (A^*B) \subseteq \ker B\), which by Lemma 2.2(i) (applied with \(A^*B\) in place of A) is equivalent to \(B(A^*B)^+A^*B = B\). By (2.4) this can be rewritten as \(B(B^* A)^+ B^* A = B\), which is (2.1). Clearly \(\textrm{Ran}\,W = \textrm{Ran}\,(B(B^* A)^+ B^* )\subseteq \textrm{Ran}\,B\). But \(WA= B\) implies \(\textrm{Ran}\,W \supseteq \textrm{Ran}\,B\), which proves (2.2).

Finally, we need to show uniqueness of W. Since \(WA = B\), W maps \(\textrm{Ran}\,A\) onto \(\textrm{Ran}\,B\), and there it is uniquely determined by \(WA = B\). W is also uniquely determined on \((\textrm{Ran}\,B)^\perp = (\textrm{Ran}\,W)^\perp = (\textrm{Ran}\,W^*)^\perp = \ker W\), where it must vanish. Therefore W is uniquely determined on the space \(\textrm{Ran}\,A + (\textrm{Ran}\,B)^\perp \), whose dimension is

$$\begin{aligned} \dim \textrm{Ran}\,A + \dim (\textrm{Ran}\,B)^\perp - \dim \textrm{Ran}\,A\cap (\textrm{Ran}\,B)^\perp . \end{aligned}$$

Denote \(\dim \ker A = n_A\) and \(\dim \ker B = n_B\). By (2.3), \(n_A \le n_B\). By the rank-nullity theorem \(\dim \textrm{Ran}\,A = l-n_A\), \(\dim (\textrm{Ran}\,B)^\perp =n_B\).

Now, \(\textrm{Ran}\,B = \textrm{Ran}\,WA = \textrm{Ran}\,W\) means that \({\text {rank}}B = {\text {rank}}(WA) = {\text {rank}}A - \dim \textrm{Ran}\,A\cap \ker W\), so that \(\dim \textrm{Ran}\,A\cap (\textrm{Ran}\,B)^\perp = \dim \textrm{Ran}\,A\cap (\ker W) = {\text {rank}}A - {\text {rank}}B = n_B-n_A\). This leads to \(\textrm{Ran}\,A + (\textrm{Ran}\,B)^\perp \) having dimension \(l-n_A + n_B-(n_B-n_A) = l\). So W is uniquely determined on the whole \({\mathbb {C}}^l\). \(\square \)
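The explicit formula (2.6) can also be sanity-checked numerically. In the sketch below (with arbitrarily chosen matrices, not from [4]) we start from a nonnegative definite W with nontrivial kernel and an invertible A, set \(B = WA\) so that (2.1)–(2.2) hold by construction, and verify that (2.6) recovers W:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # invertible (det = 2)
W = np.diag([1.0, 2.0, 0.0])      # W >= 0 with rank 2
B = W @ A                          # (2.1) holds by construction; Ran B = Ran W, i.e. (2.2)

# Conditions (2.3)-(2.5): ker A = ker A^* = {0} here,
# and A^* B = A^* W A is Hermitian nonnegative definite.
AsB = A.T @ B
assert np.allclose(AsB, AsB.T)
assert np.all(np.linalg.eigvalsh(AsB) >= -1e-12)

# Formula (2.6): W = B (B^* A)^+ B^* recovers the weight we started from.
W_rec = B @ np.linalg.pinv(B.T @ A) @ B.T
assert np.allclose(W_rec, W)
```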

The rest of the arguments from [4] do not directly depend on the above lemma; it is only needed when we want to know the exact conditions for the existence of nonnegative definite canonical weights \(w_j\). Notice that (2.6) provides an explicit expression for the canonical weight \(w_j\) in terms of the values of u(z) (or L(z)).

Condition (iii) in [4, Theorem 5.10] should be modified accordingly.

Theorem 2.3

(Corrected [4, Theorem 5.10]) A polynomial L(z) is the perturbation determinant for some Jacobi matrix with \(\textbf{1}-A_nA_n^*=B_n=\textbf{0}\) for all large n if and only if it obeys (3.1) and

(i) L(z) is invertible on \((\overline{\mathbb {D}}\setminus \mathbb {R})\cup \{0\}\);

(ii) all zeros on \(\overline{\mathbb {D}}\cap \mathbb {R}\) are simple;

(iii) for each zero \(z_j\) in \(\mathbb {D}\), the system

$$\begin{aligned}&\frac{z_j}{z_j^{-1}-z_j}{w}_j L(1/{\bar{z}_j})^*=-(z_j-z_j^{-1})\mathop {\textrm{Res}}\limits _{z= z_j} L(z)^{-1}, \end{aligned}$$
(2.11)
$$\begin{aligned}&\textrm{Ran}\,{w}_j=\textrm{Ran}\,\mathop {\textrm{Res}}\limits _{z= z_j} L(z)^{-1} \end{aligned}$$
(2.12)

has a unique nonnegative definite solution \({w}_j\) (see Lemma 2.1);

(iv) \(L(0)=\textbf{1}\).

In other words, for (iii) to hold it is necessary and sufficient to have (2.3), (2.4), (2.5) with \(A=\frac{z_j}{z_j^{-1}-z_j} L(1/{\bar{z}_j})^*\) and \(B=-(z_j-z_j^{-1})\mathop {\textrm{Res}}\limits _{z= z_j} L(z)^{-1}\), see Lemma 2.1.

3 Symmetry Requirement in [4, Theorems 3.4, 3.6, 3.7, 5.1, 5.2, 5.8, 5.9, 5.10, 5.11]

Each of [4, Theorems 3.4, 3.6, 3.7, 5.1, 5.2, 5.8, 5.9, 5.10, 5.11] requires a symmetry condition that was not explicitly written. Specifically, we must require u (and L in [4, Theorems 5.8, 5.9, 5.10, 5.11]) to satisfy

$$\begin{aligned} u(1/\bar{z})^* u(z) = \left( u(1/{z})^* u(\bar{z})\right) ^* \end{aligned}$$
(3.1)

in the appropriate region.

Indeed, recall that in the inverse problem ([4, Sect. 5]) one has \(M(z)= -\int _{\mathbb {R}}\frac{d\mu (x)}{x-(z+z^{-1})}\) for \(z\in {\mathbb {D}}\), where \(\mu \) is constructed using \(\tfrac{d\mu }{dx}(2\cos \theta ) = \pi ^{-1} |\sin \theta |\left[ u(e^{i\theta })^* u(e^{i\theta })\right] \) with \(0<\theta <\pi \). Then we would like to continue M meromorphically through \(\partial {\mathbb {D}}\) to a neighbourhood of \({\mathbb {D}}\) using

$$\begin{aligned} M(z)=M(1/\bar{z})^*+(z-z^{-1})\left[ u(1/\bar{z})^*u(z)\right] ^{-1}, \quad 1<|z|<R \end{aligned}$$

(see [4, p. 294]) and analyticity of u. This defines a meromorphic function on \(\{z:1<|z|<R\}\) which by construction is meromorphic (in fact, analytic, since u does not vanish there) on \(\left\{ z=e^{i\theta }:0<\theta <\pi \right\} \). However, in order for this continuation to be meromorphic on \(\left\{ z=e^{i\theta }:\pi<\theta <2\pi \right\} \), we must require (3.1). The rest of the proofs go through without any change.

Notice that (3.1) automatically holds in the direct problem since M trivially satisfies \(M(\bar{z}) = M(z)^*\).

4 Other Minor Typos

Here we collect a few typos that need to be addressed. These are only typos; the actual arguments are correct.

Estimates for \(||g_{n+1}(z)||\) and \(||c_{n+1}(z)||\) on [4, p. 285] have an extra \(\prod _{j=1}^n\) sign and should read

$$\begin{aligned} ||g_{n+1}(z)|| \le{}& ||A_{n+1}^{-1}||\left[ (n+1) (||B_{n+1}||+||\textbf{1}-A_{n+1}A_{n+1}^*||)+1 \right] \\ &\times \prod _{j=1}^n ||A_j^{-1}||\, \prod _{j=1}^n \left[ 1+j ( ||B_j||+||\textbf{1}-A_jA_j^*||)\right] \end{aligned}$$

and

$$\begin{aligned} ||c_{n+1}(z)|| \le{}& ||A_{n+1}^{-1}|| \left[ (n+1) (1+||B_{n+1}||) +1 \right] \\ &\times \prod _{j=1}^n ||A_j^{-1}||\, \prod _{j=1}^n \left[ 1+j ( ||B_j||+||\textbf{1}-A_jA_j^*||)\right] \\ \le{}& (n+2)\,\prod _{j=1}^{n+1} ||A_j^{-1}|| \, \prod _{j=1}^{n+1} \left[ 1+j ( ||B_j||+||\textbf{1}-A_jA_j^*||)\right] , \end{aligned}$$

respectively.

Estimate [4, Eq. (4.16)] should have \(\left[ \max (1,r)\right] \) raised to the power 2 instead of 2n; the corrected version is

$$\begin{aligned} ||g_{n+1}(z)-g_n(z)|| ={}& ||A_{n+1}^{-1}\left( z^2 \left( \textbf{1}-A_{n+1}A_{n+1}^*\right) -z B_{n+1}\right) c_n(z) +\left( A_{n+1}^{-1}-\textbf{1}\right) g_n(z)|| \\ \le{}& \left[ \sup _j||A_j^{-1}|| \, \left[ \max (1,r)\right] ^{2}\, \left( ||B_n||+||\textbf{1}-A_nA_n^*||\right) + ||\textbf{1}-A_{n+1}^{-1}||\right] \\ &\times \sup _{n\in {\mathbb {N}}, z\in K}\left( ||c_n(z)||+||g_n(z)||\right) . \end{aligned}$$

The corresponding correction should be made in [4, Eq. (4.29)].