Abstract
In this manuscript we give an extension of the classic Salem–Zygmund inequality to locally sub-Gaussian random variables. As an application, we study the concentration of the roots of a Kac polynomial, which is the main contribution of this manuscript. More precisely, we assume the existence of the moment generating function of the iid random coefficients of the Kac polynomial and prove that there exists an annulus of width
around the unit circle that does not contain roots with high probability. As another application, we show that the smallest singular value of a random circulant matrix is at least \(n^{-\rho }\), \(\rho \in (0,1/4)\), with probability \(1-\text {O}( n^{-2\rho })\).
1 Introduction
A classical problem in Harmonic Analysis is the quantification of the magnitude of the modulus of a trigonometric polynomial on the unit circle. Erdös [10] studied the trigonometric polynomial \(T_n(x)=\sum _{j=0}^{n-1} \alpha _j e^{ijx}\), \(x\in [0,2\pi ]\), for choices of signs \(\pm 1\) for all \(\alpha _j\), and estimated how large \(\left| T_n(x) \right|\) can be for \(x\in [0,2\pi )\). Salem and Zygmund [30] proved that almost all choices of signs satisfy
Inequalities of type (1) are known as Salem–Zygmund inequalities. Different versions of the Salem–Zygmund inequality appear in many areas of modern analysis; see [8]. In a probabilistic context, the common version of the Salem–Zygmund inequality is usually established when the coefficients \(\alpha _0,\ldots ,\alpha _{n-1}\) of \(T_n\) are iid sub-Gaussian random variables; see Chapter 6 in [14]. In the present manuscript, we give an extension of the Salem–Zygmund inequality to locally sub-Gaussian random coefficients. This extension allows us to study the localization of the roots of a random Kac polynomial and the singularity probability of a random circulant matrix.
1.1 Roots of random trigonometric polynomials
The study of the roots of a polynomial is an old topic in Mathematics. There are formulas to compute the roots for polynomials of degree 2, degree 3 (Tartaglia–Cardano’s formula), degree 4 (Ferrari’s formula), but due to Galois’ work, for a generic polynomial of degree 5 or more it is not possible to find explicit formulas for computing its roots in terms of radicals.
Bloch and Pólya [3] considered a random polynomial with iid Rademacher coefficients (uniform distribution on \(\{-1,1\}\)) and proved that the expected number of real zeros is \(\text{ O }({n}^{{1}/{2}})\). In a series of papers between 1938 and 1939, Littlewood and Offord gave better bounds for the number of real roots of a random polynomial with iid random coefficients in the cases of Rademacher, Uniform\([-1,1]\), and standard Gaussian coefficients [21]. Kac [13] established his famous integral formula for the density of the number of real roots of a random polynomial with iid standard Gaussian coefficients. Those were the first steps in the study of roots of random functions, which nowadays is a relevant part of modern Probability and Analysis. For further details, see [9] and the references therein.
The localization of the roots of a polynomial is in general a hard problem. However, there are relevant results in the theory of random polynomials [2]. For instance, for iid non-degenerate random coefficients with finite logarithmic moment, the roots cluster asymptotically near the unit circle and the arguments of the roots are asymptotically uniformly distributed. More precisely, Ibragimov and Zaporozhets [12] showed that for a Kac polynomial
with (real or complex) iid non-degenerate coefficients satisfying \(\mathbb {E}\left( \log (1+|\xi _0|)\right) <\infty\), its roots are concentrated around the unit circle as \(n\rightarrow \infty\), almost surely. Moreover, they proved that the condition \(\mathbb {E}\left( \log (1+|\xi _0|)\right) <\infty\) is necessary and sufficient for the roots of \(G_n\) to be asymptotically near the unit circle.
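This clustering phenomenon is easy to observe numerically. The following sketch (a hypothetical illustration with Rademacher coefficients, not part of any proof in this manuscript) samples one Kac polynomial and reports how far its roots are from the unit circle:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Coefficients xi_0, ..., xi_{n-1}: iid Rademacher signs (illustrative choice)
xi = rng.choice([-1.0, 1.0], size=n)
# np.roots expects the highest-degree coefficient first
roots = np.roots(xi[::-1])
# Distance of each root to the unit circle
dist = np.abs(np.abs(roots) - 1.0)
print(dist.max(), np.median(dist))
```

For \(\pm 1\) coefficients every root z satisfies \(1/2<|z|<2\), and the typical distance to the unit circle is of order 1/n, consistent with the clustering described above.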
For iid standard Gaussian coefficients of \(G_n\), most of the roots are concentrated in an annulus of width \({1}/{n}\) centered at the unit circle. However, the root nearest to the unit circle lies at distance \(\text{ O }({n^{-2}})\); for further details see [23]. Shepp and Vanderbei [20] conjectured that the last statement holds not only for standard Gaussian coefficients but also for Rademacher coefficients. This conjecture was proved by Konyagin and Schlag [19]. Our Theorem 2.3 establishes that the roots of \(G_n\) stay at distance at least \(\text{ O }({n^{-2}} \left( \log n\right) ^{-1/2-\gamma })\) from the unit circle, for \(\gamma >{1}/{2}\), with probability \(1-\text {O}((\log n)^{-\gamma +1/2})\). Konyagin and Schlag [19] showed that if \(G_n\) has iid Rademacher or standard Gaussian random coefficients, then for all \(\varepsilon >0\) and large n the following expression
holds with probability at least \(1-C\varepsilon\), for some positive constant C. Karapetyan [15, 16] studied the sub-Gaussian case but, to the best of our knowledge, his proof is not complete. Even so, using our extension of the Salem–Zygmund inequality and the notion of least common denominator, which was developed to study the singularity of random matrices [28], we show that for fixed \(t\ge 1\),
with probability at least \(1-\text {O}((\log n)^{-\gamma +1/2})\). The techniques used in the present paper are not the same as those used by Konyagin and Schlag [19]. The main result of Konyagin and Schlag only holds for Rademacher and Gaussian iid random coefficients; they carried out a refined analysis of the characteristic function and applied the so-called circle method. This approach is not straightforward to extend to more general random coefficients, even sub-Gaussian ones or those with a finite moment generating function (mgf for short).
The novelty of this manuscript is the use of the notion of least common denominator to cover more general random coefficients. This approach works for quite general random coefficients; however, the authors are still working on relaxing the assumption of the existence of an mgf. The main obstacle to relaxing this assumption arises in the control of the maximum modulus of the random polynomial over the unit circle under the assumption of a finite \(p-\)moment only. We emphasize that the proof is not a direct consequence of [29], since good estimates of the least common denominator are typically difficult to obtain. We remark that this result and the main result in [19] are, to the best of our knowledge, not direct consequences of the so-called concentration inequalities.
1.2 Random circulant matrices
Recall that an \(n\times n\) complex circulant matrix, denoted by \(\text {circ}(c_0,\ldots ,c_{n-1})\), has the form
where \(c_0,\ldots ,c_{n-1}\in \mathbb {C}\). For \(\xi _0,\ldots ,\xi _{n-1}\) being random variables, we say that
is an \(n\times n\) random circulant matrix. Circulant matrices are very common objects in different areas of mathematics [11, 17, 26]. In particular, they play a crucial role in the study of large-dimensional Toeplitz matrices [5, 31]. In random matrix theory, singularity is one aspect that has been intensively studied in recent years [4, 27, 28]. In the case of random circulant matrices with Rademacher entries, Meckes [22] proved that the probability that a random circulant matrix is singular tends to zero as its dimension grows.
As a consequence of our concentration result for the roots of Kac polynomials, for a random circulant matrix with iid zero-mean entries and finite mgf, it follows that for all fixed \(t\ge 1\) and \(\gamma >1/2\), the smallest singular value \(s_n\left( \mathcal {C}_n\right)\) of \(\mathcal {C}_n\) satisfies
with probability \(1-\text {O}\left( (\log n)^{-\gamma +1/2}\right)\). However, under weaker assumptions (see below the condition (H)), for \(\rho \in (0,1/4)\) we also show
with probability \(1-\text {O}\left( n^{-2\rho }\right)\).
The manuscript is organized as follows. In Sect. 2 we state the main results and their consequences. In Sect. 3 we give the proof of a Salem–Zygmund inequality for random variables with mgf. In Sect. 4, with the help of the Salem–Zygmund inequality and the notion of least common denominator, we prove Theorem 2.3 about the location of the roots of a Kac polynomial. Finally, in Sect. 5 we prove Theorem 2.6, which states that the smallest singular value of a random circulant matrix is relatively large with high probability.
2 Main results
2.1 Salem–Zygmund inequality
Recall that a real-valued random variable \(\xi\) is said to be sub-Gaussian if its mgf is bounded by the mgf of a Gaussian random variable, i.e., there is \(b> 0\) such that
When this condition is satisfied for a particular value of \(b>0\), we say that \(\xi\) is b-sub-Gaussian or sub-Gaussian with parameter b. In particular, it is straightforward to show that the mean of a sub-Gaussian random variable is necessarily equal to zero. For more details see [6] and the references therein.
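For instance, a Rademacher random variable is 1-sub-Gaussian; this follows from the classical term-by-term comparison

```latex
\mathbb{E}\left(e^{t\xi }\right)=\cosh (t)=\sum _{k=0}^{\infty }\frac{t^{2k}}{(2k)!}
\le \sum _{k=0}^{\infty }\frac{1}{k!}\left(\frac{t^{2}}{2}\right)^{k}=e^{t^{2}/2},
\qquad t\in \mathbb{R},
```

which holds since \((2k)!\ge 2^k k!\) for every \(k\ge 0\).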
According to [6], a random variable \(\xi\) is called locally sub-Gaussian when its mgf \(M_{\xi }\) exists in an open interval around zero. Due to this, it is possible to find constants \(\alpha \ge 0\), \(\delta \in (0,\infty ]\) and \(\nu \in \mathbb {R}\) such that
If the mean of \(\xi\) is zero and its variance \(\sigma ^2\) is finite and positive then we can take \(\nu =0\) and \(\alpha ^2>\sigma ^2\) for some \(\delta >0\) as the next lemma states.
Lemma 2.1
(Locally sub-Gaussian r.v.). Let \(\xi\) be a random variable such that its mgf \(M_\xi\) exists in an interval around zero. Assume that \(\mathbb {E}\left( \xi \right) =0\) and \(\mathbb {E}\left( \xi ^2\right) =\sigma ^2>0\). Then there is a positive constant \(\delta\) such that
The preceding lemma is not surprising; see for instance Remark 2.7.9 in [34]. Since its proof is simple, we give it here for completeness.
Proof
Assume that \(M_\xi (t)\) is well-defined for any \(t\in (-\delta _1,\delta _1)\), for some \(\delta _1>0\). Then \(M_\xi (t)\) has derivatives of all orders at \(t=0\). Define \(g(t):=e^{{\alpha ^2 t^2}/{2}}\), for \(t\in \mathbb {R}\). Then \(g(0)=1\), \(g^\prime (0)=0\) and \(g^{\prime \prime }(0)=\alpha ^2\). Let \(h(t):= g(t) - M_\xi (t)\), for all \(t\in (-\delta _1,\delta _1)\). Since \(h^{\prime \prime }\) is continuous and \(h^{\prime \prime }(0)=\alpha ^2-\sigma ^2>0\), there exists \(0<\delta <\delta _1\) such that \(h^{\prime \prime }(t)>0\) for every \(t\in (-\delta ,\delta )\). Therefore, the function h is convex on the interval \((-\delta ,\delta )\). Since \(h^\prime (0)=0\), the point 0 is a minimum of h on \((-\delta ,\delta )\). It follows that \(h(t)\ge h(0)=0\) for every \(t\in (-\delta ,\delta )\), and the result follows.
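To illustrate Lemma 2.1 concretely, take \(\xi = E - 1\) with \(E\sim \text{Exp}(1)\), so that \(\sigma^2=1\) and \(M_\xi(t)=e^{-t}/(1-t)\) for \(t<1\). The choices \(\alpha^2=2\) and \(\delta=1/2\) below are ours, made purely for illustration:

```python
import numpy as np

# Centered exponential xi = Exp(1) - 1 has sigma^2 = 1 and
# mgf M(t) = exp(-t)/(1 - t) for t < 1.  Lemma 2.1 guarantees
# M(t) <= exp(alpha^2 t^2 / 2) near 0 for any alpha^2 > sigma^2;
# here alpha^2 = 2 and delta = 0.5 (hypothetical choices).
t = np.linspace(-0.5, 0.5, 1001)
M = np.exp(-t) / (1.0 - t)
bound = np.exp(t**2)          # exp(alpha^2 t^2 / 2) with alpha^2 = 2
print(bool(np.all(M <= bound)))
```

The comparison fails for larger t (for instance near t = 0.7 with these choices), which is why the bound is only local.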
The classic Salem–Zygmund inequality is usually established for iid sub-Gaussian random variables, but thanks to Lemma 2.1 we are able to extend it to iid locally sub-Gaussian random variables, as stated in Theorem 2.2. Although Theorem 2.2 is interesting on its own, we stress that it is also crucial for the approach used in the proof of our main result, Theorem 2.3.
Before presenting Theorem 2.2, we introduce some useful notation. For simplicity, we use the same notation for the Euclidean norm and the modulus of a complex number. Denote by \(\mathbb {T}\) the unit circle \(\mathbb {R}/(2\pi \mathbb {Z})\). For any bounded function \(f:\mathbb {T}\rightarrow \mathbb {C}\), the infinity norm of f is defined as \(\Vert f\Vert _\infty =\sup \limits _{x\in \mathbb {T}}|f(x)|\), and \({\mathop {=}\limits ^{\mathcal {D}}}\) means “equal in distribution”.
Theorem 2.2
(Salem–Zygmund inequality for locally sub-Gaussian random variables). Let \(\xi\) be a random variable with zero mean and finite positive variance. Assume that the mgf \(M_\xi\) of \(\xi\) exists in an open interval around zero. Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let \(\phi :[0,1] \rightarrow \mathbb {R}\) be a non-zero continuous function. Consider \(W_n(x)=\sum _{j=0}^{n-1} \xi _j \phi ({j}/{n})e^{ijx}\) for any \(x\in \mathbb {T}\). Then, for all large n
where \(C_0\) and \(C_1\) are positive constants that only depend on the mgf of \(\xi\) and the function \(\phi\).
Actually, under the assumption of a finite second moment, a Salem–Zygmund type inequality can be obtained in terms of the expected value of the infinity norm of a random trigonometric polynomial; for more details see [33]. Theorem 2.2 provides an upper bound, in probability, on the infinity norm of a random trigonometric polynomial. Moreover, Theorem 2.2 gives a better bound than Corollary 2 in [33], as we see below.
Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables such that \(\mathbb {E}\left( \xi _0\right) =0\) and \(\mathbb {E}\left( \xi _0^2\right) =\sigma ^2>0\). By Corollary 2 in [33] we have
where C is a universal positive constant. By the Markov inequality we obtain
Note that the upper bound asymptotically equals a positive constant. On the other hand, under the assumptions of Theorem 2.2 we deduce
for all large n, where \(C_0\) and \(C_1\) are positive constants that only depend on the mgf of \(\xi _0\).
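The \(\sqrt{n\log n}\) scale appearing in these bounds is visible in simulation. A minimal sketch, with Rademacher coefficients and \(\phi \equiv 1\) (our hypothetical choices), evaluates \(\Vert W_n\Vert _\infty\) on a fine grid:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
xi = rng.choice([-1.0, 1.0], size=n)
x = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
# W_n(x) = sum_{j=0}^{n-1} xi_j e^{ijx}, evaluated on the grid
W = np.exp(1j * np.outer(x, np.arange(n))) @ xi
sup_norm = np.abs(W).max()
ratio = sup_norm / np.sqrt(n * np.log(n))
print(ratio)
```

The ratio stays bounded as n grows, in line with the \(\sqrt{n\log n}\) upper bound of Theorem 2.2 (note that \(\Vert W_n\Vert _\infty \ge \Vert W_n\Vert _{L^2} = \sqrt{n}\) here, so the ratio is also bounded away from zero).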
2.2 Kac polynomials
To use the concept of least common denominator, we introduce the following condition. We say that a random variable \(\xi _0\) satisfies condition (H) if
The notion of concentration function was introduced by P. Lévy in the context of the study of distributions of sums of random variables. For \(\xi _0\) non-degenerate with zero mean and finite mgf, one can deduce that condition (H) holds for some \(M>0\) and \(q\in (0,1)\). We refer to [32].
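For concreteness, a common form of a Lévy-concentration condition is \(\sup _{u\in \mathbb {R}}\mathbb {P}\left( |\xi _0-u|\le M\right) \le q<1\); whether this matches (H) exactly depends on the displayed condition above, so treat the following as an illustration only. For a standard Gaussian \(\xi _0\) and \(M=1\), the supremum is attained at \(u=0\):

```python
from math import erf, sqrt

# Lévy concentration of a standard Gaussian at scale M = 1:
# sup_u P(|xi_0 - u| <= 1) = P(|xi_0| <= 1) = erf(1/sqrt(2))
q = erf(1.0 / sqrt(2.0))
print(q)  # roughly 0.6827, strictly below 1
```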
The main result of this manuscript is the following theorem.
Theorem 2.3
Let \(\xi\) be a random variable with zero mean and finite positive variance. Assume that the mgf \(M_\xi\) of \(\xi\) exists in an open interval around zero. Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let
Then for any fixed \(t\ge 1\),
where \(\gamma >{1}/{2}\) and the implicit constant in the O-notation depends on t and the mgf of \(\xi\).
Remark 2.4
Observe that all bounded random variables satisfy (H) in Theorem 2.3 (after a suitable scaling). In particular, the Rademacher distribution, i.e., the uniform distribution on \(\{-1,1\}\), and the uniform distribution on the interval \([-1,1]\) satisfy (H).
2.3 Random circulant matrices
It is well-known that any circulant matrix can be diagonalized in \(\mathbb {C}\) using a Fourier basis. Indeed, let \(\omega _n:=\exp \left( i\frac{2\pi }{n}\right)\), \(i^2=-1\), and \(F_n=\frac{1}{\sqrt{n}}(\omega ^{jk}_n)_{0\le j,k\le n-1}\). The matrix \(F_n\) is called the Fourier matrix of order n. Note that \(F_n\) is a unitary matrix. By a straightforward computation it follows
where \(G_n\) is the polynomial given by \(G_n(z):=\sum _{k=0}^{n-1}c_kz^k\). Hence, the eigenvalues of \(\text {circ}(c_0,\ldots ,c_{n-1})\) are \(G_n(1),G_n(\omega _n), \ldots ,G_n(\omega _n^{n-1}),\) or equivalently
Expressions like (3) appear naturally in the study of Fourier transform of periodic functions. For a complete understanding of circulant matrices, we recommend the monograph [7].
In the sequel, we consider an \(n\times n\) random circulant matrix \(\mathcal {C}_n\), i.e., \(\mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1})\), where \(\xi _0,\ldots ,\xi _{n-1}\) are independent random variables. The smallest singular value of the random circulant matrix \(\mathcal {C}_n\) is given by
We remark that in general the smallest singular value is not equal to the smallest eigenvalue modulus. However, since \(\mathcal {C}_n\) is a normal matrix, its singular values are the moduli of its eigenvalues. Thus, the following corollary is a direct consequence of Theorem 2.3.
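This identity between singular values and eigenvalue moduli for circulant matrices admits a quick numerical sanity check. The sketch below (Rademacher entries, our hypothetical choice) compares the singular values of a circulant matrix with the moduli \(|G_n(\omega _n^k)|\), computed via the discrete Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
xi = rng.choice([-1.0, 1.0], size=n)
# Circulant matrix with first column xi: C[j, k] = xi[(j - k) mod n]
C = xi[np.subtract.outer(np.arange(n), np.arange(n)) % n]
s = np.linalg.svd(C, compute_uv=False)
# Eigenvalue moduli |G_n(omega_n^k)| via the DFT (conjugation does not
# change the moduli for real entries)
eig_mod = np.abs(np.fft.fft(xi))
print(np.allclose(np.sort(s), np.sort(eig_mod)), eig_mod.min())
```

Here `eig_mod.min()` is exactly the smallest singular value \(s_n(\mathcal {C}_n)\) studied in the corollary.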
Corollary 2.5
Let \(\xi\) be a random variable with zero mean and finite positive variance. Assume that the mgf \(M_\xi\) of \(\xi\) exists in an open interval around zero. Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let \(\mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1})\) be an \(n\times n\) random circulant matrix and let \(s_n(\mathcal {C}_n)\) be the smallest singular value of \(\mathcal {C}_n\). Then, for all fixed \(t\ge 1\) and \(\gamma > {1}/{2}\) we have
It is possible to weaken the assumptions of Corollary 2.5. Using similar reasoning as in the proof of Theorem 2.3 we obtain the following theorem.
Theorem 2.6
Let \(\xi\) be a non-degenerate random variable which satisfies (H). Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let \(\mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1})\) be an \(n\times n\) random circulant matrix. Then, for each \(\rho \in (0,1/4)\) we have
3 Proof of Theorem 2.2. Salem–Zygmund inequality for locally sub-Gaussian random variables
First, we prove the following claim, which is an important fact used in the proof of Theorem 2.2.
Claim 1: There exists a random interval \(I\subset \mathbb {T}\) of length \({1}/{\rho_n}\) with \(\rho_n={8n}/{3}\) such that
Proof
Let \(p_n(x):=\sum _{j=0}^{n-1} b_j e^{ijx}\), \(x\in \mathbb {T}\) be a trigonometric polynomial on \(\mathbb {T}\), where \(b_0,\ldots ,b_{n-1}\) are real numbers. For \(x\in \mathbb {T}\) write
and
Then
Recall the Bernstein inequality \(\Vert p^{\prime }_n\Vert _\infty \le n\Vert p_n\Vert _\infty\) (see for instance Theorem 14.1.1, Chapter 14, page 508 in [25]). For any \(x\in \mathbb {T}\) we have
Since \(g_n\) is continuous, there exists \(x_0\in \mathbb {T}\) such that \(g_n(x_0)=\Vert g_n\Vert _\infty\). Moreover, by the Mean Value Theorem and relation (8) we obtain
for any \(x\in \mathbb {T}\). Take \(I:=[x_0 - \frac{3}{16n},x_0+\frac{3}{16n}]\subset \mathbb {T}\). Notice that the length of I is \(\frac{3}{8 n}\). The preceding inequality yields
Since \(g_n(x_0)=\Vert g_n\Vert _\infty\), the triangle inequality yields \(({1}/{4}) \Vert g_n\Vert _\infty \le \left| g_n(x)\right|\) for any \(x\in I\). The preceding inequality, together with relations (6) and (7), implies
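The Bernstein inequality \(\Vert p^{\prime }_n\Vert _\infty \le n\Vert p_n\Vert _\infty\) invoked above can be checked numerically on a random example (the coefficients below are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
b = rng.standard_normal(n)
x = np.linspace(0.0, 2.0 * np.pi, 8192, endpoint=False)
E = np.exp(1j * np.outer(x, np.arange(n)))
p = E @ b                          # p_n evaluated on the grid
dp = E @ (1j * np.arange(n) * b)   # p_n' evaluated on the grid
ratio = np.abs(dp).max() / np.abs(p).max()
print(ratio, n)
```

The observed ratio is below n, as Bernstein's inequality predicts (in fact below n - 1, the degree of \(p_n\)).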
Now, we are ready to provide the proof of Theorem 2.2.
Proof of Theorem 2.2
By Lemma 2.1, there exists a \(\delta >0\) such that
For each \(j\in \{0,\ldots ,n-1\}\), define \(f_j(x)=\phi ({j}/{n})e^{ijx}\), \(x\in \mathbb {T}\). Let \(r_n:=\sum _{j=0}^{n-1} |\phi ({j}/{n})|^2\). At first, we suppose that the \(f_j\) are real (we consider only the real part or the imaginary part) and we write \(S_n:=\Vert W_n\Vert _\infty\). Since \(\Vert f_j\Vert _\infty \le \Vert \phi \Vert _\infty =:K\) for every \(j=0,\ldots ,n-1\), we obtain
By Claim 1, there exists a random interval \(I\subset \mathbb {T}\) of length \({1}/{\rho _n}\) with \(\rho _n={8n}/{3}\) such that \(W_n(x)\ge {S_n}/{2}\) or \(-W_n(x) \ge {S_n}/{2}\) on I. Denote by \(\mu\) the normalized Lebesgue measure on \(\mathbb {T}\). Observe that
Then, for every \(t\in (-{\delta }/{K},{\delta }/{K})\) we have
The preceding inequality yields
which implies
Note that \(\lim \limits _{n\rightarrow \infty }\frac{r_n}{n}=\int _{0}^{1}|\phi (x)|^2\mathrm {d}x>0\). By taking \(l_n=cn^2\) where c is a positive constant, we have \(\left| \frac{\log (2\rho _n l_n)}{\alpha ^2 r_n}\right| <{\delta ^2}/{K^2}\) for all large n. By choosing \(t_n=\left( \frac{\log (2\rho _n l_n)}{\alpha ^2 r_n}\right) ^{{1}/{2}}\) we obtain
Since \(f_j=\text{ Re }(f_j)+i\text{ Im }(f_j)\), we get for all large n
and
Finally, since \(\rho _n=\frac{8n}{3}\), the choice \(l_n=\frac{3n^2}{16}\) yields
4 Proof of Theorem 2.3. Localization of the roots for Kac polynomials
The proof is based on the small ball probability for linear combinations of iid random variables introduced by Rudelson and Vershynin in [29]. Throughout the proof, \(\Vert \cdot \Vert _2\) denotes the Euclidean norm, \(|\cdot |\) denotes the modulus of a complex number, and \(\det (\cdot )\) the determinant function on square matrices. We consider the residue modulo \(\pi\) of a real number y, written \(y \mod \pi\), which is defined as the set of numbers x such that \(x-y =k\pi\) for some \(k\in \mathbb {Z}\).
Definition 4.1
(Least common denominator (lcd for short)). Let L be any positive number and let V be any deterministic matrix of dimension \(2\times n\). The least common denominator (lcd) of V is defined as
where \(\mathrm {dist}(v,\mathbb {Z}^n)\) denotes the distance between the vector \(v\in \mathbb {R}^n\) and the set \(\mathbb {Z}^n\), and \(\log _{+}:=\max \{\log ,0\}\).
For more details on the concept of lcd, see Section 7 of [29]. Observe that \(D\left( a V\right) = \left({1}/{|a|}\right) D(V)\) for any \(a\not =0\). Indeed, from the definition of \(D\left( a V\right)\) we have that \(D\left( a V\right) \le \Vert \theta \Vert _2\) for any \(\theta \in \mathbb {R}^2\) such that
Therefore, from the definition of D(V) we deduce \(D(V)\le \Vert a\theta \Vert _2=|a|\Vert \theta \Vert _2\). Since \(a\not =0\), then \(({1}/{|a|})D(V)\le \Vert \theta \Vert _2\). Again, from the definition of D(aV) we deduce that \(({1}/{|a|})D(V)\le D(aV)\). On the other hand, from the definition of D(V) we have that \(D\left( V\right) \le \Vert \theta \Vert _2\) for any \(\theta \in \mathbb {R}^2\) such that
Therefore, from the definition of D(aV) we deduce \(D(aV)\le \Vert {{\theta}/{a}}\Vert _2= {{\Vert \theta \Vert _2}/{|a|}}\). Consequently, \(|a|D(aV)\le \Vert \theta \Vert _2\). Again, from the definition of D(V) we deduce that \(|a|D(aV)\le D(V)\). Putting all these pieces together we obtain the next useful lemma.
Lemma 4.2
For all \(a\ne 0\), the lcd of any matrix \(V\in \mathbb {R}^{2\times n}\) satisfies \(D(V)=|a|D(aV)\).
Let X be a random vector of dimension \(n\times 1\) whose entries are iid satisfying (H). Assume \(\det (VV^T)>0\). For any \(a>0\) and \(t\ge 1\), by Theorem 7.5 (Section 7 in [29]) we have
where \(L\ge \sqrt{{2}/{q}}\) with q given in (H), D(aV) is the least common denominator of aV, and the constant C only depends on M, q. Recall the well-known inequality \((x+y)^2\le 2 x^2+ 2y^2\) for any \(x,y\in \mathbb {R}\). By Lemma 4.2, it follows that \(D(aV)= ({1}/{a})D(V)\) for all \(a>0\). Therefore,
In order to obtain a meaningful upper bound for the left-hand side of the preceding inequality, we need a refined analysis of the following quantities: a lower bound for \(\det (VV^T)\) and a lower bound for D(V). Implicitly, by the definition of D(V), we also need to estimate \(\Vert V^T\theta \Vert _2\) for suitable \(\theta \in \mathbb {R}^2\).
4.1 Small ball probability analysis
The following analysis explains the reason for introducing the concept of the least common denominator, which is a crucial ingredient in the proof of Theorem 2.3. Recall
For \(G_n\), we associate a random trigonometric polynomial
where \(\mathbb {T}\) denotes the unit circle \(\mathbb {R}/(2\pi \mathbb {Z})\). Assume \(n\ge 2\) and \(\gamma >{1}/{2}\). Let \(N=\lfloor n^2 \left( \log n\right) ^{{1}/{2}+\gamma } \rfloor\) and \(x_\alpha ={\alpha }/{N}\) for \(\alpha \in \{0,1,2,\ldots , N-1\}\). Let \(t\ge 1\) be fixed and let \(C_0>0\) be the suitable positive constant being given in Theorem 2.2. Define the following event
where \(W^{\prime }_n\) denotes the derivative of \(W_n\) on \(\mathbb {T}\). For short, we also denote by \(\mathbb {P}\left( A,B\right)\) the probability \(\mathbb {P}\left( A\cap B\right)\) for any two events A and B. Recall
By the Boole–Bonferroni inequality we obtain
Our goal is to show that every probability on the right-hand side of the above expression tends to zero as n tends to infinity.
Using the Bernstein inequality (Theorem 14.1.1 in [25]) and Theorem 2.2 for \(\phi \equiv 1\), for all large n we have
On the other hand, using the Markov inequality we obtain
where the last inequality follows from the following fact: for any \(j\in \{0,\ldots ,n^2\}\) we have
Therefore,
where the implicit constant depends on the distribution of \(\xi _0\) and t. We stress that the rate of convergence in (11) can be improved; however, the dominant term on the right-hand side of (10) is \(\mathbb {P}\left( \mathcal {M}_n, \mathcal {G}_n\right)\).
In the sequel, we analyze the strategy to prove that \(\mathbb {P}(\mathcal {M}_n, \mathcal {G}_n)\) is small. First, we construct a set of closed balls that covers \(\{z\in \mathbb {C}:\left| \left| z \right| -1 \right| \le tn^{-2}\}\). For each closed ball, we reduce the event \(\{\mathcal {M}_n, \mathcal {G}_n\}\) to a “simple event” using Taylor’s Theorem. Finally, we use the concept of lcd to show that the probability of each “simple event” is sufficiently small.
The strategy is to consider a set of balls centered at points on the unit circle with suitable radii. We distinguish two kinds of balls: the special balls centered at \(1+0i\) and \(-1+0i\), where the radius r is large, \(r=2tn^{-11/10}\), and the balls centered at points z with argument satisfying \(n^{-11/10}<\left| \arg (z)\mod \pi \right| < \pi - n^{-11/10}\), with small radius \(r=2t n^{-2}\).
Recall that for any \(x\in \mathbb {R}\), \(\lfloor x \rfloor\) denotes the greatest integer less than or equal to x. Let \(N:= \lfloor n^2\left( \log n \right) ^{{1}/{2}+\gamma } \rfloor\) and \(x_\alpha := \frac{\alpha }{N}\) for \(\alpha =0,1,\ldots ,N-1\). For \(a\in \mathbb {C}\) and \(s>0\), denote by \(\text {B}\left( a, s\right)\) the closed ball with center a and radius s, i.e., \(\text {B}\left( a, s\right) = \left\{ z\in \mathbb {C}: \left| z-a \right| \le s\right\}\). Denote by \(\mathbb {S}^1\) the unit circle. Let
Note that
Let \(t\ge 1\) and observe that
The preceding inclusion yields that any \(z\in \mathcal {A}\) with small argument belongs to one of the balls centered at \(1+0i\) and \(-1+0i\) with radius \(2tn^{-11/10}\). On the other hand, for \(z\in \mathcal {A}\) with large argument we have
We define \([N-1]:=[1,N-1]\cap \mathbb {N}\) and
where \(\gcd (\alpha ,N)\) denotes the greatest common divisor of \(\alpha\) and N. Observe that for any \(\alpha \in J_3(n,N)\) we have
The preceding inequalities yield that the irreducible fraction of \(x_\alpha\) is as small as a multiple of \(n^{-11/10}\). Therefore,
We emphasize that if \(\alpha \in J_1(n,N)\cup J_2(n,N)\cup J_3(n,N)\), then we have
Consequently,
where
4.1.1 Small ball analysis at the points \({\varvec{1+0i}}\) and \({\varvec{-1+0i}}\)
At the two points \(\pm 1+0i\) we place the two largest closed balls of our covering. This is remarkable since the number of real roots of a Kac polynomial for some common random variables is at least \(\text {O}( \frac{\log n}{\log \log \log n})\) with high probability [24]. This means that the real roots of a Kac polynomial approach the unit circle slowly.
On the one hand, let \(z\in \text {B}\left( 1+0i, 2tn^{-{11}/{10}}\right)\). By Taylor’s Theorem we obtain
where \(R_2(z)\) is the error of the Taylor approximation of order 2. On the event \(\mathcal {G}_n\) we have
where \(\text {o}(1) = 2tn^{-1-1/10}\). Assuming that \(\mathcal {G}_n\) holds, the preceding inequality yields
Hence,
where \(2C_2 = 2C_0t +4t^2 + 1\). Since \(G_n(1)=\sum _{j=0}^{n-1} \xi _j\), Corollary 7.6 in [29] implies for \(L\ge \sqrt{1/q}\) (with q given in (H)) that
where \(C_3\) is a positive constant and \(D(\mathbf {a})\) is the lcd of the vector
By Proposition 7.4 in [29] we have \(D(\mathbf {a})\ge \frac{1}{2}n^{1/2-1/10}\left( \log n\right) ^{1/2}\). Therefore,
On the other hand, let \(z\in \text {B}\left( -1+0i, 2tn^{-11/10}\right)\). Assuming that \(\mathcal {G}_n\) holds, Taylor's Theorem implies
Thus,
Since \(G_n(-1) = \sum _{j=0}^{n-1} \left( -1\right) ^{j}\xi _j\), by Corollary 7.6 in [29] for \(L\ge \sqrt{1/q}\) (with q given in (H)) we obtain
where \(C_3\) is a positive constant and \(D(\mathbf {b})\) is the lcd of the vector
By Proposition 7.4 in [29], we have \(D(\mathbf {b})\ge \frac{1}{2} n^{1/2-1/10} \left( \log n\right) ^{1/2}\). Therefore,
Combining (13) and (14) we obtain
4.1.2 Small ball analysis at \({\varvec{e}}^{{\varvec{i2\pi x_\alpha }}}\)
In this part we focus mainly on the complex roots of a Kac polynomial. We remark that the complex roots are more dispersed than the real roots, but they approach the unit circle faster than the real roots do; however, they do not approach it extremely fast.
Let \(z\in \text {B}( e^{i2\pi x_\alpha },2tn^{-2} \left( \log n\right) ^{-1/2-\gamma })\) and assume that \(\mathcal {G}_n\) holds. By Taylor’s Theorem we obtain
where \(R_2(z)\) is the error of the Taylor approximation of order 2, and it satisfies
Then
Hence,
where \(2C_4 = 2C_0 + 4t + 1\). To prove that \(\mathbb {P}\left( \mathcal {G}_n, \text {B}_\alpha \right)\) tends to zero as \(n\rightarrow \infty\), we rewrite the sum \(G_n(e^{i2\pi x_\alpha })\) as the product of a matrix and a vector. This simple rewriting allows us to apply lcd techniques for matrices. To be precise, we define the \(2\times n\) matrix \(V_\alpha\) as follows
and \(X:=\left[ \xi _0,\ldots ,\xi _{n-1}\right] ^T \in \mathbb {R}^n\). Notice that
which implies
Let \(\Theta = r\left[ \cos (\theta ), \sin (\theta )\right] ^T\in \mathbb {R}^2\), where \(r>0\) and \(\theta \in \left[ 0,2\pi \right]\). For fixed \(r,\theta\), we have
Note that \(\Vert V_\alpha ^T\Theta \Vert _2\le r\sqrt{n}\). On the other hand, we have
Now, we are in the setting of inequality (9). Recall that \(x_\alpha\) satisfies
In the following we distinguish three cases for \(x_\alpha\).
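Before turning to the cases, note that, as in Lemma 4.4 below, the entries of \(V_\alpha ^T\Theta\) are \(r\cos \left( j2\pi x_\alpha -\theta \right)\), so \(\Vert V_\alpha ^T\Theta \Vert _2^2 = r^2\sum _{j} \cos ^2\left( j2\pi x_\alpha -\theta \right)\), which is of order \(r^2 n/2\) for typical \(x_\alpha\). A quick check with hypothetical values:

```python
import numpy as np

n = 1000
x_alpha = 7.0 / 1000.0   # hypothetical rational grid point alpha / N
theta = 0.3
j = np.arange(n)
v = np.cos(2.0 * np.pi * x_alpha * j - theta)
# The oscillating part of cos^2 sums to zero here (the geometric sum of
# e^{i 4 pi x_alpha j} over a full period vanishes), leaving exactly n/2.
print(np.sum(v**2) / n)
```

For this choice the normalized sum equals 1/2 exactly, matching the \(r\sqrt{n}\) upper bound and the lower bound of the same order used in the cases below.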
4.1.3 Case 1. \(\alpha \in J_1(n,N)\)
Assume that \(\gcd \left( \alpha ,N\right) \ge n^{1+1/10}\left( \log n\right) ^{-\gamma }\). Recall that \(N=\lfloor n^2\left( \log n\right) ^{1/2+\gamma } \rfloor\). Then we have
Note that \(2\pi x_\alpha\) satisfies \(n^{-1}<\left| 2\pi x_\alpha \mod \pi \right| <\pi - n^{-1}\) for all large n. By Lemma 3.2 part 1 in [19], there exist positive constants \(c_5,C_5\) such that
Before continuing with our argument, we estimate the number of indices \(\alpha\) for which the condition \(\gcd \left( \alpha , N\right) \ge n^{1+1/10}\left( \log n\right) ^{-\gamma }\) holds. The following lemma provides such an estimate.
Lemma 4.3
The number of indices \(\alpha\) such that
is at most
By Proposition 7.4 in [29], the lcd of \(V_\alpha\) satisfies \(D\left( V_\alpha \right) \ge 1/2\). Thus, by inequalities (9) and (15), and Lemma 4.3 we obtain
where \(C_6=4c_5^{-1/2}C^2L^2 \left( \left( 2tC_4\right) ^2 +4\right)\).
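The counting behind Lemma 4.3 can be illustrated directly: indices \(\alpha\) with a large \(\gcd (\alpha ,N)\) are rare. A small sketch, with hypothetical stand-ins for N and the gcd threshold:

```python
from math import gcd

N, d = 3600, 60   # hypothetical values for N and the gcd threshold
count = sum(1 for a in range(1, N) if gcd(a, N) >= d)
# Every such alpha is a multiple of some divisor g >= d of N, so the
# count is at most sum_{g | N, g >= d} N/g, which is small compared to N.
print(count, N)
```

The proportion of such indices is small, in line with the lemma.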
4.1.4 Case 2. \(\alpha \in J_2(n,N)\)
Assume that
Since \(N=\lfloor n^2 \left( \log n\right) ^{1/2+\gamma } \rfloor\), we have
where \(\text {o}(1)=n^{-1-1/10}\left( \log n \right) ^{\gamma }\). We observe that \(2\pi x_\alpha\) is such that
By Lemma 3.2 part 1 in [19] there exist positive constants \(c_5,C_5\) such that
Also, we observe that \(x_\alpha = \frac{\alpha }{N}=\frac{\alpha '}{N'}\) where \(\alpha = \alpha ' \gcd \left( \alpha ,N\right)\) and \(N = N'\gcd \left( \alpha , N\right)\). Note that \(\gcd \left( \alpha ',N'\right) =1\). Since \(N'\le n\), for any \(\theta\) we have
The above observation allows us to assume that \(x_\alpha = {1}/{N'}\). To apply inequality (9) we need to estimate the lcd. The following lemma shows an arithmetic property of the values \(\cos \left( j2\pi x_\alpha - \theta \right)\) for \(j=0,\ldots ,N'\) which becomes crucial for estimating the lcd.
Lemma 4.4
Fix \(\theta \in [0,2\pi )\) and a positive integer \(m\in \mathbb {Z}\). Let \(\mathcal {V}\) be the vector in \(\mathbb {R}^m\) whose entries are \(\mathcal {V}_j= r\cos \left( j 2\pi x-\theta \right)\) for \(j=0,\ldots ,m-1\), where \(r\ge 2\) is a positive integer and \(x={1}/{m}\). Then
Since we need to analyze
in the definition of the least common denominator, we can assume without loss of generality that r is a positive integer. In fact, by Proposition 7.4 in [29], we can take \(r\ge {1}/{2}\). For the case \(2>r\ge {1}/{2}\), we can replicate the ideas in the proof of Lemma 4.4 to obtain that \(\text {dist}\left( V_\alpha ^T\Theta , \mathbb {Z}^n\right) \ge Cn^{1-1/10}\) for some positive constant C. If \(r\ge 2\), we can use \(\lfloor r\rfloor\) instead of r in Lemma 4.4.
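The quantity bounded in Lemma 4.4, the Euclidean distance from \(\mathcal {V}\) to the integer lattice, is easy to evaluate numerically. The sketch below (with illustrative values of m, r and \(\theta\); it is not part of the proof) computes this distance.

```python
import math

def dist_to_int_lattice(vec):
    """Euclidean distance from vec to the nearest point of the integer lattice."""
    return math.sqrt(sum((v - round(v)) ** 2 for v in vec))

# Entries V_j = r*cos(j*2*pi*x - theta) with x = 1/m, as in Lemma 4.4;
# the values of m, r and theta below are illustrative choices.
m, r, theta = 200, 3, 0.7
V = [r * math.cos(j * 2 * math.pi / m - theta) for j in range(m)]
print(dist_to_int_lattice(V))
```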
If \(r\le \frac{1}{2\cdot 6\cdot 2\pi x_\alpha }\), by Lemma 4.4 and expression (16), we would obtain
which contradicts the fact that \(L\ge \sqrt{{2}/{q}}\) is fixed. Thus we must have \(r > \frac{1}{2\cdot 6\cdot 2\pi x_\alpha }\), which implies that the lcd of \(V_\alpha\) satisfies
By inequality (9) we obtain
where \(C_7 = 2C^2L^2\left( \left( 2tC_2\right) ^2 + c_5^{-1/2}\right)\).
4.1.5 Case 3. \(\alpha \in J_3(n,N)\)
Assume that \(n\left( \log n\right) ^{1/2+\gamma } \ge \gcd \left( \alpha , N\right) \ge n^{9/10}\left( \log n\right) ^{1/2+\gamma }\). Since \(N=\lfloor n^2 \left( \log n\right) ^{1/2+\gamma }\rfloor\), we have
where \(\text {o}(1) = \frac{1}{n\left( \log n\right) ^{1/2+\gamma }}\). Note that \(2\pi x_\alpha\) satisfies
or
By Lemma 3.2 part 2 in [19], there exist positive constants \(c_5,C_5\) such that
On the other hand, the number of indices \(\alpha\) which satisfy the above condition on \(\gcd \left( \alpha ,N\right)\) is at most
In order to use inequality (9), we need to analyze the least common denominator of \(V_\alpha\) in this case. In particular, we need a suitable lower bound for the distance between \(V_\alpha ^T\Theta\) and \(\mathbb {Z}^n\). We use ideas similar to those in the proof of Lemma 4.4.
Since \(x_\alpha =\frac{\alpha }{N} = \frac{\alpha '}{N'}\) with \(\gcd \left( \alpha ',N'\right) = 1\) and \(N'\ge n - 1\), all the points in
Let r be a positive integer and consider the set of intervals of the form \(\left[ \frac{m}{r}, \frac{m+1}{r}\right]\) for all \(m\in \left[ -r,r\right] \cap \mathbb {Z}\). Let \(I_m\) and \(J_{m}\) be the corresponding arcs on the unit circle whose projection on the horizontal axis is the interval \(\left[ \frac{m}{r}, \frac{m+1}{r}\right]\). If \(r < n\), by the pigeonhole principle there exists at least one \(I_{M}\) (or \(J_M\)), for some \(M\in \left[ -r,r\right] \cap \mathbb {Z}\), which contains at least \({n}/{(2r)}\) of the points \(\exp \left( i \left( j 2\pi x_\alpha - \theta \right) \right)\). For each \(\cos \left( j 2\pi x_\alpha - \theta \right) \in \left[ \frac{M}{r}, \frac{M+1}{r}\right]\), we define
Note that among the values \(d_j\) at most two can be equal, and
Observe that for each \(0\le \lambda \le L\), with \(L =\min \left\{ \lfloor \frac{n}{4\cdot 2r} - \frac{3}{2} \rfloor , \lfloor \frac{N' }{2\cdot 2r\cdot 2\pi } - \frac{1}{2}\rfloor \right\}\), there exists at least one \(d_j\) such that \(d_j\ge \left( 2\lambda + 1\right) 2\pi \frac{1}{N'}\). So, the sum of all \(d_j\) is at least
and taking \(r \le \lfloor n^{1/4} \rfloor\) it follows that
Now, let v be the vector in \(\mathbb {R}^n\) whose entries are \(v_j=\cos \left( j 2\pi x_\alpha - \theta \right)\) for \(j=0,\ldots ,n-1\). If r is a positive integer with \(r\le \lfloor n^{1/4} \rfloor\), the previous discussion shows that the vector \(rv=(rv_j)_{1\le j\le n}\) satisfies
Thus, if \(r\le \lfloor n^{1/4} \rfloor\) and taking a fixed \(L\ge \sqrt{2/q}\), by the definition of lcd we would deduce that
which implies that the lcd of \(V_\alpha\) should satisfy \(D\left( V_\alpha \right) \ge n^{1/4}\). By (9), we obtain
where \(C_8=8c_5^{-1/2}C^2 L^2\left( \left( 2t C_4\right) ^2 + 1 \right)\).
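The pigeonhole step above can be checked numerically: the cosines take values in \([-1,1]\), which is covered by 2r intervals of length 1/r, so some interval \(\left[ \frac{M}{r}, \frac{M+1}{r}\right]\) receives at least \(n/(2r)\) of the values \(\cos \left( j 2\pi x_\alpha - \theta \right)\). The sketch below (illustrative parameters; not part of the proof) buckets the cosine values by these intervals.

```python
import math

def max_interval_count(n, Nprime, alphaprime, theta, r):
    """Bucket the values cos(j*2*pi*alpha'/N' - theta), j = 0, ..., n-1,
    by the intervals [m/r, (m+1)/r) and return the largest bucket size."""
    counts = {}
    for j in range(n):
        c = math.cos(j * 2 * math.pi * alphaprime / Nprime - theta)
        m = min(math.floor(c * r), r - 1)  # clamp c = 1 into the top interval
        counts[m] = counts.get(m, 0) + 1
    return max(counts.values())

# Illustrative parameters: 2r = 10 intervals cover [-1, 1], so some interval
# receives at least n/(2r) = 100 of the n = 1000 cosine values.
n, Nprime, alphaprime, theta, r = 1000, 997, 1, 0.3, 5
print(max_interval_count(n, Nprime, alphaprime, theta, r))
```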
Combining Case 1, Case 2 and Case 3 we obtain
Hence, inequality (12) with the help of (13), (14) and (17) yields
The preceding estimate, inequality (10) and relation (11) imply Theorem 2.3.
5 Proof of Theorem 2.6. On the lower bound for the smallest singular value for random circulant matrices
Let \(\rho \in (0,{1}/{4})\) be fixed. We define \(x_k={k}/{n}\), \(k=0,\ldots ,n-1\). Note that
In the sequel, we prove that the right-hand side of the preceding inequality is \(\text {O}\left( n^{-2\rho }\right)\). We consider the following three cases.
Case 1. The same reasoning as in Section 4.1.1 yields
Case 2. \(\gcd \left( k,n\right) > n^{1/2}\). By reasoning similar to the first case of the proof of Theorem 2.3 (Section 4.1.3), we deduce
where \(C_9 = 4c_5^{-1/2}C^2L^2\).
Case 3. \(\gcd \left( k,n\right) \le n^{1/2}\).
By reasoning similar to the second case of the proof of Theorem 2.3 (Section 4.1.4), we obtain
where \(C_{10}=c_{5}^{-1/2}C^2L^2\left( 2+1152\pi ^2\right)\).
The combination of all the preceding cases yields \(\mathbb {P}\left( s_n(\mathcal {C}_n)\le n^{-\rho }\right) = \text {O}\left( {n^{-2\rho }}\right)\) for any \(\rho \in (0,{1}/{4})\).
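The conclusion can be probed numerically using the standard fact that the eigenvalues of a circulant matrix are the discrete Fourier transform of its generating vector, so its singular values are the moduli of these eigenvalues. The sketch below (with Rademacher coefficients and function names of our own choosing; it is an illustration, not part of the proof) computes \(s_n(\mathcal {C}_n)\) without forming the matrix, for comparison with \(n^{-\rho }\).

```python
import cmath
import random

def circulant_smallest_singular_value(c):
    """Smallest singular value of the n x n circulant matrix generated by c.

    The eigenvalues of a circulant matrix are the DFT of its generating
    vector, and its singular values are the moduli of the eigenvalues."""
    n = len(c)
    def dft_coeff(k):
        return sum(c[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
    return min(abs(dft_coeff(k)) for k in range(n))

random.seed(1)
n, rho = 200, 0.2
c = [random.choice((-1.0, 1.0)) for _ in range(n)]  # Rademacher coefficients
print(circulant_smallest_singular_value(c), n ** (-rho))
```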
References
Apostol, T.: Introduction to Analytic Number Theory. Undergraduate Texts in Mathematics, Springer-Verlag, (1976)
Bharucha-Reid, A., Sambandham, M.: Random Polynomials. Probability and Mathematical Statistics: A Series of Monographs and Textbooks. Academic Press (2014)
Bloch, A., Pólya, G.: On the roots of certain algebraic equations. Proc London Math Soc 2(1), 102–114 (1932)
Bordenave, C., Chafaï, D.: Around the circular law. Prob Surv 9, 1–89 (2012)
Bose, A., Hazra, R.S., Saha, K.: Spectral norm of circulant-type matrices. J Theor Prob 24(2), 479–516 (2011)
Chareka, P., Chareka, O., Kennedy, S.: Locally sub-Gaussian random variables and the strong law of large numbers. Atlantic Elect J Math 1(1), 75–81 (2006)
Davis, P.: Circulant Matrices. American Mathematical Society (2012)
Defant, A., Mastyło, M.: Norm estimates for random polynomials on the scale of Orlicz spaces. Banach J Math Anal 11(2), 335–347 (2017)
Do, Y., Nguyen, O., Vu, V.: Roots of random polynomials with coefficients having polynomial growth. Annals Prob 46(5), 2407–2494 (2018)
Erdös, P.: Problems and results on polynomials and interpolation. Aspects of Contemporary Complex Analysis, 383–391 (1980)
Huhtanen, M., Perämäki, A.: Factoring matrices into the product of circulant and diagonal matrices. J Fourier Anal Appl 21(5), 1018–1033 (2015)
Ibragimov, I., Zaporozhets, D.: On distribution of zeros of random polynomials in complex plane. In: Shiryaev, A., Varadhan, S., Presman, E. (eds.) Prokhorov and Contemporary Probability Theory. Proceedings in Mathematics and Statistics 33, pp. 303–323. Springer (2013)
Kac, M.: A correction to “On the average number of real roots of a random algebraic equation”. Bull Am Math Soc 49(1), 314–320 (1943)
Kahane, J.: Some Random Series of Functions, 2nd edn. Cambridge University Press, Cambridge (1985)
Karapetyan, A.: On minimum modulus of trigonometric polynomials with random coefficients. Math Notes 61, 369–373 (1997)
Karapetyan, A.: The values of stochastic polynomials in a neighbourhood of the unit circle. Math Notes 63, 127–130 (1998)
Khattree, R.: Multidimensional Statistical Analysis and Theory of Random Matrices. Proceedings of the Sixth Eugene Lukacs Symposium, Bowling Green, Ohio, USA, 1996, 29–30
Konyagin, S.: Minimum of the absolute value of random trigonometric polynomials with coefficients \(\pm 1\). Math Notes 56, 931–947 (1994)
Konyagin, S., Schlag, W.: Lower bounds for the absolute value of random polynomials on a neighbourhood of the unit circle. Trans Am Math Soc 351, 4963–4980 (1999)
Shepp, L.A., Vanderbei, R.J.: The complex zeros of random polynomials. Trans Am Math Soc 347(11), 4365–4384 (1995)
Lubinsky, D., Pritsker, I., Xie, X.: Expected number of real zeros for random linear combinations of orthogonal polynomials. Proc Am Math Soc 144(4), 1631–1642 (2016)
Meckes, M.: Some results on random circulant matrices. In: High Dimensional Probability V: The Luminy Volume, pp. 213–223. Institute of Mathematical Statistics (2009)
Mezincescu, G., Bessis, D., Fournier, J., Mantica, G., Aaron, F.: Distribution of roots of random real generalized polynomials. J Stat Phys 86(3–4), 675–705 (1997)
Nguyen, H., Nguyen, O., Vu, V.: On the number of real roots of random polynomials. Commun Contemp Math 18(4), 1550052, 17 pp. (2016)
Rahman, Q., Schmeisser, G.: Analytic Theory of Polynomials: Critical Points, Zeros and Extremal Properties. Oxford Science Publications (2002)
Rauhut, H.: Circulant and Toeplitz matrices in compressed sensing. Proc. SPARS’09, Saint-Malo, France (2009)
Rudelson, M., Vershynin, R.: Non-asymptotic theory of random matrices: extreme singular values. In: Bhatia, R. (ed.) Proceedings of the International Congress of Mathematicians, Hyderabad, India, vol. III, pp. 1576–1602 (2010)
Rudelson, M., Vershynin, R.: The Littlewood-Offord problem and invertibility of random matrices. Adv Math 218, 600–633 (2008)
Rudelson, M., Vershynin, R.: No-gaps delocalization for general random matrices. Geomet Funct Anal 26(6), 1716–1776 (2016)
Salem, R., Zygmund, A.: Some properties of trigonometric series whose terms have random signs. Acta Math 91, 245–301 (1954)
Sen, A., Virág, B.: The top eigenvalue of the random Toeplitz matrix and the sine kernel. Annals Prob 41(6), 4050–4079 (2013)
Tikhomirov, K.: The smallest singular value of random rectangular matrices with no moment assumptions on entries. Israel J Math 212, 289–314 (2016)
Weber, M.: On a stronger form of Salem-Zygmund’s inequality for random trigonometric sums with examples. Period Math Hungar 52(2), 73–104 (2006)
Vershynin, R.: High-dimensional probability. An introduction with applications in data science. Cambridge Series in Statistical and Probabilistic Mathematics 47. Cambridge University Press, Cambridge (2018)
Acknowledgements
The authors would like to thank Professor Jesús López Estrada for his constructive and useful suggestions. G. Barrera acknowledges support from a postdoctoral grant held at the Center for Research in Mathematics (CIMAT, 2015–2016). He would like to express his gratitude to the Pacific Institute for the Mathematical Sciences (PIMS, 2017–2019) for the grant held at the Department of Mathematical and Statistical Sciences, University of Alberta. He also thanks CIMAT, the University of Alberta and the University of Helsinki for the facilities used during the preparation of this manuscript. P. Manrique acknowledges support from Cátedras CONACyT-México for the research position held at the Mathematics Institute, Cuernavaca (UNAM, 2017–2020). He also thanks UNAM for the facilities used during the preparation of this manuscript.
Funding
Open Access funding provided by University of Helsinki including Helsinki University Central Hospital.
G. Barrera was supported by CIMAT and PIMS.
P. Manrique was supported by Cátedras-CONACyT, México.
Appendix A. Proofs of Lemma 4.3 and Lemma 4.4
Proof of Lemma 4.3
Write \(m:=n^{1+1/10}\left( \log n\right) ^{-\gamma }\). Let T be the Euler totient function. Then we have
Notice that \(T\left( s\right) \le s - \sqrt{s}\) for all composite \(s\in \mathbb {N}\). Moreover, if d(s) is the number of divisors of s, it is well known (see Theorem 13.12 in [1]) that there exists an absolute constant \(C>0\) such that
Hence,
where \(\text {o}(1) = C\left( \log \log \left( N\right) \right) ^{-1}\).
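The totient bound used above can be checked directly on small values. The sketch below (illustrative only, with function names of our own) verifies \(T(s)\le s-\sqrt{s}\) on composite values of s; for a prime p one has \(T(p)=p-1\), so compositeness matters. The bound follows because the smallest prime factor p of a composite s satisfies \(p\le \sqrt{s}\), hence \(T(s)\le s - s/p \le s - \sqrt{s}\).

```python
from math import gcd, isqrt

def totient(s):
    """Euler totient: the number of j in [1, s] coprime to s."""
    return sum(1 for j in range(1, s + 1) if gcd(j, s) == 1)

def is_composite(s):
    return s > 1 and any(s % d == 0 for d in range(2, isqrt(s) + 1))

# Check T(s) <= s - sqrt(s) over the composite s below 300.
for s in range(4, 300):
    if is_composite(s):
        assert totient(s) <= s - s ** 0.5
print(totient(12), totient(30))
```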
Proof of Lemma 4.4
We define the following sequence
where i is the imaginary unit. Note that P is a set of points on the unit circle which can be seen as vertices of a regular polygon with m sides inscribed in the unit circle. Since the arguments of points \(\exp \left( i\left( j2\pi x - \theta \right) \right)\) are separated exactly by a distance \(2\pi x\), the number of points \(\exp \left( i\left( j2\pi x - \theta \right) \right)\) which are in any arc on the unit circle is at least \(\frac{l}{2\pi x}-2\), where l is the length of the arc.
Let \(\left[ y, y + 3(2\pi x)\right]\) be a subinterval of \([-1,1]\) and consider the arc A on the unit circle whose projection on the horizontal axis is \(\left[ y, y + 3(2\pi x)\right]\). If l is the length of the arc A, then the number of values \(\cos \left( j2\pi x - \theta \right)\) lying in \(\left( y, y + 3(2\pi x)\right)\) is at least \(\frac{l}{2\pi x}-2\ge \frac{3\left( 2\pi x\right) }{2\pi x} - 2 = 1\), since \(l\ge 3\left( 2\pi x\right)\).
Let \(s\in \left[ -(r-1),(r-1)\right] \cap \mathbb {Z}\). Note that there exists at least one value
for all positive integers \(k\le \frac{1}{3 r \left( 2\pi x\right) }\). In the sequel, we consider all the values \(\cos \left( j2\pi x - \theta \right) \in \left[ \frac{s}{r},\frac{s+1}{r}\right]\) and define
Let L be the biggest integer which satisfies \(\left( 3\cdot 2 \pi x\right) L \le \frac{1}{2r}\), or equivalently, \(L=\lfloor \frac{1}{2r\left( 3\cdot 2 \pi x\right) } \rfloor\). Therefore, the sum of \(d_j\) for all \(\cos \left( j2\pi x - \theta \right) \in \left[ \frac{s}{r},\frac{s+1}{r}\right]\) is at least
where we used the following inequality
which holds if \(\frac{1}{2r\left( 2\pi x\right) } \ge 6\). Let \(\sigma _s\) be the sum of \(d_j\) for each interval \(\left[ \frac{s}{r},\frac{s+1}{r}\right]\), \(s=-(r-1),\ldots ,(r-1)\). As \(r\ge 2\), then
By the previous analysis, the distance from the vector \(\mathcal {V}\in \mathbb {R}^m\), whose entries are \(\mathcal {V}_j = r\cos \left( j2\pi x - \theta \right)\) for \(j=0,\ldots ,m-1\) with \(x={1}/{m}\), to \(\mathbb {Z}^m\) is at least
once we verify that the condition \(\frac{1}{2r\left( 2\pi x\right) } \ge 6\) is fulfilled.
Barrera, G., Manrique, P. Salem–Zygmund inequality for locally sub-Gaussian random variables, random trigonometric polynomials, and random circulant matrices. Bol. Soc. Mat. Mex. 28, 45 (2022). https://doi.org/10.1007/s40590-022-00437-4