1 Introduction

A classical problem in Harmonic Analysis is the quantification of the magnitude of the modulus of a trigonometric polynomial on the unit circle. Erdős [10] studied the trigonometric polynomial \(T_n(x)=\sum _{j=0}^{n-1} \alpha _j e^{ijx}\), \(x\in [0,2\pi ]\), for choices of signs \(\pm 1\) for the coefficients \(\alpha _j\), and estimated how large \(\left| T_n(x) \right|\) can be for \(x\in [0,2\pi )\). Salem and Zygmund [30] proved that almost all choices of signs satisfy

$$\begin{aligned} c_1\left( n\log n\right) ^{\frac {1}{2}} \le \max \limits _{x\in [0,2\pi ]} \left| T_n(x)\right| \le c_2 \left( n\log n\right) ^{\frac {1}{2}}\quad \text {for some positive constants } c_1\;\text { and} \; c_2. \end{aligned}$$
(1)

Inequalities of type (1) are known as Salem–Zygmund inequalities. Different versions of the Salem–Zygmund inequality appear in many areas of modern analysis, see [8]. In a probabilistic context, the common version of the Salem–Zygmund inequality is usually established when the coefficients \(\alpha _0,\ldots ,\alpha _{n-1}\) of \(T_n\) are iid sub-Gaussian random variables, see Chapter 6 in [14]. In the present manuscript, we give an extension of the Salem–Zygmund inequality to locally sub-Gaussian random coefficients. This extension allows us to study the localization of the roots of a random Kac polynomial and the probability that a random circulant matrix is singular.
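As a purely numerical illustration of (1) (our sketch, not taken from [30]; the polynomial degree, trial count, and grid size are arbitrary choices), the following Python snippet estimates \(\max_x |T_n(x)|\) over random sign choices and compares it with \((n\log n)^{1/2}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def max_modulus(n, trials=200, grid=4096):
    """Empirical max_x |sum_j a_j e^{ijx}| over random signs a_j = +/-1."""
    x = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    E = np.exp(1j * np.outer(np.arange(n), x))   # e^{ijx}, shape (n, grid)
    signs = rng.choice([-1.0, 1.0], size=(trials, n))
    return np.abs(signs @ E).max(axis=1)          # one maximum per trial

n = 512
m = max_modulus(n)
scale = np.sqrt(n * np.log(n))
# The ratio concentrates around a constant, in line with (1).
print(m.min() / scale, m.max() / scale)
```

The grid maximum slightly underestimates the true supremum, but the concentration of the ratio at a constant scale is clearly visible.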

1.1 Roots of random trigonometric polynomials

The study of the roots of a polynomial is an old topic in Mathematics. There are formulas to compute the roots for polynomials of degree 2, degree 3 (Tartaglia–Cardano’s formula), degree 4 (Ferrari’s formula), but due to Galois’ work, for a generic polynomial of degree 5 or more it is not possible to find explicit formulas for computing its roots in terms of radicals.

For a random polynomial, Bloch and Pólya [3] considered iid Rademacher coefficients (uniform distribution on \(\{-1,1\}\)) and proved that the expected number of real zeros is \(\text{ O }({n}^{{1}/{2}})\). In a series of papers between 1938 and 1939, Littlewood and Offord gave a better bound on the number of real roots of a random polynomial with iid coefficients in the Rademacher, Uniform\([-1,1]\), and standard Gaussian cases [21]. Kac [13] established his famous integral formula for the density of the number of real roots of a random polynomial with iid standard Gaussian coefficients. Those were the first steps in the study of roots of random functions, which nowadays is a relevant part of modern Probability and Analysis. For further details, see [9] and the references therein.

The localization of the roots of a polynomial is in general a hard problem. However, there are relevant results in the theory of random polynomials [2]. For instance, for iid non-degenerate random coefficients with finite logarithmic moment, the roots cluster asymptotically near the unit circle and the arguments of the roots are asymptotically uniformly distributed. More precisely, Ibragimov and Zaporozhets [12] showed that for a Kac polynomial

$$\begin{aligned} G_n(z)=\sum _{j=0}^{n-1} \xi _j z^j \quad \text { for } z\in \mathbb {C}, \end{aligned}$$
(2)

with (real or complex) iid non-degenerate coefficients satisfying \(\mathbb {E}\left( \log (1+|\xi _0|)\right) <\infty\), its roots are concentrated around the unit circle as \(n\rightarrow \infty\), almost surely. Moreover, they proved that the condition \(\mathbb {E}\left( \log (1+|\xi _0|)\right) <\infty\) is necessary and sufficient for the roots of \(G_n\) to be asymptotically near the unit circle.
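This clustering is easy to observe numerically. The sketch below (our illustration; the degree, seed, and the annulus width \(10/n\) are arbitrary choices) samples a Gaussian Kac polynomial and measures how many of its roots fall close to the unit circle.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200
xi = rng.standard_normal(n)          # iid standard Gaussian coefficients
roots = np.roots(xi[::-1])           # numpy expects the highest degree first
dist = np.abs(np.abs(roots) - 1.0)   # distance of each modulus from 1

# Fraction of the n-1 roots within distance 10/n of the unit circle.
frac = np.mean(dist < 10.0 / n)
print(len(roots), frac)
```

Most of the \(n-1\) roots land in a thin annulus around \(|z|=1\), consistent with the Ibragimov–Zaporozhets result.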

For iid standard Gaussian random coefficients of \(G_n\), most of the roots are concentrated in an annulus of width \({1}/{n}\) centered on the unit circle. However, the root nearest to the unit circle is at distance at least \(\text{ O }({n^{-2}})\); for further details see [23]. Shepp and Vanderbei [20] conjectured that the last statement holds not only for standard Gaussian coefficients but also for Rademacher coefficients. This conjecture was proved by Konyagin and Schlag [19]. Our Theorem 2.3 establishes that, with probability \(1-\text {O}((\log n)^{-\gamma +1/2})\) for \(\gamma >{1}/{2}\), the roots of \(G_n\) stay at distance at least \(\text{ O }({n^{-2}} \left( \log n\right) ^{-1/2-\gamma })\) from the unit circle. Konyagin and Schlag [19] showed that if \(G_n\) has iid Rademacher or standard Gaussian random coefficients, then for all \(\varepsilon >0\) and large n the following expression

$$\begin{aligned} \min _{z\in \mathbb {C}: \left| |z|-1 \right| <\varepsilon n^{-2}}|G_n(z)|\ge \varepsilon n^{-{1}/{2}} \end{aligned}$$

holds with probability at least \(1-C\varepsilon\), for some positive constant C. Karapetyan [15, 16] studied the sub-Gaussian case, but to the best of our knowledge, his proof is not complete. Even so, using our extension of the Salem–Zygmund inequality and the notion of least common denominator, which was developed to study the singularity of random matrices [28], we show that for fixed \(t\ge 1\),

$$\begin{aligned} \min _{z\in \mathbb {C}\; :\; \left| \left| z \right| -1 \right| \le tn^{-2}\left( \log n \right) ^{-1/2-\gamma }} \left| G_n(z)\right| \ge t n^{-{1}/{2}}(\log n)^{-\gamma }, \end{aligned}$$

with probability at least \(1-\text {O}((\log n)^{-\gamma +1/2})\). The techniques used in the present paper are not the same as those used by Konyagin and Schlag [19]. The main result of Konyagin and Schlag only holds for Rademacher and Gaussian iid random coefficients. They carried out a refined analysis of the characteristic function and applied the so-called circle method. This approach is not straightforward to extend to more general random coefficients, even sub-Gaussian ones or those with a finite moment generating function (mgf for short).
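A crude mesh search illustrates the scale of this bound (our sketch, not part of the proof; the mesh minimum only over-estimates the true minimum over the annulus, and all parameters are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(2)

def min_near_circle(n, radii=5, grid=4096):
    """Minimum of |G_n(z)| over a mesh of the annulus ||z| - 1| <= 1/n^2,
    for Rademacher coefficients.  A mesh minimum can only over-estimate
    the true annulus minimum."""
    xi = rng.choice([-1.0, 1.0], size=n)
    theta = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    best = np.inf
    for r in np.linspace(1.0 - n**-2, 1.0 + n**-2, radii):
        vals = np.abs(np.polyval(xi[::-1], r * np.exp(1j * theta)))
        best = min(best, vals.min())
    return best

n = 256
v = min_near_circle(n)
print(v)  # typically well above the t n^{-1/2} (log n)^{-gamma} threshold
```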

The novelty of this manuscript is the use of the notion of least common denominator to cover more general random coefficients. This approach works for quite general random coefficients. However, the authors are still working on relaxing the assumption of the existence of an mgf. The main obstacle in relaxing this assumption arises in the control of the maximum modulus of the random polynomial over the unit circle under the assumption of the existence of some \(p\)-moment. We emphasize that the proof is not a direct consequence of [29], since good estimates of the least common denominator are typically difficult to obtain. We remark that, to the best of our knowledge, neither this result nor the main result in [19] is a direct consequence of the so-called concentration inequalities.

1.2 Random circulant matrices

Recall that an \(n\times n\) complex circulant matrix, denoted by \(\text {circ}(c_0,\ldots ,c_{n-1})\), has the form

$$\begin{aligned} \text {circ}(c_0,\ldots ,c_{n-1}):=\left[ \begin{array}{ccccc} c_0 &{} c_{1} &{} \cdots &{} c_{n-2} &{} c_{n-1} \\ c_{n-1} &{} c_{0} &{} \cdots &{} c_{n-3} &{} c_{n-2} \\ \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ c_2 &{} c_3 &{} \cdots &{} c_0 &{} c_1\\ c_{1} &{} c_{2} &{} \cdots &{} c_{n-1} &{} c_{0} \end{array}\right] , \end{aligned}$$

where \(c_0,\ldots ,c_{n-1}\in \mathbb {C}\). For \(\xi _0,\ldots ,\xi _{n-1}\) being random variables, we say that

$$\begin{aligned} \mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1}) \end{aligned}$$

is an \(n\times n\) random circulant matrix. Circulant matrices are very common objects in different areas of mathematics [11, 17, 26]. In particular, they play a crucial role in the study of large-dimensional Toeplitz matrices [5, 31]. In the theory of random matrices, singularity is one aspect that has been intensively studied in recent years [4, 27, 28]. In the case of random circulant matrices with Rademacher entries, Meckes [22] proved that the probability that a random circulant matrix is singular tends to zero as its dimension grows.
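The entry pattern above can be built in one line. The helper below is our illustration (note, as an aside, that `scipy.linalg.circulant` uses the first *column* instead, producing the transpose of this convention).

```python
import numpy as np

def circ(c):
    """Circulant matrix with first row (c_0, ..., c_{n-1}): entry (j, k) is
    c_{(k - j) mod n}, matching the display above."""
    c = np.asarray(c)
    n = len(c)
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return c[(k - j) % n]

print(circ([0, 1, 2, 3]))
# [[0 1 2 3]
#  [3 0 1 2]
#  [2 3 0 1]
#  [1 2 3 0]]
```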

As a consequence of our concentration result on the roots of Kac polynomials, for a random circulant matrix with iid zero-mean entries and finite mgf, it follows that for all fixed \(t\ge 1\) and \(\gamma >1/2\), the smallest singular value \(s_n\left( \mathcal {C}_n\right)\) of \(\mathcal {C}_n\) satisfies

$$\begin{aligned} s_n(\mathcal {C}_n)\ge tn^{-1/2}\left( \log n\right) ^{-\gamma } \end{aligned}$$

with probability \(1-\text {O}\left( (\log n)^{-\gamma +1/2}\right)\). Moreover, under weaker assumptions (see condition (H) below), for \(\rho \in (0,1/4)\) we also show

$$\begin{aligned} s_n(\mathcal {C}_n)\ge n^{-\rho } \end{aligned}$$

with probability \(1-\text {O}\left( n^{-2\rho }\right)\).

The manuscript is organized as follows. In Sect. 2 we state the main results and their consequences. In Sect. 3 we prove a Salem–Zygmund inequality for random variables with an mgf. In Sect. 4, with the help of the Salem–Zygmund inequality and the notion of least common denominator, we prove Theorem 2.3 about the location of the roots of a Kac polynomial. Finally, in Sect. 5 we prove Theorem 2.6, which states that the smallest singular value of a random circulant matrix is relatively large with high probability.

2 Main results

2.1 Salem–Zygmund inequality

Recall that a real-valued random variable \(\xi\) is said to be sub-Gaussian if its mgf is bounded by the mgf of a Gaussian random variable, i.e., there is \(b> 0\) such that

$$\begin{aligned} \mathbb {E}(e^{t\xi })\le e^{{b^2t^2}/{2}} \quad \text { for any } t\in \mathbb {R}. \end{aligned}$$

When this condition is satisfied for a particular value of \(b>0\), we say that \(\xi\) is b-sub-Gaussian or sub-Gaussian with parameter b. In particular, it is straightforward to show that the mean of a sub-Gaussian random variable is necessarily equal to zero. For more details see [6] and the references therein.
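For instance (a standard example, not specific to this paper), a Rademacher variable is 1-sub-Gaussian, since \(\mathbb {E}(e^{t\xi })=\cosh (t)\le e^{t^2/2}\) for every real t. A quick numerical check of the bound on a grid:

```python
import numpy as np

# Rademacher: E(e^{t xi}) = cosh(t).  The classical bound cosh(t) <= e^{t^2/2}
# shows xi is sub-Gaussian with parameter b = 1, for every real t.
t = np.linspace(-10.0, 10.0, 2001)
assert np.all(np.cosh(t) <= np.exp(t ** 2 / 2))
print("cosh(t) <= exp(t^2/2) holds on the sampled grid")
```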

According to [6], a random variable \(\xi\) is called locally sub-Gaussian when its mgf \(M_{\xi }\) exists in an open interval around zero. In this case, it is possible to find constants \(\alpha \ge 0\), \(\delta \in (0,\infty ]\) and \(\nu \in \mathbb {R}\) such that

$$\begin{aligned} M_{\xi }(t) \le e^{\nu t +\frac{1}{2}\alpha ^2 t^2}\quad \text { for any } t\in (-\delta ,\delta ). \end{aligned}$$

If the mean of \(\xi\) is zero and its variance \(\sigma ^2\) is finite and positive, then we can take \(\nu =0\) and any \(\alpha ^2>\sigma ^2\) for some \(\delta >0\), as the next lemma states.

Lemma 2.1

(Locally sub-Gaussian r.v.). Let \(\xi\) be a random variable such that its mgf \(M_\xi\) exists in an interval around zero. Assume that \(\mathbb {E}\left( \xi \right) =0\) and \(\mathbb {E}\left( \xi ^2\right) =\sigma ^2>0\). Then there is a positive constant \(\delta\) such that

$$\begin{aligned} M_\xi (t) \le e^{{\alpha ^2 t^2}/{2}} \quad \text { for any } t\in (-\delta ,\delta )\; \text{ and } \; \alpha ^2 >\sigma ^2. \end{aligned}$$

The preceding lemma is not surprising, see for instance Remark 2.7.9 in [34]. Since its proof is simple, we give it here for completeness.

Proof

Assume that \(M_\xi (t)\) is well-defined for any \(t\in (-\delta _1,\delta _1)\), for some \(\delta _1>0\). Then \(M_\xi (t)\) has derivatives of all orders at \(t=0\). Define \(g(t):=e^{{\alpha ^2 t^2}/{2}}\), for \(t\in \mathbb {R}\). Then \(g(0)=1\), \(g^\prime (0)=0\) and \(g^{\prime \prime }(0)=\alpha ^2\). Let \(h(t):= g(t) - M_\xi (t)\), for all \(t\in (-\delta _1,\delta _1)\). Since \(h^{\prime \prime }\) is continuous and \(h^{\prime \prime }(0)=\alpha ^2-\sigma ^2>0\), there exists \(0<\delta <\delta _1\) such that \(h^{\prime \prime }(t)>0\) for every \(t\in (-\delta ,\delta )\). Therefore, the function h is convex on the interval \((-\delta ,\delta )\). As \(h^\prime (0)=0\), the point 0 is a local minimum of h, and hence \(h(t)\ge h(0)=0\) for every \(t\in (-\delta ,\delta )\). Thus, the result follows.
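Lemma 2.1 can be illustrated with a concrete distribution (our example, not from the paper): for the centered exponential variable \(\xi =X-1\) with \(X\sim \text{Exp}(1)\), one has \(M_\xi (t)=e^{-t}/(1-t)\) for \(t<1\) and \(\sigma ^2=1\); taking \(\alpha ^2=1.5>\sigma ^2\), the bound holds, e.g., on \((-0.4,0.4)\).

```python
import numpy as np

# Assumed example: xi = X - 1 with X ~ Exp(1), so E(xi) = 0, Var(xi) = 1,
# and M_xi(t) = e^{-t} / (1 - t) for t < 1.  With alpha^2 = 1.5 > sigma^2 = 1,
# the local sub-Gaussian bound of Lemma 2.1 holds near 0.
alpha2, delta = 1.5, 0.4
t = np.linspace(-delta, delta, 1001)
M = np.exp(-t) / (1.0 - t)
assert np.all(M <= np.exp(alpha2 * t ** 2 / 2))
print("local sub-Gaussian bound verified on (-0.4, 0.4)")
```

Note that the interval cannot be too large here: the bound with \(\alpha ^2=1.5\) already fails around \(t=0.5\), which is exactly why the lemma is local.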

The classical Salem–Zygmund inequality is usually established for iid sub-Gaussian random variables, but thanks to Lemma 2.1 we are able to extend it to iid locally sub-Gaussian random variables, as stated in Theorem 2.2. Although Theorem 2.2 is interesting in its own right, we stress that it is also crucial for the approach used in the proof of our main result, Theorem 2.3.

Before presenting Theorem 2.2, we introduce some useful notation. For simplicity, we use the same notation for the Euclidean norm and for the modulus of a complex number. Denote by \(\mathbb {T}\) the unit circle \(\mathbb {R}/(2\pi \mathbb {Z})\). For any bounded function \(f:\mathbb {T}\rightarrow \mathbb {C}\), the infinite norm of f is defined as \(\Vert f\Vert _\infty =\sup \limits _{x\in \mathbb {T}}|f(x)|\), and \({\mathop {=}\limits ^{\mathcal {D}}}\) means “equal in distribution”.

Theorem 2.2

(Salem–Zygmund inequality for locally sub-Gaussian random variables). Let \(\xi\) be a random variable with zero mean and finite positive variance. Assume that the mgf \(M_\xi\) of \(\xi\) exists in an open interval around zero. Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let \(\phi :[0,1] \rightarrow \mathbb {R}\) be a non-zero continuous function. Consider \(W_n(x)=\sum _{j=0}^{n-1} \xi _j \phi ({j}/{n})e^{ijx}\) for any \(x\in \mathbb {T}\). Then, for all large n

$$\begin{aligned} \mathbb {P}\Big ( \Vert W_n\Vert _\infty \ge C_0\Big (\left( \log n\right) \sum _{j=0}^{n-1} |\phi ({j}/{n})|^2 \Big )^{{1}/{2}}\Big )\le \frac{C_1}{n^2}, \end{aligned}$$

where \(C_0\) and \(C_1\) are positive constants that only depend on the mgf of \(\xi\) and the function \(\phi\).

Actually, under the assumption of a finite second moment, a version of a Salem–Zygmund type inequality can be obtained in terms of the expected value of the infinite norm of a random trigonometric polynomial; for more details see [33]. Theorem 2.2 provides an upper bound, in probability, on how large the infinite norm of a random trigonometric polynomial can be. Moreover, Theorem 2.2 gives a better bound than Corollary 2 in [33], as we see below.

Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables such that \(\mathbb {E}\left( \xi _0\right) =0\) and \(\mathbb {E}\left( \xi _0^2\right) =\sigma ^2>0\). By Corollary 2 in [33] we have

$$\begin{aligned} \mathbb {E}\left( \max \limits _{x\in \mathbb {T}}\left| \sum _{j=0}^{n-1} \xi _j e^{ijx}\right| \right) \le & \; C \min \left\{ \left( n\log (n+1)\,\mathbb {E}(|\xi _0|^2)\right) ^{{1}/{2}},\, n\,\mathbb {E}\left( |\xi _0|\right) \right\} \\ \le & \; C \left( n\log (n+1)\,\mathbb {E}(|\xi _0|^2)\right) ^{{1}/{2}}, \end{aligned}$$

where C is a universal positive constant. By the Markov inequality we obtain

$$\begin{aligned} \mathbb {P}\left( \max \limits _{x\in \mathbb {T}}\left| \sum _{j=0}^{n-1} \xi _j e^{ijx}\right| \ge C_0\left( n\log n\right) ^{{1}/{2}} \right) \le \frac{C \left( n\log (n+1)\,\mathbb {E}(|\xi _0|^2)\right) ^{{1}/{2}}}{C_0\left( n\log n\right) ^{{1}/{2}}}. \end{aligned}$$

Note that the upper bound asymptotically equals a positive constant. On the other hand, under the assumptions of Theorem 2.2 we deduce

$$\begin{aligned} \mathbb {P}\left( \max \limits _{x\in \mathbb {T}}\left| \sum _{j=0}^{n-1} \xi _j e^{ijx}\right| \le C_0\left( n\log n\right) ^{{1}/{2}} \right) \ge 1-\frac{C_1}{n^2} \end{aligned}$$

for all large n, where \(C_0\) and \(C_1\) are positive constants that only depend on the mgf of \(\xi _0\).

2.2 Kac polynomials

To use the concept of least common denominator, we introduce the following condition. We say that a random variable \(\xi _0\) satisfies condition (H) if

$$\begin{aligned} \sup _{u\in \mathbb {R}} \mathbb {P}\left\{ |\xi _0 - u| \le 1\right\} \le 1-q\;\; \text{ and } \;\;\mathbb {P}\left\{ |\xi _0|>M\right\} \le q/2 \quad \text { for some } M>0\; \text { and }\;q\in (0,1). \end{aligned}$$
(H)

The notion of concentration function was introduced by P. Lévy in the context of the study of distributions of sums of random variables. For \(\xi _0\) non-degenerate with zero mean and an mgf, one can deduce that condition (H) holds for some \(M>0\) and \(q\in (0,1)\). We refer to [32].

The main result of this manuscript is the following theorem.

Theorem 2.3

Let \(\xi\) be a random variable with zero mean and finite positive variance. Assume that the mgf \(M_\xi\) of \(\xi\) exists in an open interval around zero. Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let

$$\begin{aligned} \mathcal {M}_n := \left\{ \min _{z\in \mathbb {C}\; :\; \left| \left| z \right| -1 \right| \le tn^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }} \left| G_n(z) \right| \le t n^{-{1}/{2}}(\log n)^{-\gamma } \right\} . \end{aligned}$$

Then for any fixed \(t\ge 1\),

$$\begin{aligned} \mathbb {P}\left( \mathcal {M}_n \right) = \text {O}\left( (\log n)^{-\gamma +1/2}\right) , \end{aligned}$$

where \(\gamma >{1}/{2}\) and the implicit constant in the O-notation depends on t and the mgf of \(\xi\).

Remark 2.4

Observe that all bounded random variables satisfy (H) in Theorem 2.3 (with a suitable scaling). In particular, the Rademacher distribution, which corresponds to the uniform distribution on \(\{-1,1\}\), and the uniform distribution on the interval \([-1,1]\) satisfy (H).
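Condition (H) can be checked by enumeration. The sketch below (our illustration; the support \(\{-3,3\}\) and the grid are arbitrary choices) verifies (H) for a scaled Rademacher variable, and shows why the scaling matters.

```python
import numpy as np

# Scaled Rademacher variable, uniform on {-3, 3}.  The scaling matters:
# for values {-1, 1}, the choice u = 0 gives sup_u P(|xi - u| <= 1) = 1,
# so (H) would fail without rescaling.
support = np.array([-3.0, 3.0])
probs = np.array([0.5, 0.5])

u_grid = np.linspace(-5.0, 5.0, 10001)
conc = max(probs[np.abs(support - u) <= 1].sum() for u in u_grid)

q, M = 0.5, 4.0
assert conc <= 1 - q                              # sup_u P(|xi - u| <= 1) <= 1/2
assert probs[np.abs(support) > M].sum() <= q / 2  # P(|xi| > M) <= q/4 here
print(conc)  # 0.5
```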

2.3 Random circulant matrices

It is well-known that any circulant matrix can be diagonalized in \(\mathbb {C}\) using a Fourier basis. Indeed, let \(\omega _n:=\exp \left( i\frac{2\pi }{n}\right)\), \(i^2=-1\), and \(F_n=\frac{1}{\sqrt{n}}(\omega ^{jk}_n)_{0\le j,k\le n-1}\). The matrix \(F_n\) is called the Fourier matrix of order n. Note that \(F_n\) is a unitary matrix. By a straightforward computation it follows that

$$\begin{aligned} \text {circ}(c_0,\ldots ,c_{n-1}) = F_n\, \text {diag}\left( G_n(1),G_n(\omega _n),\ldots ,G_n(\omega _n^{n-1})\right) F^*_n, \end{aligned}$$

where \(G_n\) is the polynomial given by \(G_n(z):=\sum _{k=0}^{n-1}c_kz^k\). Hence, the eigenvalues of \(\text {circ}(c_0,\ldots ,c_{n-1})\) are \(G_n(1),G_n(\omega _n), \ldots ,G_n(\omega _n^{n-1}),\) or equivalently

$$\begin{aligned} G_n(\omega _n^k)=\sum \limits _{j=0}^{n-1} c_j\exp \left( i\frac{2\pi kj}{n}\right) \quad \text { for any} \quad k=0,\ldots ,n-1. \end{aligned}$$
(3)

Expressions like (3) appear naturally in the study of the Fourier transform of periodic functions. For a complete treatment of circulant matrices, we recommend the monograph [7].
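The diagonalization and the eigenvalue formula (3) are easy to verify numerically. In the sketch below (dimension and seed are arbitrary choices), we build \(\text {circ}(c)\) entrywise, compare its eigenvalues with \(G_n\) evaluated at the n-th roots of unity, and check that conjugating by the Fourier matrix diagonalizes the matrix; with the entry convention \(c_{(k-j) \bmod n}\) of the display above, the conjugate transpose lands on the right.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
c = rng.standard_normal(n)

# circ(c) as displayed above: entry (j, k) equals c_{(k - j) mod n}.
J, K = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = c[(K - J) % n]

# Eigenvalues predicted by (3): G_n(omega_n^k) = sum_j c_j exp(2 pi i k j / n).
omega = np.exp(2j * np.pi / n)
spec = np.array([np.sum(c * omega ** (k * np.arange(n))) for k in range(n)])

ev = np.linalg.eigvals(C)
for s in spec:
    assert np.abs(ev - s).min() < 1e-8   # each G_n(omega_n^k) is an eigenvalue

# Fourier matrix F_n = (omega^{jk}) / sqrt(n); F_n^* C F_n is diagonal with
# the G_n values on the diagonal (for this entry convention).
F = omega ** np.outer(np.arange(n), np.arange(n)) / np.sqrt(n)
assert np.allclose(F.conj().T @ C @ F, np.diag(spec))
print("eigenvalues of circ(c) match G_n at the n-th roots of unity")
```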

In the sequel, we consider an \(n\times n\) random circulant matrix \(\mathcal {C}_n\), i.e., \(\mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1})\), where \(\xi _0,\ldots ,\xi _{n-1}\) are independent random variables. The smallest singular value of the random circulant matrix \(\mathcal {C}_n\) is given by

$$\begin{aligned} s_n(\mathcal {C}_n) = \min _{0\le k\le n-1} |G_n(\omega _n^k)|. \end{aligned}$$
(4)

We remark that, in general, the smallest singular value is not equal to the smallest modulus of an eigenvalue. However, since \(\mathcal {C}_n\) is a normal matrix, its singular values are the moduli of its eigenvalues. Thus, the following corollary is a direct consequence of Theorem 2.3.
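Identity (4) is easy to test against a full SVD (our sketch; the dimension and seed are arbitrary choices). Since the entries are real, \(|G_n(\omega _n^k)|\) coincides with the modulus of the k-th DFT coefficient, the DFT being the complex conjugate of (3).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
xi = rng.choice([-1.0, 1.0], size=n)  # Rademacher entries

# Via (4): |G_n(omega_n^k)| = |np.fft.fft(xi)[k]|, because G_n(omega_n^k)
# is the complex conjugate of the k-th DFT coefficient for real xi.
s_fft = np.abs(np.fft.fft(xi)).min()

# Direct computation: smallest singular value of the full circulant matrix.
J, K = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = xi[(K - J) % n]
s_svd = np.linalg.svd(C, compute_uv=False).min()

assert np.isclose(s_fft, s_svd)
print(s_fft)
```

The FFT route costs \(\text {O}(n\log n)\) instead of the \(\text {O}(n^3)\) of a dense SVD, which is precisely why (4) is convenient.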

Corollary 2.5

Let \(\xi\) be a random variable with zero mean and finite positive variance. Assume that the mgf \(M_\xi\) of \(\xi\) exists in an open interval around zero. Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let \(\mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1})\) be an \(n\times n\) random circulant matrix and let \(s_n(\mathcal {C}_n)\) be the smallest singular value of \(\mathcal {C}_n\). Then, for all fixed \(t\ge 1\) and \(\gamma > {1}/{2}\) we have

$$\begin{aligned} \mathbb {P}\left( s_n(\mathcal {C}_n)\le tn^{-1/2}\left( \log n\right) ^{-\gamma }\right) = \text {O}\left( \left( \log n\right) ^{-\gamma +{1}/{2}}\right) . \end{aligned}$$
(5)

It is possible to weaken the assumptions of Corollary 2.5. Using a similar reasoning to that in the proof of Theorem 2.3, we obtain the following theorem.

Theorem 2.6

Let \(\xi\) be a non-degenerate random variable which satisfies (H). Let \(\{\xi _k:k\ge 0\}\) be a sequence of iid random variables with \(\xi _k{\mathop {=}\limits ^{\mathcal {D}}}\xi\) for every \(k\ge 0\). Let \(\mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1})\) be an \(n\times n\) random circulant matrix. Then, for each \(\rho \in (0,1/4)\) we have

$$\begin{aligned} \mathbb {P}\left( s_n(\mathcal {C}_n)\le n^{-\rho }\right) = \text {O}({n^{-2\rho }}). \end{aligned}$$

3 Proof of Theorem 2.2. Salem–Zygmund inequality for locally sub-Gaussian random variables

Firstly, we prove the following claim, which is an important fact used in the proof of Theorem 2.2.

Claim 1: There exists a random interval \(I\subset \mathbb {T}\) of length \({1}/{\rho_n}\) with \(\rho_n={8n}/{3}\) such that

$$\begin{aligned} |W_n(x)|\ge \frac{1}{2} \Vert W_n\Vert _\infty \quad \text { for any } x\in I. \end{aligned}$$

Proof

Let \(p_n(x):=\sum _{j=0}^{n-1} b_j e^{ijx}\), \(x\in \mathbb {T}\) be a trigonometric polynomial on \(\mathbb {T}\), where \(b_0,\ldots ,b_{n-1}\) are real numbers. For \(x\in \mathbb {T}\) write

$$\begin{aligned} g_n(x):=|p_n(x)|^2=\left( \sum _{j=0}^{n-1} b_j \cos (jx) \right) ^2 + \left( \sum _{j=0}^{n-1} b_j \sin (jx) \right) ^2 \end{aligned}$$
(6)

and

$$\begin{aligned} h_n(x):= \left( \sum _{j=0}^{n-1} jb_j \cos (jx) \right) ^2 + \left( \sum _{j=0}^{n-1} jb_j \sin (jx) \right) ^2. \end{aligned}$$

Then

$$\begin{aligned} \Vert p_n\Vert ^2_\infty = \sup _{x\in \mathbb {T}} g_n(x)=\Vert g_n\Vert _\infty \quad \text { and } \quad \Vert p^{\prime }_n\Vert ^2_\infty =\sup _{x\in \mathbb {T}} h_n(x). \end{aligned}$$
(7)

Recall the Bernstein inequality \(\Vert p^{\prime }_n\Vert _\infty \le n\Vert p_n\Vert _\infty\) (see for instance Theorem 14.1.1, Chapter 14, page 508 in [25]). For any \(x\in \mathbb {T}\) we have

$$\begin{aligned} \left| g^{\prime }_n(x) \right| \le 4 \Vert p_n\Vert _\infty \Vert p^{\prime }_n\Vert _\infty \le 4n \Vert p_n\Vert ^2_\infty = 4n \Vert g_n\Vert _\infty . \end{aligned}$$
(8)

Since \(g_n\) is continuous, there exists \(x_0\in \mathbb {T}\) such that \(g_n(x_0)=\Vert g_n\Vert _\infty\). Moreover, by the Mean Value Theorem and relation (8) we obtain

$$\begin{aligned} \left| g_n(x) - g_n(x_0)\right| \le \Vert g^\prime _n\Vert _\infty \left| x - x_0\right| \le 4n \Vert g_n\Vert _\infty \left| x - x_0\right| \end{aligned}$$

for any \(x\in \mathbb {T}\). Take \(I:=[x_0 - \frac{3}{16n},x_0+\frac{3}{16n}]\subset \mathbb {T}\). Notice that the length of I is \(\frac{3}{8 n}\). The preceding inequality yields

$$\begin{aligned} \left| g_n(x) - g_n(x_0)\right| \le \frac{3}{4} \Vert g_n\Vert _\infty \quad \text { for any } x\in I. \end{aligned}$$

Since \(g_n(x_0)=\Vert g_n\Vert _\infty\), the triangle inequality yields \(({1}/{4}) \Vert g_n\Vert _\infty \le \left| g_n(x)\right|\) for any \(x\in I\). The preceding inequality, with the help of relations (6) and (7), implies

$$\begin{aligned} \frac{1}{2} \Vert p_n\Vert _\infty \le \left| p_n(x)\right| \;\; \text { for any } x\in I. \end{aligned}$$
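Claim 1 is easy to probe numerically. The sketch below (our illustration; the grid maximizer only approximates the true one, hence the small tolerance, and all parameters are arbitrary choices) samples Gaussian coefficients and checks that \(|p_n|\) stays above half its maximum on an interval of half-width \(3/(16n)\) around the maximizer.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 128
b = rng.standard_normal(n)  # real coefficients b_0, ..., b_{n-1}

# |p_n(x)| on a fine grid of the circle.
x = np.linspace(0.0, 2.0 * np.pi, 1 << 16, endpoint=False)
p = np.abs(np.polyval(b[::-1], np.exp(1j * x)))
x0 = x[np.argmax(p)]  # approximate maximizer

# Grid points of the interval [x0 - 3/(16n), x0 + 3/(16n)], with wrap-around.
dist = np.abs((x - x0 + np.pi) % (2.0 * np.pi) - np.pi)
I = dist <= 3.0 / (16.0 * n)
ratio = p[I].min() / p.max()
assert ratio >= 0.5 - 1e-6
print(ratio)  # stays above 1/2, as Claim 1 predicts
```

In practice the ratio is much closer to 1 than to 1/2, since the Bernstein bound used in the proof is far from tight near the maximizer.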

Now, we are ready to provide the proof of Theorem 2.2.

Proof of Theorem 2.2

By Lemma 2.1, there exists a \(\delta >0\) such that

$$\begin{aligned} M_\xi (t) \le e^{\alpha ^2 t^2/2}\quad \text{ for } \text{ any } t\in (-\delta ,\delta ),\; \text{ where } \alpha ^2>\sigma ^2>0. \end{aligned}$$

For each \(j\in \{0,\ldots ,n-1\}\), define \(f_j(x)=\phi ({j}/{n})e^{ijx}\), \(x\in \mathbb {T}\). Let \(r_n:=\sum _{j=0}^{n-1} |\phi ({j}/{n})|^2\). First, we suppose that the \(f_j\) are real (that is, we consider only the real part or the imaginary part) and we write \(S_n:=\Vert W_n\Vert _\infty\). Since \(\Vert f_j\Vert _\infty \le \Vert \phi \Vert _\infty =:K\) for every \(j=0,\ldots ,n-1\), we obtain

$$\begin{aligned} e^{{\alpha ^2 t^2 r_n}/{2}}&= \prod _{j=0}^{n-1} e^{{\alpha ^2 t^2\Vert f_j\Vert ^2_\infty }/{2}} \ge \prod _{j=0}^{n-1} e^{{\alpha ^2 t^2|f_j(x)|^2}/{2}}\ge \prod _{j=0}^{n-1} \mathbb {E}\left( e^{t \xi _j f_j(x)}\right) \\&=\mathbb {E}\left( \prod _{j=0}^{n-1} e^{t \xi _j f_j(x)}\right) =\mathbb {E}\left( e^{t W_n(x)}\right) \quad \text { for any } t\in (-{\delta }/{K},{\delta }/{K}). \end{aligned}$$

By Claim 1, there exists a random interval \(I\subset \mathbb {T}\) of length \({1}/{\rho _n}\) with \(\rho _n={8n}/{3}\) such that \(W_n(x)\ge {S_n}/{2}\) or \(-W_n(x) \ge {S_n}/{2}\) on I. Denote by \(\mu\) the normalized Lebesgue measure on \(\mathbb {T}\). Observe that

$$\begin{aligned} e^{{tS_n}/{2}}=\frac{1}{\mu (I)}\int \limits _{I}e^{ {tS_n}/{2}}\mu (\mathrm {d}x) \le \frac{1}{\mu (I)}\int \limits _{I} \left( e^{tW_n(x)} + e^{-tW_n(x)} \right) \mu (\mathrm {d}x). \end{aligned}$$

Then, for every \(t\in (-{\delta }/{K},{\delta }/{K})\) we have

$$\begin{aligned} \mathbb {E}\left( e^{{t S_n}/{2}}\right)&\le \rho _n \mathbb {E}\left( \int _{I} \left( e^{tW_n(x)} + e^{-tW_n(x)} \right) \mu (\mathrm {d}x) \right) \\&\le \rho _n \mathbb {E}\left( \int _{\mathbb {T}} \left( e^{tW_n(x)} + e^{-tW_n(x)} \right) \mu (\mathrm {d}x) \right) \; \le \; 2\rho _n e^{{\alpha ^2 t^2 r_n}/{2}}. \end{aligned}$$

The preceding inequality yields

$$\begin{aligned} \mathbb {E}\left( \exp \left\{ \frac{t}{2}\left( S_n -\alpha ^2 t r_n - \frac{2}{t} \log \left( 2\rho _n l\right) \right) \right\} \right) \le \frac{1}{l} \quad \text {for any } l>0\; \text { and }\; t\in (-{\delta }/{K},{\delta }/{K}), \end{aligned}$$

which implies

$$\begin{aligned} \mathbb {P}\left( S_n \ge \alpha ^2 tr_n + \frac{2}{t}\log \left( 2\rho _n l \right) \right) \le \frac{1}{l}\quad \text {for any } l>0\; \text { and }\; t\in (-{\delta }/{K},{\delta }/{K}). \end{aligned}$$

Note that \(\lim \limits _{n\rightarrow \infty }\frac{r_n}{n}=\int _{0}^{1}|\phi (x)|^2\mathrm {d}x>0\). Taking \(l_n=cn^2\), where c is a positive constant, we have \(\left| \frac{\log (2\rho _n l_n)}{\alpha ^2 r_n}\right| <{\delta ^2}/{K^2}\) for all large n. Choosing \(t_n=\left( \frac{\log (2\rho _n l_n)}{\alpha ^2 r_n}\right) ^{{1}/{2}}\) we obtain

$$\begin{aligned} \mathbb {P}\left( S_n \ge 3 \left( \alpha ^2 r_n\log \left( 2\rho _n l_n\right) \right) ^{{1}/{2}}\right) \le \frac{1}{l_n} \quad \text { for all large } n. \end{aligned}$$
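The choice of \(t_n\) is not the exact minimizer of the exponent, but it produces the clean constant 3: substituting \(t_n\) into \(\alpha ^2 t r_n + ({2}/{t})\log (2\rho _n l_n)\) gives \(\sqrt{AB}+2\sqrt{AB}=3\sqrt{AB}\), with \(A=\alpha ^2 r_n\) and \(B=\log (2\rho _n l_n)\). A numerical sanity check with arbitrary stand-in values:

```python
import numpy as np

# Stand-ins (arbitrary positive values): A plays the role of alpha^2 r_n
# and B the role of log(2 rho_n l_n).
A, B = 2.0, 5.0
t_n = np.sqrt(B / A)              # the choice of t_n made above
value = A * t_n + 2.0 * B / t_n   # the exponent alpha^2 t r_n + (2/t) log(2 rho_n l_n)
assert np.isclose(value, 3.0 * np.sqrt(A * B))
print(value)  # = 3 * sqrt(10) ≈ 9.4868
```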

Since \(f_j=\text{ Re }(f_j)+i\text{ Im }(f_j)\), we get for all large n

$$\begin{aligned} \mathbb {P}\left( \Vert \text{ Re }(W_n) \Vert _\infty \ge 3 \left( \alpha ^2\sum _{j=0}^{n-1} \Vert \text{ Re } (f_j)\Vert ^2_\infty \log \left( 2\rho _n l_n\right) \right) ^{{1}/{2}}\right) \le \frac{1}{l_n} \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}\left( \Vert \text{ Im }(W_n) \Vert _\infty \ge 3 \left( \alpha ^2\sum _{j=0}^{n-1} \Vert \text{ Im } (f_j)\Vert ^2_\infty \log \left( 2\rho _n l_n\right) \right) ^{{1}/{2}}\right) \le \frac{1}{l_n}. \end{aligned}$$

Finally, since \(\rho _n=\frac{8n}{3}\), the choice \(l_n=\frac{3n^2}{16}\) yields

$$\begin{aligned} \mathbb {P}\left( \Vert W_n\Vert _\infty \ge 6\alpha \sqrt{3} \left( r_n \log n\right) ^{{1}/{2}} \right) \le \frac{32}{3n^2} \quad \text { for all large } n. \end{aligned}$$

4 Proof of Theorem 2.3. Localization of the roots for Kac polynomials

The proof is based on the small ball probability for linear combinations of iid random variables introduced by Rudelson and Vershynin in [29]. Throughout the proof, \(\Vert \cdot \Vert _2\) denotes the Euclidean norm, \(|\cdot |\) denotes the modulus of a complex number, and \(\det (\cdot )\) denotes the determinant of a square matrix. We also consider a real number y modulo \(\pi\), written \(y \mod \pi\), which identifies y with the set of numbers x such that \(x-y =k\pi\) for some \(k\in \mathbb {Z}\).

Definition 4.1

(Least common denominator (lcd for short)). Let L be any positive number and let V be any deterministic matrix of dimension \(2\times n\). The least common denominator (lcd) of V is defined as

$$\begin{aligned} D(V):= \inf \left\{ \Vert \theta \Vert _2 >0 : \theta \in \mathbb {R}^2, \;\mathrm {dist}\left( V^T\theta ,\mathbb {Z}^n \right) < L\sqrt{\log _{+} \left( \frac{\Vert V^T\theta \Vert _2}{L}\right) } \right\} , \end{aligned}$$

where \(\mathrm {dist}(v,\mathbb {Z}^n)\) denotes the distance between the vector \(v\in \mathbb {R}^n\) and the set \(\mathbb {Z}^n\), and \(\log _{+}:=\max \{\log ,0\}\).

For more details on the concept of lcd see Section 7 of [29]. Observe that \(D\left( a V\right) = \left({1}/{|a|}\right) D(V)\) for any \(a\not =0\). Indeed, from the definition of \(D\left( a V\right)\) we have that \(D\left( a V\right) \le \Vert \theta \Vert _2\) for any \(\theta \in \mathbb {R}^2\) such that

$$\begin{aligned} \text{ dist }\left( (aV)^T\theta ,\mathbb {Z}^n \right) < L\sqrt{\log _{+} \left( \frac{\Vert (aV)^T\theta \Vert _2}{L}\right) }=L\sqrt{\log _{+} \left( \frac{\Vert V^T(a\theta )\Vert _2}{L}\right) }. \end{aligned}$$

Therefore, from the definition of D(V) we deduce \(D(V)\le \Vert a\theta \Vert _2=|a|\Vert \theta \Vert _2\), that is, \(({1}/{|a|})D(V)\le \Vert \theta \Vert _2\). Taking the infimum over all such \(\theta\) yields \(({1}/{|a|})D(V)\le D(aV)\). On the other hand, from the definition of D(V) we have that \(D\left( V\right) \le \Vert \theta \Vert _2\) for any \(\theta \in \mathbb {R}^2\) such that

$$\begin{aligned} \text{ dist }\left( V^T\theta ,\mathbb {Z}^n \right) < L\sqrt{\log _{+} \left( \frac{\Vert V^T\theta \Vert _2}{L}\right) }=L\sqrt{\log _{+} \left( \frac{\Vert (aV)^T({\theta }/{a})\Vert _2}{L}\right) }. \end{aligned}$$

Therefore, from the definition of D(aV) we deduce \(D(aV)\le \Vert {{\theta}/{a}}\Vert _2= {{\Vert \theta \Vert _2}/{|a|}}\), that is, \(|a|D(aV)\le \Vert \theta \Vert _2\). Taking the infimum over all such \(\theta\) yields \(|a|D(aV)\le D(V)\). Putting these pieces together we obtain the next useful lemma.

Lemma 4.2

For all \(a\ne 0\), the lcd of any matrix \(V\in \mathbb {R}^{2\times n}\) satisfies \(D(V)=|a|D(aV)\).

Let X be a random vector of dimension \(n\times 1\) whose entries are iid random variables satisfying (H). Assume \(\det (VV^T)>0\). For any \(a>0\) and \(t\ge 1\), Theorem 7.5 (Section 7 in [29]) yields

$$\begin{aligned} \mathbb {P}\left\{ \Vert V X \Vert _2 \le \frac{t\sqrt{2}}{a} \right\} = \mathbb {P}\left\{ \Vert aV X \Vert _2 \le \sqrt{2}t \right\} \le \frac{C^2 L^2}{2a^2(\det (VV^T))^{{1}/{2}}} \left( t + \frac{\sqrt{2}}{D(a V)}\right) ^2, \end{aligned}$$

where \(L\ge \sqrt{{2}/{q}}\) with q given in (H), D(aV) is the least common denominator of aV, and the constant C only depends on M and q. Recall the well-known inequality \((x+y)^2\le 2 x^2+ 2y^2\) for any \(x,y\in \mathbb {R}\). By Lemma 4.2, \(D(aV)= ({1}/{a})D(V)\) for all \(a>0\). Therefore,

$$\begin{aligned} \mathbb {P}\left\{ \Vert aV X\Vert _2 \le \sqrt{2}t \right\}&\le \frac{C^2 L^2}{a^2(\det (VV^T))^{{1}/{2}}}t^2+ \frac{2C^2 L^2}{a^2(\det (VV^T))^{{1}/{2}}(D(a V))^2} \nonumber \\&\le \frac{C^2 L^2}{a^2(\det (VV^T))^{{1}/{2}}}t^2+ \frac{2C^2 L^2}{(\det (VV^T))^{{1}/{2}}(D(V))^2}. \end{aligned}$$
(9)

In order to obtain a meaningful upper bound for the left-hand side of the preceding inequality, we need a refined analysis of the following quantities: a lower bound for \(\det (VV^T)\) and a lower bound for D(V). Implicitly, in the definition of D(V) we also need to estimate \(\Vert V^T\theta \Vert _2\) for some adequate \(\theta \in \mathbb {R}^2\).

4.1 Small ball probability analysis

The following analysis explains the reason for introducing the concept of the least common denominator, which plays a crucial role in the proof of Theorem 2.3. Recall

$$\begin{aligned} G_n(z)= \sum _{j=0}^{n-1} \xi _j z^j\quad \text { for } z\in \mathbb {C}. \end{aligned}$$

To \(G_n\) we associate the random trigonometric polynomial

$$\begin{aligned} W_n(x)= \sum _{j=0}^{n-1} \xi _j e^{ijx}\quad \text { for } x\in \mathbb {T}, \end{aligned}$$

where \(\mathbb {T}\) denotes the unit circle \(\mathbb {R}/(2\pi \mathbb {Z})\). Assume \(n\ge 2\) and \(\gamma >{1}/{2}\). Let \(N=\lfloor n^2 \left( \log n\right) ^{{1}/{2}+\gamma } \rfloor\) and \(x_\alpha ={\alpha }/{N}\) for \(\alpha \in \{0,1,2,\ldots , N-1\}\). Let \(t\ge 1\) be fixed and let \(C_0>0\) be the suitable positive constant being given in Theorem 2.2. Define the following event

$$\begin{aligned} \mathcal {G}_n : = \left\{ \Vert W^{\prime }_n\Vert _\infty \le C_0 n^{{3}/{2}}\left( \log n\right) ^{{1}/{2}}, \max _{z\in \mathbb {C}\; :\; \left| \left| z \right| -1 \right| \le 2tn^{-2}} \left| G_n(z) \right| \le n^{{3}/{2}} \right\} , \end{aligned}$$

where \(W^{\prime }_n\) denotes the derivative of \(W_n\) on \(\mathbb {T}\). For short, we also denote by \(\mathbb {P}\left( A,B\right)\) the probability \(\mathbb {P}\left( A\cap B\right)\) for any two events A and B. Recall

$$\begin{aligned} \mathcal {M}_n = \left\{ \min _{z\in \mathbb {C}\; :\; \left| \left| z \right| -1 \right| \le tn^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }} \left| G_n(z) \right| \le t n^{-{1}/{2}}(\log n)^{-\gamma } \right\} . \end{aligned}$$

By the Boole–Bonferroni inequality we obtain

$$\begin{aligned} \mathbb {P}\left( \mathcal {M}_n \right)&\le \mathbb {P}\left( \mathcal {M}_n, \mathcal {G}_n\right) + \mathbb {P}\left( \Vert W'_n\Vert _\infty \ge C_0 n^{{3}/{2}}\left( \log n\right) ^{{1}/{2}} \right) \nonumber \\&\quad +\; \mathbb {P}\left( \displaystyle \max _{z\in \mathbb {C}\; :\; \left| \left| z \right| -1 \right| \le 2tn^{-2}}\left| G_n(z) \right| \ge n^{{3}/{2}} \right) \nonumber \\&=: \mathbb {P}\left( \mathcal {M}_n, \mathcal {G}_n\right) + I_1 + I_2. \end{aligned}$$
(10)

Our goal is to show that each probability on the right-hand side of the above expression tends to zero as n tends to infinity.

Using the Bernstein inequality (Theorem 14.1.1 in [25]) and Theorem 2.2 with \(\phi \equiv 1\), for all large n we have

$$\begin{aligned} \mathbb {P}\left( \Vert W^{\prime }_n\Vert _\infty \ge C_0 n^{{3}/{2}}\left( \log n\right) ^{{1}/{2}} \right) \le \mathbb {P}\left( \Vert W_n\Vert _\infty \ge C_0 \left( n\log n\right) ^{{1}/{2}} \right) \le \frac{C_1}{n^2}. \end{aligned}$$
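The Bernstein step above uses that a trigonometric polynomial of degree \(n-1\) satisfies \(\Vert W'_n\Vert _\infty \le (n-1)\Vert W_n\Vert _\infty\). This can be checked numerically on a fine grid; the Rademacher coefficients, the value of n, and the grid size below are sample choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
xi = rng.choice([-1.0, 1.0], size=n)          # sample Rademacher coefficients
x = np.linspace(0.0, 2.0 * np.pi, 20_000)     # fine grid on the circle
j = np.arange(n)[:, None]

W = (xi[:, None] * np.exp(1j * j * x)).sum(axis=0)             # W_n(x)
Wp = (xi[:, None] * 1j * j * np.exp(1j * j * x)).sum(axis=0)   # W_n'(x)

# Bernstein: for a trigonometric polynomial of degree n-1,
# max |W'| <= (n-1) max |W|; checked here on the grid
assert np.abs(Wp).max() <= (n - 1) * np.abs(W).max()
```

Random coefficients typically leave a large margin in Bernstein's inequality, which is only saturated by the extremal polynomials such as \(e^{i(n-1)x}\).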

On the other hand, using the Markov inequality we obtain

$$\begin{aligned} {\begin{matrix} &{}\mathbb {P}\left( \displaystyle \max _{z\in \mathbb {C}\; :\; \left| \left| z \right| -1 \right| \le 2tn^{-2}}\left| G_n(z) \right| \ge n^{3/2} \right) \le \mathbb {P}\left( \displaystyle \sum _{j=0}^{n-1} \left| \xi _j \right| \left( 1+\frac{2t}{n^2}\right) ^{j} \ge n^{3/2}\right) \\ &{} \le \frac{1}{n^{3/2}} \mathbb {E}\left( \sum _{j=0}^{n-1} \left| \xi _j \right| \left( 1 + \frac{2t}{n^2}\right) ^j\right) \le \frac{e^{2t} \mathbb {E}\left( \left| \xi _0 \right| \right) n}{n^{{3}/{2}}} = \frac{e^{2t}\mathbb {E}\left( \left| \xi _0 \right| \right) }{n^{{1}/{2}}}, \end{matrix}} \end{aligned}$$

where the last inequality follows from the following fact: for any \(j\in \{0,\ldots ,n^2\}\) we have

$$\begin{aligned} \left( 1 + \frac{2t}{n^2}\right) ^{j}\le \left( 1 + \frac{2t}{n^2}\right) ^{n^2}\le e^{2t}. \end{aligned}$$

Therefore,

$$\begin{aligned} I_1 + I_2 = \text {O}\left( n^{-1/2}\right) , \end{aligned}$$
(11)

where the implicit constant depends on the distribution of \(\xi _0\) and on t. We stress that the rate of convergence in (11) can be improved; however, the dominant term on the right-hand side of (10) is \(\mathbb {P}\left( \mathcal {M}_n, \mathcal {G}_n\right)\).
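The elementary bound \((1+2t/n^2)^{n^2}\le e^{2t}\) used above is an instance of \(1+y\le e^y\); a quick numerical check over a few sample values of t and n (our choices):

```python
import math

# (1 + 2t/n^2)^j <= (1 + 2t/n^2)^(n^2) <= e^(2t), since 1 + y <= e^y
for t in (1, 2, 5):
    for n in (2, 10, 100):
        lhs = (1.0 + 2.0 * t / n**2) ** (n**2)
        assert lhs <= math.exp(2 * t)
```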

In the sequel, we describe the strategy for proving that \(\mathbb {P}(\mathcal {M}_n, \mathcal {G}_n)\) is small. First, we construct a set of closed balls that covers \(\{z\in \mathbb {C}:\left| \left| z \right| -1 \right| \le tn^{-2}\}\). For each closed ball, we reduce the event \(\{\mathcal {M}_n, \mathcal {G}_n\}\) to a “simple event” using Taylor’s Theorem. Finally, we use the concept of lcd to show that the probability of each “simple event” is sufficiently small.

The strategy is to consider a set of balls centered at points of the unit circle with suitable radii. We distinguish two kinds of balls: the special balls centered at \(1+0i\) and \(-1+0i\), whose radius is large, \(r=2tn^{-11/10}\), and the balls centered at points z whose argument satisfies \(n^{-11/10}<\left| \arg (z)\mod \pi \right| < \pi - n^{-11/10}\), whose radius is small, \(r=2t n^{-2}\).

Recall that for any \(x\in \mathbb {R}\), \(\lfloor x \rfloor\) denotes the greatest integer less than or equal to x, and that \(N= \lfloor n^2\left( \log n \right) ^{{1}/{2}+\gamma } \rfloor\) and \(x_\alpha = \frac{\alpha }{N}\) for \(\alpha =0,1,\ldots ,N-1\). For \(a\in \mathbb {C}\) and \(s>0\), denote by \(\text {B}\left( a, s\right)\) the closed ball with center a and radius s, i.e., \(\text {B}\left( a, s\right) = \left\{ z\in \mathbb {C}: \left| z-a \right| \le s\right\}\). Denote by \(\mathbb {S}^1\) the unit circle. Let

$$\begin{aligned} \mathcal {A}\left( \mathbb {S}^1,tn^{-2}\left( \log n\right) ^{-1/2-\gamma }\right) :=\left\{ z\in \mathbb {C}:\left| \left| z \right| -1 \right| \le tn^{-2}\left( \log n\right) ^{-1/2-\gamma }\right\} . \end{aligned}$$

Note that

$$\begin{aligned} \mathcal {A}\left( \mathbb {S}^1,tn^{-2}\left( \log n\right) ^{-1/2-\gamma }\right) &= \left\{ z\in \mathcal {A} : n^{-11/10}<\left| \arg (z) \right| < \pi - n^{-11/10} \right\} \\&\quad\cup \left\{ z\in \mathcal {A} : \left| \arg (z) \right| \le n^{-11/10}\quad \text{ or } \quad \left| \arg (z)-\pi \right| \le n^{-11/10} \right\} . \end{aligned}$$

Let \(t\ge 1\) and observe that

$$\begin{aligned}&\left\{ z\in \mathcal {A} : \left| \arg (z) \right| \le n^{-11/10}\quad \text{ or } \quad \left| \arg (z)-\pi \right| \le n^{-11/10} \right\} \\& \qquad \subset \text {B}\left( -1+0i, 2tn^{-11/10}\right) \cup \text {B}\left( 1+0i, 2tn^{-11/10}\right) . \end{aligned}$$

The preceding inclusion shows that any \(z\in \mathcal {A}\) with small argument belongs to one of the balls centered at \(1+0i\) and \(-1+0i\) with radius \(2tn^{-11/10}\). On the other hand, for \(z\in \mathcal {A}\) with large argument we have

$$\begin{aligned}&\left\{ z\in \mathcal {A} : n^{-11/10}<\left| \arg (z) \right|< \pi - n^{-11/10} \right\} \\& \quad \subset \bigcup ^{N-1} _{\begin{array}{c} \alpha =1 \\ \alpha \;:\; n^{-11/10}<\left| 2\pi x_\alpha \mod \pi \right| < \pi - n^{-11/10} \end{array}} \text {B}\left( e^{i2\pi x_\alpha },2tn^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }\right) . \end{aligned}$$

We define \([N-1]:=[1,N-1]\cap \mathbb {N}\) and

$$\begin{aligned} J_1(n,N)&{}:=\left\{ \alpha \in [N-1]: \gcd \left( \alpha , N\right) \ge n^{1+1/10} \left( \log n\right) ^{-\gamma }\right\} ,\\ J_2(n,N)&{}:=\left\{ \alpha \in [N-1]: n^{1+1/10} \left( \log n\right) ^{-\gamma }\ge \gcd \left( \alpha ,N\right) \ge n\left( \log n\right) ^{{1}/{2}+\gamma }\right\} ,\\ J_3(n,N)&{}:=\left\{ \alpha \in [N-1]: n\left( \log n\right) ^{{1}/{2}+\gamma } \ge \gcd \left( \alpha , N\right) \ge n^{9/10}\left( \log n\right) ^{{1}/{2}+\gamma }\right\} , \end{aligned}$$

where \(\gcd (\alpha ,N)\) denotes the greatest common divisor of \(\alpha\) and N. Observe that for any \(\alpha \in J_3(n,N)\) we have

$$\begin{aligned} n - \frac{1}{n\left( \log n\right) ^{{1}/{2}+\gamma }}\le \frac{N}{\gcd \left( \alpha , N\right) } \le n^{11/10}. \end{aligned}$$

The preceding inequalities show that, when \(x_\alpha\) is written as an irreducible fraction, its denominator \(N/\gcd \left( \alpha , N\right)\) is at most \(n^{11/10}\). Therefore,

$$\begin{aligned}&\bigcup ^{N-1} _{\begin{array}{c} \alpha =1 \\ \alpha \;:\; n^{-11/10}<\left| 2\pi x_\alpha \mod \pi \right| < \pi - n^{-11/10} \end{array}} \text {B}\left( e^{i2\pi x_\alpha },2tn^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }\right) \\& \quad\quad\quad\quad\quad\quad = \bigcup _{\alpha \in J_1(n,N)} \text {B}\left( e^{i2\pi x_\alpha },2tn^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }\right) \\&\qquad \quad\quad\quad\quad\quad\quad \cup \bigcup _{\alpha \in J_2(n,N)} \text {B}\left( e^{i2\pi x_\alpha },2tn^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }\right) \\& \qquad \quad\quad\quad\quad\quad\quad \cup \bigcup _{\alpha \in J_3(n,N)} \text {B}\left( e^{i2\pi x_\alpha },2tn^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }\right) . \end{aligned}$$

We emphasize that if \(\alpha \in J_1(n,N)\cup J_2(n,N)\cup J_3(n,N)\), then we have

$$\begin{aligned} n^{-11/10}< \left| 2\pi x_\alpha \mod \pi \right| < \pi - n^{-11/10}. \end{aligned}$$

Consequently,

$$\begin{aligned} \mathbb {P}\left\{ \mathcal {M}_n, \mathcal {G}_n\right\} & \le \; \mathbb {P}\left\{ \mathcal {G}_n, \min _{z\in \text {B}\left( 1+0i, 2tn^{-11/10}\right) } \left| G_n(z) \right|< tn^{-1/2}\left( \log n\right) ^{-\gamma }\right\} \\ &{}\quad +\mathbb {P}\left\{ \mathcal {G}_n, \min _{z\in \text {B}\left( -1 + 0i, 2tn^{-11/10}\right) } \left| G_n(z) \right| < tn^{-1/2}\left( \log n\right) ^{-\gamma }\right\} \\ &{}\quad + \sum \limits _{\alpha \in J_1(n,N)} \mathbb {P}\left\{ \mathcal {G}_n, \text {B}_\alpha \right\} +\quad \sum \limits _{\alpha \in J_2(n,N)} \mathbb {P}\left\{ \mathcal {G}_n, \text {B}_\alpha \right\} \quad +\sum \limits _{\alpha \in J_3(n,N)} \mathbb {P}\left\{ \mathcal {G}_n, \text {B}_\alpha \right\} , \end{aligned}$$
(12)

where

$$\begin{aligned} \text {B}_\alpha := \left\{ \min _{z\in B\left( e^{i2\pi x_\alpha }, 2t n^{-2}\left( \log n\right) ^{-{1}/{2}-\gamma }\right) } \left| G_n(z) \right| < tn^{-1/2}\left( \log n \right) ^{-\gamma } \right\} . \end{aligned}$$

4.1.1 Small ball analysis at the points \({\varvec{1+0i}}\) and \({\varvec{-1+0i}}\)

At the two points \(1+0i\) and \(-1+0i\) we place the two largest closed balls of our collection. This is remarkable since the number of real roots of a Kac polynomial for some common random variables is at least of order \(\frac{\log n}{\log \log \log n}\) with high probability [24]. This means that the real roots of a Kac polynomial approach the unit circle slowly.

On the one hand, let \(z\in \text {B}\left( 1+0i, 2tn^{-{11}/{10}}\right)\). By Taylor’s Theorem we obtain

$$\begin{aligned} \left| G_n(z)-G_n(1) \right| \le \left| z-1 \right| \left| G^{\prime }_n(1) \right| + \left| R_2(z) \right| , \end{aligned}$$

where \(R_2(z)\) is the error of the Taylor approximation of order 2. On the event \(\mathcal {G}_n\) we have

$$\begin{aligned} \left| R_2(z) \right|&\le \frac{\left( 2tn^{-1-1/10}\right) ^{2}}{1-\text {o}(1)} \left[ \max _{z\in \mathbb {C}\; :\; \left| \left| z \right| -1 \right| \le 2tn^{-2}}\left| G_n(z) \right| \right] \\&\le \frac{4t^2 n^{-2-1/5} n^{3/2}}{1-\text {o}(1)} = \frac{4t^2 n^{-1/2-1/5}}{1-\text {o}(1)}, \end{aligned}$$

where \(\text {o}(1) = 2tn^{-1-1/10}\). Assuming that \(\mathcal {G}_n\) holds, the preceding inequality yields

$$\begin{aligned} \left| G_n(z)-G_n(1) \right|&\le 2tn^{-1-1/10} \left| G_n'(1) \right| + \frac{4t^2 n^{-1/2-1/5}}{1-\text {o}(1)} \le 2tn^{-1-1/10} \Vert W_n'\Vert _\infty + \frac{4t^2 n^{-1/2-1/5}}{1-\text {o}(1)} \\&\le 2C_0 t n^{1/2-1/10} \left( \log n\right) ^{1/2} + \frac{4t^2 n^{-1/2-1/5}}{1-\text {o}(1)}. \end{aligned}$$

Hence,

$$\begin{aligned}&\mathbb {P}\left( \mathcal {G}_n, \displaystyle \min _{z\in \text {B}\left( 1+0i, 2tn^{-11/10}\right) } \left| G_n(z) \right| \le tn^{-1/2}\left( \log n\right) ^{-\gamma }\right) \\& \le \mathbb {P}\left( \left| G_n(1) \right| \le 2C_2 t n^{1/2-1/10} \left( \log n\right) ^{1/2} \right) , \end{aligned}$$

where \(2C_2 = 2C_0t +4t^2 + 1\). Since \(G_n(1)=\sum _{j=0}^{n-1} \xi _j\), Corollary 7.6 in [29] implies for \(L\ge \sqrt{1/q}\) (with q given in (H)) that

$$\begin{aligned} \mathbb {P}\left\{ \left| G_n(1) \right| \le 2C_2tn^{1/2-1/10}\left( \log n\right) ^{1/2}\right\} \le \frac{C_3 L}{\Vert \mathbf {a}\Vert } \left( 2C_2t + \frac{1}{D(\mathbf {a})}\right) , \end{aligned}$$

where \(C_3\) is a positive constant and \(D(\mathbf {a})\) is the lcd of the vector

$$\begin{aligned} \mathbf {a}=( n^{1/2-1/10} \left( \log n\right) ^{1/2})^{-1} \left( 1,\ldots ,1\right) \in \mathbb {R}^{n}. \end{aligned}$$

By Proposition 7.4 in [29] we have \(D(\mathbf {a})\ge \frac{1}{2}n^{1/2-1/10}\left( \log n\right) ^{1/2}\). Therefore,

$$\begin{aligned}&\mathbb {P}\left( \left| G_n(1) \right| \le 2C_2 t n^{1/2-1/10} \left( \log n\right) ^{1/2} \right)&\nonumber \\&\quad \le \frac{C_3 L \left( \log n\right) ^{1/2}}{n^{1/10}}\left( 2C_2t + \frac{2}{n^{1/2-1/10}\left( \log n\right) ^{1/2}}\right) \le \frac{\left( 2C_2t+2\right) L\left( \log n\right) ^{1/2}}{n^{1/10}}. \end{aligned}$$
(13)

On the other hand, let \(z\in \text {B}\left( -1+0i, 2tn^{-11/10}\right)\). Assuming that \(\mathcal {G}_n\) holds, Taylor’s Theorem implies

$$\begin{aligned} \left| G_n(z)-G_n(-1) \right|&\le \left| z+1 \right| \left| G'_n(-1) \right| + \left| R_2(z) \right| \le 2tn^{-1-1/10} \Vert W'_n\Vert _\infty + \frac{4t^2n^{-1/2-1/5}}{1-\text {o}(1)} \\&\le \left( 2C_0 t + 4t^2\right) n^{1/2-1/10}\left( \log n\right) ^{1/2}. \end{aligned}$$

Thus,

$$\begin{aligned}&\mathbb {P}\left( \mathcal {G}_n,\displaystyle \min _{z\in \text {B}\left( -1+0i, 2tn^{-11/10}\right) } \left| G_n(z) \right| \le tn^{-1/2}\left( \log n\right) ^{-\gamma }\right) \\& \quad \le \mathbb {P}\left( \left| G_n(-1) \right| \le 2C_2 t n^{1/2-1/10} \left( \log n\right) ^{1/2}\right) . \end{aligned}$$

Since \(G_n(-1) = \sum _{j=0}^{n-1} \left( -1\right) ^{j}\xi _j\), by Corollary 7.6 in [29] for \(L\ge \sqrt{1/q}\) (with q given in (H)) we obtain

$$\begin{aligned} \mathbb {P}\left( \left| G_n(-1) \right| \le 2C_2 t n^{1/2-1/10} \left( \log n\right) ^{1/2}\right) \le \frac{C_3L}{\Vert \mathbf {b}\Vert }\left( 2C_2t +\frac{1}{D(\mathbf {b})}\right) , \end{aligned}$$

where \(C_3\) is a positive constant and \(D(\mathbf {b})\) is the lcd of the vector

$$\begin{aligned} \mathbf {b}=( n^{1/2-1/10}\left( \log n\right) ^{1/2})^{-1}\left( 1,-1,\ldots ,(-1)^{n-1}\right) \in \mathbb {R}^n. \end{aligned}$$

By Proposition 7.4 in [29], we have \(D(\mathbf {b})\ge \frac{1}{2} n^{1/2-1/10} \left( \log n\right) ^{1/2}\). Therefore,

$$\begin{aligned}&\mathbb {P}\left( \left| G_n(-1) \right| \le 2C_2 t n^{1/2-1/10} \left( \log n\right) ^{1/2}\right) \nonumber \\& \quad \le \frac{C_3 L \left( \log n\right) ^{1/2}}{n^{1/10}} \left( 2C_2 t + \frac{2}{n^{1/2-1/10}\left( \log n\right) ^{1/2}}\right) \le \frac{\left( 2C_2 t + 2\right) L \left( \log n \right) ^{1/2}}{n^{1/10}}. \end{aligned}$$
(14)

Combining (13) and (14) we obtain

$$\begin{aligned}&\mathbb {P}\left( \mathcal {G}_n, \min _{z\in \text {B}\left( 1+0i, 2tn^{-{11}/{10}}\right) } \left| G_n(z) \right| \le tn^{-{1}/{2}}\left( \log n\right) ^{-\gamma }\right) \\ {}&\quad +\mathbb {P}\left( \mathcal {G}_n,\min _{z\in \text {B}\left( -1+0i, 2tn^{-{11}/{10}}\right) } \left| G_n(z) \right| \le tn^{-{1}/{2}}\left( \log n\right) ^{-\gamma }\right) = \text {O}\left( {n^{-1/10}}\right) . \end{aligned}$$

4.1.2 Small ball analysis at \({\varvec{e}}^{{\varvec{i2\pi x_\alpha }}}\)

In this part, we focus mainly on the complex roots of a Kac polynomial. We remark that the complex roots are more dispersed than the real roots, but they approach the unit circle faster than the real roots do. However, the complex roots do not approach it extremely fast.

Let \(z\in \text {B}( e^{i2\pi x_\alpha },2tn^{-2} \left( \log n\right) ^{-1/2-\gamma })\) and assume that \(\mathcal {G}_n\) holds. By Taylor’s Theorem we obtain

$$\begin{aligned} \left| G_n(z) - G_n\left( e^{i2\pi x_\alpha }\right) \right| \le \left| z-e^{i2\pi x_\alpha } \right| \left| G'_n\left( e^{i2\pi x_\alpha }\right) \right| + \left| R_2(z) \right| , \end{aligned}$$

where \(R_2(z)\) is the error of the Taylor approximation of order 2, and it satisfies

$$\begin{aligned} \left| R_2(z) \right| \le \frac{\left( 2tn^{-2}\right) ^2}{1-2tn^{-2}} \left[ \max _{z\in \mathbb {C}\; :\;\left| \left| z \right| -1 \right| \le 2tn^{-2}} \left| G_n(z) \right| \right] \le \frac{4t^2 n^{-5/2}}{1-2tn^{-2}}. \end{aligned}$$

Then

$$\begin{aligned} \left| G_n(z) - G_n\left( e^{i2\pi x_\alpha }\right) \right|&\le 2tn^{-2}\left( \log n\right) ^{-1/2-\gamma } \Vert W'_n\Vert + \frac{4t^2 n^{-5/2}}{1-2tn^{-2}} \\&\le 2C_0 tn^{-1/2} \left( \log n\right) ^{-\gamma } +\frac{4t^2 n^{-5/2}}{1-2tn^{-2}}. \end{aligned}$$

Hence,

$$\begin{aligned} \mathbb {P}\left( \mathcal {G}_n, \text {B}_\alpha \right) \le \mathbb {P}\left( \left| G_n\left( e^{i2\pi x_\alpha }\right) \right| \le 2C_4 t n^{-1/2}\left( \log n\right) ^{-\gamma } \right) , \end{aligned}$$

where \(2C_4 = 2C_0 + 4t + 1\). To prove that \(\mathbb {P}\left( \mathcal {G}_n, \text {B}_\alpha \right)\) tends to zero as \(n\rightarrow \infty\), we rewrite the sum \(G_n(e^{i2\pi x_\alpha })\) as the product of a matrix and a vector. This simple rewriting allows us to apply lcd techniques for matrices. To be precise, we define the \(2\times n\) matrix \(V_\alpha\) as follows:

$$\begin{aligned} V_\alpha := \left[ \begin{array}{cccc} 1 &{} \cos \left( 2\pi x_\alpha \right) &{} \ldots &{} \cos \left( (n-1)2\pi x_\alpha \right) \\ 0 &{} \sin \left( 2\pi x_\alpha \right) &{} \ldots &{} \sin \left( (n-1)2\pi x_\alpha \right) \end{array} \right] \end{aligned}$$

and \(X:=\left[ \xi _0,\ldots ,\xi _{n-1}\right] ^T \in \mathbb {R}^n\). Notice that

$$\begin{aligned} V_\alpha X = \left[ \sum _{j=0}^{n-1} \xi _j \cos \left( j2\pi x_\alpha \right) , \sum _{j=0}^{n-1} \xi _j \sin \left( j2\pi x_\alpha \right) \right] ^T \in \mathbb {R}^2, \end{aligned}$$

which implies

$$\begin{aligned} \Vert V_\alpha X\Vert _2 = \left| \sum _{j=0}^{n-1} \xi _j e^{ij2\pi x_\alpha } \right| = \left| G_n\left( e^{i2\pi x_\alpha }\right) \right| . \end{aligned}$$
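The identity \(\Vert V_\alpha X\Vert _2 = |G_n(e^{i2\pi x_\alpha })|\) is easy to confirm numerically; a minimal sketch in which the values of n and \(x_\alpha\) and the coefficient distribution are our sample choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x_alpha = 7 / 113                    # sample rational x_alpha
j = np.arange(n)

# 2 x n matrix V_alpha with rows cos(j*2*pi*x_alpha) and sin(j*2*pi*x_alpha)
V = np.vstack([np.cos(j * 2 * np.pi * x_alpha),
               np.sin(j * 2 * np.pi * x_alpha)])
X = rng.choice([-1.0, 1.0], size=n)  # Rademacher coefficients xi_j

lhs = np.linalg.norm(V @ X)          # ||V_alpha X||_2
rhs = abs(np.sum(X * np.exp(1j * j * 2 * np.pi * x_alpha)))  # |G_n(e^{i 2 pi x_alpha})|
assert abs(lhs - rhs) < 1e-10
```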

Let \(\Theta = r\left[ \cos (\theta ), \sin (\theta )\right] ^T\in \mathbb {R}^2\), where \(r>0\) and \(\theta \in \left[ 0,2\pi \right]\). For fixed \(r,\theta\), we have

$$\begin{aligned} V_\alpha ^T\Theta = r\left[ \cos \left( -\theta \right) ,\cos \left( 2\pi x_\alpha - \theta \right) ,\ldots ,\cos \left( 2\left( n-1\right) \pi x_\alpha - \theta \right) \right] ^T. \end{aligned}$$

Note that \(\Vert V_\alpha ^T\Theta \Vert _2\le r\sqrt{n}\). On the other hand, we have

$$\begin{aligned} \det \left( V_\alpha V_\alpha ^T \right) = \det \left[ \begin{array}{cc} \sum _{j=0}^{n-1} \cos ^2\left( j2\pi x_\alpha \right) &{} \frac{1}{2}\sum _{j=0}^{n-1} \sin \left( 2\cdot j2\pi x_\alpha \right) \\ \frac{1}{2}\sum _{j=0}^{n-1} \sin \left( 2\cdot j2\pi x_\alpha \right) &{} \sum _{j=0}^{n-1} \sin ^2\left( j2\pi x_\alpha \right) \end{array} \right] . \end{aligned}$$
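The off-diagonal entries above come from the identity \(\cos \theta \sin \theta = \frac{1}{2}\sin (2\theta )\). One can confirm the displayed \(2\times 2\) determinant numerically; n and \(x_\alpha\) below are sample values of ours, and for this sample \(x_\alpha\) the determinant has order \(n^2\):

```python
import numpy as np

n = 40
x = 7 / 113
j = np.arange(n)
V = np.vstack([np.cos(j * 2 * np.pi * x), np.sin(j * 2 * np.pi * x)])

# Direct determinant of V V^T ...
direct = np.linalg.det(V @ V.T)

# ... versus the displayed closed form: diagonal entries sum cos^2 and
# sum sin^2, off-diagonal entry (1/2) sum sin(2 * j * 2*pi*x)
a = np.sum(np.cos(j * 2 * np.pi * x) ** 2)
d = np.sum(np.sin(j * 2 * np.pi * x) ** 2)
b = 0.5 * np.sum(np.sin(2 * j * 2 * np.pi * x))
assert abs(direct - (a * d - b * b)) < 1e-8

# For this sample x the determinant is of order n^2 (roughly n^2 / 4)
assert 0.1 * n**2 < direct < n**2
```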

Now, we are in the setting of inequality (9). Recall that \(x_\alpha\) satisfies

$$\begin{aligned} n^{-11/10}< \left| 2\pi x_\alpha \mod \pi \right| <\pi - n^{-11/10}. \end{aligned}$$

In the following we distinguish three cases for \(x_\alpha\).

4.1.3 Case 1. \(\alpha \in J_1(n,N)\)

Assume that \(\gcd \left( \alpha ,N\right) \ge n^{1+1/10}\left( \log n\right) ^{-\gamma }\). Recall that \(N=\lfloor n^2\left( \log n\right) ^{1/2+\gamma } \rfloor\). Then we have

$$\begin{aligned} \frac{N}{\gcd \left( \alpha ,N\right) } \le \frac{n^2\left( \log n\right) ^{1/2+\gamma }}{n^{1+1/10}\left( \log n\right) ^{-\gamma }} = n^{1-1/10}\left( \log n\right) ^{1/2+2\gamma }. \end{aligned}$$

Note that \(2\pi x_\alpha\) satisfies \(n^{-1}<\left| 2\pi x_\alpha \mod \pi \right| <\pi - n^{-1}\) for all large n. By Lemma 3.2 part 1 in [19], there exist positive constants \(c_5,C_5\) such that

$$\begin{aligned} c_5 n^2 \le \det \left( V_\alpha V_\alpha ^T\right) \le C_5 n^2. \end{aligned}$$
(15)

Before continuing with our arguments, we estimate the number of indices \(\alpha\) for which the condition \(\gcd \left( \alpha , N\right) \ge n^{1+1/10}\left( \log n\right) ^{-\gamma }\) holds. The following lemma provides such an estimate.

Lemma 4.3

The number of indices \(\alpha\) such that

$$\begin{aligned} \gcd \left( \alpha , N\right) \ge \frac{n^{1+1/10}}{\left( \log n\right) ^{\gamma }} \end{aligned}$$

is at most

$$\begin{aligned} n^{1-1/10+\text {o}(1)}\left( \log n\right) ^{1/2+2\gamma +\text {o}(1)}. \end{aligned}$$
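The count in Lemma 4.3 can be organized through the classical identity \(\#\{\alpha \in [1,N-1]:\gcd (\alpha ,N)=d\}=\varphi (N/d)\) for each proper divisor d of N, where \(\varphi\) is Euler's totient function. A brute-force check of the resulting divisor-sum count; the small modulus and threshold are sample values of ours:

```python
from math import gcd

def euler_phi(m: int) -> int:
    # Euler's totient function by direct counting (fine for small m)
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

N, thr = 564, 7    # sample modulus and gcd threshold

# Brute force: indices alpha in [1, N-1] with gcd(alpha, N) >= thr
brute = sum(1 for a in range(1, N) if gcd(a, N) >= thr)

# Divisor-sum formula: sum of phi(N/d) over proper divisors d >= thr of N
formula = sum(euler_phi(N // d) for d in range(thr, N) if N % d == 0)
assert brute == formula
```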

By Proposition 7.4 in [29], the lcd of \(V_\alpha\) satisfies \(D\left( V_\alpha \right) \ge 1/2\). Thus, by inequalities (9) and (15), and Lemma 4.3 we obtain

$$\begin{aligned} &{} \sum\limits_{\alpha \in J_1(n,N)} \mathbb {P}\left( \left| G_n\left( e^{i2\pi x_\alpha } \right) \right| \le 2tC_4 n^{-1/2}\left( \log n\right) ^{-\gamma } \right) \\ &{} \quad \le 2n^{1-1/10+\text {o}\left( 1 \right) } \left( \log n\right) ^{1/2+2\gamma +\text {o}\left( 1 \right) }\left( \frac{2C^2L^2\left( 2tC_4\right) ^2}{\left( c_5 n^2 \right) ^{1/2} \left( n^{1/2} \left( \log n\right) ^{\gamma }\right) ^2} + \frac{2C^2L^2}{\frac{1}{4} \left( c_5 n^2 \right) ^{1/2}}\right) \\ &{}\quad = \frac{4C^2L^2 \left( 2tC_4\right) ^2 \left( \log n\right) ^{1/2+\text {o}(1)}}{c_5^{1/2} n^{1+1/10 -\text {o}(1)}} + \frac{4C^2L^2\left( \log n\right) ^{1/2+2\gamma +\text {o}(1)}}{\frac{1}{4} c_5^{1/2} n^{1/10-\text {o}(1)}} \\ &{} \quad \le C_6 \frac{\left( \log n\right) ^{{1}/{2}+2\gamma +\text {o}(1)}}{n^{{1}/{10}-\text {o}(1)}}, \end{aligned}$$

where \(C_6=4c_5^{-1/2}C^2L^2 \left( \left( 2tC_4\right) ^2 +4\right)\).

4.1.4 Case 2. \(\alpha \in J_2(n,N)\)

Assume that

$$\begin{aligned} n^{1+1/10} \left( \log n\right) ^{-\gamma } \ge \gcd \left( \alpha , N\right) \ge n \left( \log n\right) ^{1/2+\gamma }. \end{aligned}$$

Since \(N=\lfloor n^2 \left( \log n\right) ^{1/2+\gamma } \rfloor\), we have

$$\begin{aligned} n\ge \frac{N}{\gcd \left( \alpha ,N\right) }\ge n^{1-1/10}\left( \log n\right) ^{1/2+2\gamma } - \text {o}(1), \end{aligned}$$
(16)

where \(\text {o}(1)=n^{-1-1/10}\left( \log n \right) ^{\gamma }\). We observe that \(2\pi x_\alpha\) is such that

$$\begin{aligned} n^{-1}\le \left| 2\pi x_\alpha \mod \pi \right| \le \pi - n^{-1}. \end{aligned}$$

By Lemma 3.2 part 1 in [19] there exist positive constants \(c_5,C_5\) such that

$$\begin{aligned} c_5 n^2 \le \det \left( V_\alpha V_\alpha ^T\right) \le C_5 n^2. \end{aligned}$$

Also, we observe that \(x_\alpha = \frac{\alpha }{N}=\frac{\alpha '}{N'}\) where \(\alpha = \alpha ' \gcd \left( \alpha ,N\right)\) and \(N = N'\gcd \left( \alpha , N\right)\). Note that \(\gcd \left( \alpha ',N'\right) =1\). Since \(N'\le n\), for any \(\theta\) we have

$$\begin{aligned} {\begin{matrix} \left\{ \exp \left( i \left( j2\pi \frac{\alpha '}{N'} - \theta \right) \right) : j = 0,\ldots , N'-1 \right\} = \left\{ \exp \left( i \left( j2\pi \frac{1}{N'} - \theta \right) \right) : j = 0,\ldots , N'-1 \right\} . \end{matrix}} \end{aligned}$$

The above observation allows us to assume that \(x_\alpha = {1}/{N'}\). To apply inequality (9) we need to estimate the lcd. The following lemma establishes an arithmetic property of the values \(\cos \left( j2\pi x_\alpha - \theta \right)\) for \(j=0,\ldots ,N'-1\), which is crucial for estimating the lcd.
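The set identity above says that when \(\gcd (\alpha ',N')=1\), the points \(e^{i(j2\pi \alpha '/N'-\theta )}\) run over the same rotated \(N'\)-th roots of unity as \(e^{i(j2\pi /N'-\theta )}\), only in a different order. A quick numerical check; the values of \(N'\), \(\alpha '\), and \(\theta\) are ours:

```python
import numpy as np

N2, alpha2, theta = 113, 16, 0.7   # sample values with gcd(16, 113) = 1
j = np.arange(N2)

# Angles of both point sets, sorted so they can be compared as sets
angles1 = np.sort(np.angle(np.exp(1j * (j * 2 * np.pi * alpha2 / N2 - theta))))
angles2 = np.sort(np.angle(np.exp(1j * (j * 2 * np.pi / N2 - theta))))

# Same set of points on the unit circle, traversed in a different order
assert np.allclose(angles1, angles2)
```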

Lemma 4.4

Fix \(\theta \in [0,2\pi )\) and a positive integer m. Let \(\mathcal {V}\) be the vector in \(\mathbb {R}^m\) whose entries are \(\mathcal {V}_j= r\cos \left( j 2\pi x-\theta \right)\) for \(j=0,\ldots ,m-1\), where \(r\ge 2\) is a positive integer and \(x={1}/{m}\). Then

$$\begin{aligned} \mathrm {dist}\left( \mathcal {V},\mathbb {Z}^m\right) \ge \frac{1}{48}\cdot \frac{1}{2\pi x}\quad \text{ whenever } \frac{1}{2r\left( 2\pi x\right) }\ge 6. \end{aligned}$$

Since we need to analyze

$$\begin{aligned} V_\alpha ^T\Theta = r\left[ \cos \left( -\theta \right) ,\cos \left( 2\pi x_\alpha - \theta \right) ,\ldots ,\cos \left( 2\left( n-1\right) \pi x_\alpha - \theta \right) \right] ^T \end{aligned}$$

in the definition of the least common denominator, we may assume without loss of generality that r is a positive integer. In fact, by Proposition 7.4 in [29], we may take \(r\ge {1}/{2}\). For the case \(2>r\ge {1}/{2}\), we can replicate the ideas in the proof of Lemma 4.4 to obtain \(\text {dist}\left( V_\alpha ^T\Theta , \mathbb {Z}^n\right) \ge Cn^{1-1/10}\) for some positive constant C. If \(r\ge 2\), we may use \(\lfloor r\rfloor\) instead of r in Lemma 4.4.

If \(r\le \frac{1}{2\cdot 6\cdot 2\pi x_\alpha }\), by Lemma 4.4 and expression (16), we would obtain

$$\begin{aligned} \frac{1}{48}\cdot \frac{1}{2\pi } n^{1-1/10} \left( \log n\right) ^{1/2+2\gamma } - \text {o}\left( 1\right) &{} \le \frac{1}{48} \cdot \frac{1}{2\pi x_\alpha } \le \text {dist}\left( V_\alpha ^T\Theta , \mathbb {Z}^n\right) \\ &{} \le L\sqrt{\log _+\frac{\Vert V_\alpha ^T \Theta \Vert _2}{L}} \le L\sqrt{\log _+\frac{rn^{1/2}}{L}} \le L\sqrt{\log _+\frac{n^{3/2} }{L}}, \end{aligned}$$

which is a contradiction since \(L\ge \sqrt{{2}/{q}}\) is fixed. Thus, we must have \(r > \frac{1}{2\cdot 6\cdot 2\pi x_\alpha }\), which implies that the lcd of \(V_\alpha\) satisfies

$$\begin{aligned} D\left( V_\alpha \right) > \frac{1}{12}\cdot \frac{1}{2\pi } n^{1-1/10}\left( \log n\right) ^{1/2+2\gamma }-\text {o}(1). \end{aligned}$$

By inequality (9) we obtain

$$\begin{aligned} &{} \sum \limits_{\alpha \in J_2(n,N)} \mathbb {P}\left( \left| G_n\left( e^{i2\pi x_\alpha } \right) \right| \le 2tC_4 n^{-1/2}\left( \log n\right) ^{-\gamma }\right) \\ &{} \quad \le n^2 \left( \log n\right) ^{1/2+\gamma } \left( \frac{2C^2L^2 \left( 2tC_4\right) ^2}{\left( c_5 n^2\right) ^{1/2} \left( n^{1/2} \left( \log n \right) ^{\gamma }\right) ^2} \right) \\ &{} \quad \quad \;\;+ n^2 \left( \log n\right) ^{1/2+\gamma } \left( \frac{2C^2L^2}{\left( c_5 n^2\right) ^{1/2} \left( \frac{1}{12}\cdot \frac{1}{2\pi }\cdot n^{1-1/10} \left( \log n\right) ^{1/2+2\gamma } - \text {o}\left( 1\right) \right) ^2} \right) \\ &{} \quad \le \frac{2C^2L^2 \left( 2tC_2\right) ^2}{\left( \log n \right) ^{\gamma -1/2}} + \frac{2C^2 L^2}{c_5^{1/2}\left( \frac{1}{12}\cdot \frac{1}{2\pi }\right) ^2 n^{1-1/5}\left( \log n\right) ^{1/2+3\gamma } \left( 1- \text {o}\left( 1\right) \right) ^2}.\\ &{} \quad \le \frac{C_7}{\left( \log n\right) ^{\gamma - {1}/{2}}}, \end{aligned}$$

where \(C_7 = 2C^2L^2\left( \left( 2tC_2\right) ^2 + c_5^{-1/2}\right)\).

4.1.5 Case 3. \(\alpha \in J_3(n,N)\)

Assume that \(n\left( \log n\right) ^{1/2+\gamma } \ge \gcd \left( \alpha , N\right) \ge n^{9/10}\left( \log n\right) ^{1/2+\gamma }\). Since \(N=\lfloor n^2 \left( \log n\right) ^{1/2+\gamma }\rfloor\), we have

$$\begin{aligned} n^{11/10} \ge \frac{N}{\gcd \left( \alpha ,N\right) } \ge n - \text {o}(1), \end{aligned}$$

where \(\text {o}(1) = \frac{1}{n\left( \log n\right) ^{1/2+\gamma }}\). Note that \(2\pi x_\alpha\) satisfies

$$\begin{aligned} n^{-11/10} \le \left| 2\pi x_\alpha \mod \pi \right| \le \left( n -\text {o}(1)\right) ^{-1} \end{aligned}$$

or

$$\begin{aligned} \pi - \left( n -\text {o}(1)\right) ^{-1} \le \left| 2\pi x_\alpha \mod \pi \right| \le \pi - n^{-11/10}. \end{aligned}$$

By Lemma 3.2 part 2 in [19], there exist positive constants \(c_5,C_5\) such that

$$\begin{aligned} c_5 n^{2-1/5} \le \det \left( V_\alpha V_\alpha ^T\right) \le C_5n^2. \end{aligned}$$

On the other hand, the number of indices \(\alpha\) satisfying the condition on \(\gcd \left( \alpha ,N\right)\) is at most

$$\begin{aligned} 4N\left( \frac{1}{n-\text {o}(1)} - \frac{1}{n^{1+1/10}}\right) \le 4n\left( \log n\right) ^{1/2+\gamma }\left( \frac{1}{1-\text {o}(1)}-\frac{1}{n^{1/10}}\right) . \end{aligned}$$

In order to use inequality (9), we need to analyze the least common denominator of \(V_\alpha\) in this case. In particular, we need a suitable lower bound for the distance between \(V_\alpha ^T\Theta\) and \(\mathbb {Z}^n\). We use ideas similar to those in the proof of Lemma 4.4.

Since \(x_\alpha =\frac{\alpha }{N} = \frac{\alpha '}{N'}\) with \(\gcd \left( \alpha ',N'\right) = 1\) and \(N'\ge n - 1\), all the points in

$$\begin{aligned} \left\{ \exp \left( i \left( j 2\pi x_\alpha - \theta \right) \right) : j=0,\ldots , n-1\right\} \quad \text {are different}. \end{aligned}$$

Let r be a positive integer and consider the set of intervals of the form \(\left[ \frac{m}{r}, \frac{m+1}{r}\right]\) for all \(m\in \left[ -r,r\right] \cap \mathbb {Z}\). Let \(I_m\) and \(J_{m}\) be the two corresponding arcs on the unit circle (one in the upper half-plane and one in the lower) whose projection onto the horizontal axis is the interval \(\left[ \frac{m}{r}, \frac{m+1}{r}\right]\). If \(r < n\), by the pigeonhole principle there exists at least one \(I_{M}\) (or \(J_M\)), for some \(M\in \left[ -r,r\right] \cap \mathbb {Z}\), which contains at least \({n}/{(2r)}\) of the points \(\exp \left( i \left( j 2\pi x_\alpha - \theta \right) \right)\). For each \(\cos \left( j 2\pi x_\alpha - \theta \right) \in \left[ \frac{M}{r}, \frac{M+1}{r}\right]\), we define

$$\begin{aligned} d_j = \min \left\{ \left| \cos \left( j 2\pi x_\alpha - \theta \right) - \frac{M}{r} \right| , \left| \cos \left( j 2\pi x_\alpha - \theta \right) - \frac{M+1}{r} \right| \right\} . \end{aligned}$$

Note that among the values \(d_j\) at most two can be equal, and

$$\begin{aligned} \min _{\begin{array}{c} 0\;\le \; l,k\; \le \; n-1 \\ l\ne k \end{array}} \left\{ \left| l 2\pi x_\alpha - k 2\pi x_\alpha \right| \right\} \ge 2\pi \frac{1}{N'}. \end{aligned}$$
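The gap bound \(2\pi /N'\) holds because \((l-k)\alpha '\) is never divisible by \(N'\) when \(0<|l-k|<N'\). A numerical check of the circular gaps between the points, with sample parameters of ours:

```python
import numpy as np

N2, alpha2 = 113, 16        # sample values with gcd(alpha', N') = 1
n = 40                      # n - 1 < N', so the n points are distinct
x = alpha2 / N2

# Angles l * 2*pi*x mod 2*pi for l = 0, ..., n-1
ang = (np.arange(n) * 2 * np.pi * x) % (2 * np.pi)

# All pairwise circular distances between distinct points
diff = np.abs(ang[:, None] - ang[None, :])
gap = np.minimum(diff, 2 * np.pi - diff)
off = gap[~np.eye(n, dtype=bool)]

# Minimal gap is at least 2*pi / N' (up to floating-point tolerance)
assert off.min() >= 2 * np.pi / N2 - 1e-9
```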

Observe that for each \(0\le \lambda \le L\), with \(L =\min \left\{ \lfloor \frac{n}{4\cdot 2r} - \frac{3}{2} \rfloor , \lfloor \frac{N' }{2\cdot 2r\cdot 2\pi } - \frac{1}{2}\rfloor \right\}\), there exists at least one \(d_j\) such that \(d_j\ge \left( 2\lambda + 1\right) 2\pi \frac{1}{N'}\). So, the sum of all \(d_j\) is at least

$$\begin{aligned} {\begin{matrix} \sum _{\lambda =0}^{L} \left( 2\lambda + 1\right) 2\pi \frac{1}{N'}&\ge 2\pi \frac{L^2}{N'}, \end{matrix}} \end{aligned}$$

and taking \(r \le \lfloor n^{1/4} \rfloor\) it follows that

$$\begin{aligned}2\pi \frac{L^2}{N'} \ge 2\pi \cdot \frac{1}{n^{1+1/10}}\left( \frac{n^{3/4} -\text {o}\left( 1\right) }{16\pi }\right) ^2 \ge \frac{1}{128\pi }\left( n^{1/4-1/20} - \text {o}\left( 1\right) \right) ^2. \end{aligned}$$

Now, let v be the vector in \(\mathbb {R}^n\) whose entries are \(v_j=\cos \left( j 2\pi x_\alpha - \theta \right)\) for \(j=0,\ldots ,n-1\). If r is a positive integer with \(r\le \lfloor n^{1/4} \rfloor\), the previous discussion shows that the vector \(rv=(rv_j)_{0\le j\le n-1}\) satisfies

$$\begin{aligned} \mathrm {dist}(rv,\mathbb {Z}^n)\ge \frac{1}{128\pi }\left( n^{1/4-1/20} - \text {o}\left( 1\right) \right) ^2. \end{aligned}$$

Thus, if \(r\le \lfloor n^{1/4} \rfloor\) and taking a fixed \(L\ge \sqrt{2/q}\), by the definition of lcd we would deduce that

$$\begin{aligned} \frac{1}{128\pi }\left( n^{1/4-1/20} - \text {o}\left( 1\right) \right) ^2 & \le \text {dist}\left( V_\alpha ^T \Theta , \mathbb {Z}^n\right) \le L\sqrt{ \log _+ \frac{\Vert V_\alpha ^T \Theta \Vert _2}{L} } \\ &{} \le L\sqrt{\log _+\frac{rn^{1/2}}{L}} \le L\sqrt{\log _+\frac{n^{3/4}}{L}}, \end{aligned}$$

which implies that the lcd of \(V_\alpha\) should satisfy \(D\left( V_\alpha \right) \ge n^{1/4}\). By (9), we obtain

$$\begin{aligned} &{} \sum \limits_{\alpha \in J_3(n,N)} \mathbb {P}\left( \left| G_n\left( e^{i2\pi x_\alpha } \right) \right| \le 2tC_4 n^{-1/2}\left( \log n\right) ^{-\gamma }\right) \\ &{} \quad \le 4 n\left( \log n\right) ^{1/2+\gamma } \left( \frac{1}{1-\text {o}\left( 1\right) } - \frac{1}{n^{1/10}}\right) \left( \frac{2C^2L^2\left( 2tC_4\right) ^2}{\left( c_5n^{2-1/5}\right) ^{1/2}\left( n^{1/2}\left( \log n\right) ^{\gamma }\right) ^2}\right) \\ &{} \quad \quad +\;4 n\left( \log n\right) ^{1/2+\gamma } \left( \frac{1}{1-\text {o}\left( 1\right) } - \frac{1}{n^{1/10}}\right) \left( \frac{2C^2L^2}{\left( c_5 n^{2-1/5}\right) ^{1/2} \left( n^{1/4} \right) ^2} \right) \\ &{} \quad =\; 4\left( \frac{1}{1-\text {o}\left( 1\right) } - \frac{1}{n^{1/10}}\right) \left( \frac{2C^2 L^2\left( 2t C_4\right) ^2}{c_5^{1/2} n^{1-1/10} \left( \log n\right) ^{\gamma -1/2}} + \frac{2 C^2 L^2 \left( \log n\right) ^{1/2+\gamma }}{c_5^{1/2} n^{1/2-1/10}}\right) \\ &{} \quad \le C_8\left( \frac{1}{1-\text {o}\left( 1\right) }\right) \frac{\left( \log n\right) ^{{1}/{2}+\gamma }}{n^{{1}/{2}-{1}/{10}}}, \end{aligned}$$

where \(C_8=8c_5^{-1/2}C^2 L^2\left( \left( 2t C_4\right) ^2 + 1 \right)\).

Combining Case 1, Case 2 and Case 3 we obtain

$$\begin{aligned} \sum _{\alpha \in \Lambda } \mathbb {P}\left( \mathcal {G}_n, \text {B}_\alpha \right) = \text {O}\left( (\log n)^{-\gamma +1/2}\right) ,\quad \text {where } \gamma >{1}/{2}. \end{aligned}$$
(17)

Hence, inequality (12), together with (13), (14) and (17), yields

$$\begin{aligned} \mathbb {P}\left( \mathcal {M}_n \right) = \text {O}\left( (\log n)^{-\gamma +1/2}\right) ,\quad \text {where } \gamma >{1}/{2}. \end{aligned}$$

The preceding estimate, inequality (10) and relation (11) imply Theorem 2.3.

5 Proof of Theorem 2.6. On the lower bound for the smallest singular value for random circulant matrices

Let \(\rho \in (0,{1}/{4})\) be fixed. We define \(x_k={k}/{n}\), \(k=0,\ldots ,n-1\). Note that

$$\begin{aligned} \mathbb {P}\left( s_n(\mathcal {C}_n)\le n^{-\rho }\right)&\le \sum _{k=0}^{n-1} \mathbb {P}\left( \left| G_n\left( e^{i 2\pi x_k}\right) \right| \le n^{-\rho }\right) \\&\le \mathbb {P}\left( \left| G_n\left( 1 \right) \right| \le n^{-\rho }\right) + \mathbb {P}\left( \left| G_n\left( -1 \right) \right| \le n^{-\rho }\right) \\&\quad + \sum _{\begin{array}{c} k=0 \\ {k}\;:\; \gcd \left( k, n\right) \;>\; n^{1/2} \end{array}}^{n-1} \mathbb {P}\left( \left| G_n\left( e^{i 2\pi x_k}\right) \right| \le n^{-\rho }\right) \\&\quad + \sum _{\begin{array}{c} k=0 \\ {k}\;:\; \gcd \left( k, n\right) \;\le \; n^{1/2} \end{array}}^{n-1} \mathbb {P}\left( \left| G_n\left( e^{i 2\pi x_k}\right) \right| \le n^{-\rho }\right) . \end{aligned}$$

In the sequel, we prove that the right-hand side of the preceding inequality is \(\text {O}\left( n^{-2\rho }\right)\). We consider the following three cases.
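The union bound above rests on the fact that a circulant matrix is normal and is diagonalized by the discrete Fourier basis, so its singular values are exactly the moduli \(\left| G_n\left( e^{i2\pi x_k}\right) \right|\). A minimal numerical sketch of this diagonalization (illustration only, assuming \(G_n(z)=\sum _{j=0}^{n-1}\xi _j z^j\) and that \(\mathcal {C}_n\) has first row \(\left( \xi _0,\ldots ,\xi _{n-1}\right)\), with Rademacher entries):

```python
import numpy as np

def circulant(c):
    """Circulant matrix with first row c: C[i, j] = c[(j - i) mod n]."""
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(1)
n = 16
xi = rng.choice([-1.0, 1.0], size=n)        # Rademacher coefficients

C = circulant(xi)
# G_n(z) = sum_j xi_j z^j evaluated at the n-th roots of unity e^{i 2 pi k / n}
roots = np.exp(2j * np.pi * np.arange(n) / n)
G = np.array([np.polyval(xi[::-1], w) for w in roots])

# C is normal, so its singular values are the moduli |G_n(e^{i 2 pi k / n})|
sv = np.linalg.svd(C, compute_uv=False)
assert np.allclose(np.sort(sv), np.sort(np.abs(G)))
```

In particular, \(s_n(\mathcal {C}_n)=\min _k \left| G_n\left( e^{i2\pi x_k}\right) \right|\), which is what reduces the singular-value bound to the pointwise estimates below.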

Case 1. The same reasoning as in Section 4.1.1 yields

$$\begin{aligned} \mathbb {P}\left( \left| G_n\left( 1 \right) \right| \le n^{-\rho }\right) + \mathbb {P}\left( \left| G_n\left( -1 \right) \right| \le n^{-\rho }\right) = \text {O}\left( n^{{-1/2}}\right) . \end{aligned}$$

Case 2. \(\gcd \left( k,n\right) > n^{1/2}\). By reasoning similar to that of the first case in the proof of Theorem 2.3 (Section 4.1.3), we deduce

$$\begin{aligned} \sum _{\begin{array}{c} k=0 \\ {k}\;:\; \gcd \left( k, n\right) \;>\; n^{1/2} \end{array}}^{n-1} \mathbb {P}\left( \left| G_n\left( e^{i 2\pi x_k}\right) \right| \le n^{-\rho }\right)&\le n^{1/2+\text {o}(1)} \left( \frac{2C^2L^2}{c_{5}^{1/2} n^{1+2\rho }} + \frac{2C^2L^2}{ \frac{1}{2}c_{5}^{1/2}n}\right) \\&\le \frac{2C^2L^2}{c_{5}^{1/2}n^{1/2+2\rho -\text {o}(1)}} + \frac{4C^2L^2}{c_5^{1/2} n^{1/2-\text {o}(1)}}\le \frac{C_9}{n^{1/2-\text {o}(1)}}, \end{aligned}$$

where \(C_9 = 4c_5^{-1/2}C^2L^2\).

Case 3. \(\gcd \left( k,n\right) \le n^{1/2}\).

By reasoning similar to that of the second case in the proof of Theorem 2.3 (Section 4.1.4), we obtain

$$\begin{aligned} \sum _{\begin{array}{c} k=0 \\ {k}\;:\; \gcd \left( k, n\right) \;\le \; n^{1/2} \end{array}}^{n-1} \mathbb {P}\left( \left| G_n\left( e^{i 2\pi x_k}\right) \right| \le n^{-\rho }\right)&\le n \left( \frac{2C^2L^2}{c_{5}^{1/2} n^{1+2\rho }} + \frac{2C^2L^2}{ c_{5}^{1/2} n \left( \frac{1}{2\cdot 6 \cdot 2\pi }n^{1/2}\right) ^2}\right) \\&\le \frac{2C^2L^2}{c_{5}^{1/2}n^{2\rho }} + \frac{1152\pi ^2 C^2L^2}{c_5^{1/2} n}\le \frac{C_{10}}{n^{2\rho }}, \end{aligned}$$

where \(C_{10}=c_{5}^{-1/2}C^2L^2\left( 2+1152\pi ^2\right)\).

The combination of all the preceding cases yields \(\mathbb {P}\left( s_n(\mathcal {C}_n)\le n^{-\rho }\right) = \text {O}\left( {n^{-2\rho }}\right)\) for any \(\rho \in (0,{1}/{4})\); indeed, since \(2\rho <{1}/{2}\), the bounds from Cases 1 and 2 are also \(\text {O}\left( n^{-2\rho }\right)\). This completes the proof of Theorem 2.6.