Sbornik: Mathematics, 2024, Volume 215, Issue 3, Pages 383–400
DOI: https://doi.org/10.4213/sm9976e
(Mi sm9976)
 

Recovery of analytic functions that is exact on subspaces of entire functions

K. Yu. Osipenko$^{a,b}$

$^a$ Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia
$^b$ Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow, Russia
Abstract: A family of optimal recovery methods is developed for the recovery of analytic functions in a strip and their derivatives from inaccurately specified traces of the Fourier transforms of these functions on the real axis. In addition, the methods must be exact on some subspaces of entire functions.
Bibliography: 12 titles.
Keywords: Hardy classes, optimal recovery, Fourier transform, entire functions.
Received: 01.07.2023 and 02.12.2023
Russian version:
Matematicheskii Sbornik, 2024, Volume 215, Number 3, Pages 100–118
DOI: https://doi.org/10.4213/sm9976
Document Type: Article
MSC: Primary 41A46; Secondary 42B30, 46E35
Language: English
Original paper language: Russian

§ 1. Introduction

One popular idea in the development of numerical methods is to look for methods that are exact on some subspace of functions. This is based on the natural observation that if the original function can be approximated sufficiently accurately by elements of this subspace, then the error of the corresponding method (which is usually a linear functional or an operator of the function) is admissible. A typical example here is quadrature formulae, which are constructed to be exact on the algebraic polynomials of some fixed degree: the most spectacular example is Gauss’s quadrature formulae (for instance, see [1]).

Another approach to the development of numerical methods, or — in a broader sense — to approximations as such, is connected with Kolmogorov’s ideas. In this case one fixes some a priori information — a set (class) of functions — for which one develops an optimal (best) method based on the condition that this method must produce the minimum error in this class of functions. A typical example here also is quadrature formulae; in this setting such formulae were constructed for the first time by Nikol’skii (see [2]).

In [3] we proposed to combine these two approaches: the one going back to Gauss and based on developing methods exact on subspaces and the other going back to Kolmogorov and based on finding methods optimal on the class under consideration. In other words, we proposed to look for methods optimal on a class which are at the same time exact on a fixed subspace. In the framework of this approach, in [4] and [5] we solved several recovery problems for solutions of equations of mathematical physics.

In this paper we consider problems of developing optimal methods for the recovery of analytic functions in a strip and their derivatives from inaccurately prescribed traces of the Fourier transforms of these functions on the real axis. The optimal methods are additionally required to be exact on subspaces of entire functions.

§ 2. Statement of the problem

Let $X$ be a linear space and $Y$ and $Z$ be two normed linear spaces, and let $ A\colon X\to Z$ and $I\colon X\to Y$ be linear operators. We consider the problem of the optimal recovery of the values of $A$ on a set $W\subset X$ from the inaccurately prescribed values of $I$ at elements of this set. We assume that for each $x\in W$ we know a value $y\in Y$ such that $\|Ix-y\|_Y\leqslant\delta$, where $\delta$ is some positive number characterizing the error of the a priori information about elements of $W$. The problem consists in recovering the value of $Ax$ from $y$. A recovery method is a map $m\colon Y\to Z$ that assigns to $y\in Y$ an element $m(y)\in Z$, which is set to be the approximate value of $Ax$.

The error of the method $m$ is the quantity

$$ \begin{equation*} e(A,W,I,\delta,m)=\sup_{\substack{x\in W,\ y\in Y\\ \|Ix-y\|_Y\leqslant\delta}}\|Ax-m(y)\|_Z. \end{equation*} \notag $$
The optimal recovery error is the quantity
$$ \begin{equation*} E(A,W,I,\delta)=\inf_{m\colon Y\to Z}e(A,W,I,\delta,m), \end{equation*} \notag $$
while methods delivering the infimum are called optimal on the set $W$. The above problem relates to optimal recovery theory. For more information about this theory and the problems considered in its framework the reader can consult the survey paper [6] and the books [7]–[10].

Let $L\subset X$ be a linear subspace of $X$. We say that a method $m\colon Y\to Z$ is exact on $L$ if $Ax=m(Ix)$ for all $x\in L$. Consider the set $\mathcal E_L$ of linear operators $m\colon Y\to Z$ that are exact on $L$. Set

$$ \begin{equation*} E_L(A,W,I,\delta)=\inf_{m\in\mathcal E_L}e(A,W,I,\delta,m). \end{equation*} \notag $$
We call methods delivering the infimum in this equality optimal on $W$ among the exact methods on $L$.

By the sum of two sets $A$ and $B$ in a linear space we mean the set

$$ \begin{equation*} A+B=\{a+b\colon a\in A,\ b\in B\}. \end{equation*} \notag $$

Proposition (see [4]). Let $L\subset X$ be a linear subspace of $X$, and let $m^*\colon Y\to Z$ be a linear operator providing an optimal method for the recovery of $A$ on the set $W+L$. Then

$$ \begin{equation*} E_L(A,W,I,\delta)=E(A,W+L,I,\delta). \end{equation*} \notag $$
If $E(A,W+L,I,\delta)<\infty$, then $m^*$ is an optimal recovery method on $W$ among the methods exact on $L$.

Thus, to find a linear method optimal on $W$ among the ones exact on $L$ it is sufficient to find linear methods among the optimal methods on $W+L$.

In this paper we consider the problem of the optimal recovery of analytic functions in a strip

$$ \begin{equation*} S_\beta=\{z\in\mathbb C\colon |{\operatorname{Im} z}|<\beta\} \end{equation*} \notag $$
and their derivatives under the assumptions that the recovery methods are exact on the space $\mathcal B_{\sigma,2}(\mathbb R)$ of entire functions, the subspace of $L_2(\mathbb R)$ formed by the restrictions to $\mathbb R$ of entire functions of exponential type $\sigma$.

We turn to the precise statement. By the Hardy space $\mathcal H_2^\beta$ we mean the set of analytic functions $f$ in the strip $S_\beta$ such that

$$ \begin{equation*} \|f\|_{\mathcal H_2^\beta}=\biggl(\sup_{0\leqslant\eta<\beta} \frac12\int_{\mathbb R}(|f(t+i\eta)|^2+|f(t-i\eta)|^2)\,dt\biggr)^{1/2}<\infty. \end{equation*} \notag $$
We let $\mathcal H_2^{r,\beta}$ (the Hardy–Sobolev space) denote the set of analytic functions in $S_\beta$ such that $f^{(r)}\in\mathcal H_2^\beta$.

Let $H_2^{r,\beta}$ denote the set of functions $f\in\mathcal H_2^{r,\beta}\cap L_2(\mathbb R)$ satisfying $\|f^{(r)}\|_{\mathcal H_2^\beta}\leqslant1$. If ${\sigma>0}$, then $\mathcal B_{\sigma,2}(\mathbb R)$ denotes the subspace of $L_2(\mathbb R)$ formed by the restrictions to $\mathbb R$ of entire functions of exponential type $\sigma$. It is well known that $f\in\mathcal B_{\sigma,2}(\mathbb R)$ if and only if the support of the Fourier transform $Ff$ lies on the interval $\Delta_\sigma=[-\sigma,\sigma]$. By definition $\mathcal B_{0,2}(\mathbb R)=\{0\}$.

Consider the problem of the optimal recovery of the $k$th derivative of $f\in H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R)$, $k\leqslant r$, from the trace on $\Delta_{\sigma_1}$, $\sigma_1>0$, of its Fourier transform defined with some error in the metric $L_2(\Delta_{\sigma_1})$, that is, we assume that in place of the trace of $Ff$ on $\Delta_{\sigma_1}$ we know a function $y\in L_2(\Delta_{\sigma_1})$ such that

$$ \begin{equation*} \|Ff-y\|_{L_2(\Delta_{\sigma_1})}\leqslant\delta. \end{equation*} \notag $$
From $y$ we must recover the function $f^{(k)}$ on $\mathbb R$ in the best possible way, that is, the problem consists in finding
$$ \begin{equation*} E(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)=\inf_{m\colon L_2(\Delta_{\sigma_1})\to L_2(\mathbb R)}e(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta,m), \end{equation*} \notag $$
where $D^kf=f^{(k)}$, $I_{\sigma_1}f=Ff|_{\Delta_{\sigma_1}}$ and
$$ \begin{equation*} e(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta,m) =\sup_{\substack{f\in H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),\ y\in L_2(\Delta_{\sigma_1})\\ \|Ff-y\|_{L_2(\Delta_{\sigma_1})} \leqslant\delta}}\|f^{(k)}-m(y)\|_{L_2(\mathbb R)}. \end{equation*} \notag $$
In other words, we are going to find optimal methods for the recovery of the $k$th derivative on the class $H_2^{r,\beta}$ among the methods exact on the subspace of entire functions $\mathcal B_{\sigma,2}(\mathbb R)$. Without the assumptions that the method is exact on $\mathcal B_{\sigma,2}(\mathbb R)$ this problem was considered in [11].

§ 3. Main results

Consider the function $y=s(x)$, $x\geqslant0$, defined parametrically by

$$ \begin{equation*} \begin{cases} x=t^{2r}\cosh 2\beta t,& \\ y=t^{2k}, \end{cases} \qquad t\geqslant0, \end{equation*} \notag $$
$k,r\in\mathbb N$, $r\geqslant k$, $\beta>0$. For $t>0$ its derivative is positive:
$$ \begin{equation*} \frac{dy}{dx}=\frac{kt^{2(k-r)}}{r\cosh2\beta t+t\beta\sinh 2\beta t}>0, \end{equation*} \notag $$
and it is monotonically decreasing, so that $s$ is an increasing concave function.
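These monotonicity claims can be checked numerically. The sketch below uses illustrative parameters $r=2$, $k=1$, $\beta=1$ (assumptions for the example, not values from the text) and evaluates $dy/dx$ on a grid, confirming that the slope is positive and decreasing, so that $s$ is increasing and concave.

```python
import math

# Illustrative parameters, not taken from the paper.
r, k, beta = 2, 1, 1.0

def dydx(t):
    # dy/dx = k t^{2(k-r)} / (r cosh(2 beta t) + beta t sinh(2 beta t))
    return k * t**(2*(k - r)) / (r*math.cosh(2*beta*t) + beta*t*math.sinh(2*beta*t))

ts = [0.1*i for i in range(1, 51)]
slopes = [dydx(t) for t in ts]
assert all(s > 0 for s in slopes)                      # s is increasing
assert all(a > b for a, b in zip(slopes, slopes[1:]))  # slope decreases: s is concave
```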

The straight line connecting a point $(x(t),y(t))$ with the origin has the form $y= \lambda_2x$, where

$$ \begin{equation*} \lambda_2=\frac{y(t)}{x(t)}=\frac1{t^{2(r-k)}\cosh2\beta t}. \end{equation*} \notag $$
Since $s$ is concave, there exists $t_0$ such that the tangent to $s$ at $(x(t_0),y(t_0))$ is parallel to $y=\lambda_2x$. Thus we can find $t_0$ from the equation
$$ \begin{equation*} \frac{y'(t_0)}{x'(t_0)}=\lambda_2. \end{equation*} \notag $$
This equation can be written as
$$ \begin{equation} \frac{kt_0^{2(k-r)}}{r\cosh2\beta t_0+t_0\beta\sinh2\beta t_0}=\frac1{t^{2(r-k)}\cosh2\beta t}. \end{equation} \tag{3.1} $$
The tangent line through $(x(t_0),y(t_0))$ has the form $y=\lambda_1+\lambda_2x$, where
$$ \begin{equation} \lambda_1=t_0^{2k}\biggl(1-\frac k{r+t_0\beta\tanh2\beta t_0}\biggr). \end{equation} \tag{3.2} $$
Let $h(t)$ denote the point at which $y(h(t))=\lambda_1$ (Figure 1). Thus,
$$ \begin{equation*} h(t)=t_0\biggl(1-\frac k{r+t_0\beta\tanh2\beta t_0}\biggr)^{1/(2k)}. \end{equation*} \notag $$

As the function on the right-hand side of (3.2) is monotonically increasing in ${t_0\in[0,+\infty)}$ from zero to $+\infty$, for each $\lambda_1>0$ there exists $t_0>0$ such that the tangent to $s$ at $(x(t_0),y(t_0))$ passes through the point $(0,\lambda_1)$. We denote this point $t_0$ by $h_1(\lambda_1)$.
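Since the tangent slope $y'(t_0)/x'(t_0)$ decreases strictly from $+\infty$ on $(0,\infty)$, equation (3.1) can be solved for $t_0$ by bisection; the quantities $\lambda_1$ and $h$ then follow from (3.2). A minimal sketch under illustrative parameters (all numerical values below are assumptions, not from the text):

```python
import math

r, k, beta = 2, 1, 1.0     # illustrative parameters
sigma1 = 2.0               # plays the role of t in (3.1)

def tangent_slope(t0):     # left-hand side of (3.1), i.e. dy/dx at t0
    return k*t0**(2*(k - r)) / (r*math.cosh(2*beta*t0) + beta*t0*math.sinh(2*beta*t0))

target = 1.0 / (sigma1**(2*(r - k)) * math.cosh(2*beta*sigma1))   # lambda_2

# tangent_slope decreases from +infinity to a value below target,
# so bisection on (0, sigma1] isolates the unique root t0 of (3.1).
lo, hi = 1e-9, sigma1
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if tangent_slope(mid) > target else (lo, mid)
t0 = 0.5*(lo + hi)

lam1 = t0**(2*k) * (1 - k/(r + beta*t0*math.tanh(2*beta*t0)))     # (3.2)
h = t0 * (1 - k/(r + beta*t0*math.tanh(2*beta*t0)))**(1/(2*k))    # h(sigma1)
assert 0 < h < t0 < sigma1
assert abs(h**(2*k) - lam1) < 1e-12                               # y(h) = lambda_1
```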

The function $t^r\sqrt{\cosh2\beta t}$ is monotonically increasing from $0$ to $+\infty$ for $t\in\mathbb R_+$. Hence for each $x\in\mathbb R_+$ the equation

$$ \begin{equation*} t^r\sqrt{\cosh 2\beta t}=x \end{equation*} \notag $$
has a unique solution on the interval $[0,+\infty)$. We denote it by $\mu_{r\beta}(x)$.
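Because $t^r\sqrt{\cosh2\beta t}$ is strictly increasing, $\mu_{r\beta}$ can be evaluated by bisection; a minimal sketch (the values passed in are illustrative assumptions):

```python
import math

def mu(r, beta, x):
    """Bisection sketch for the unique t >= 0 with t**r * sqrt(cosh(2*beta*t)) == x."""
    lo, hi = 0.0, 1.0
    while hi**r * math.sqrt(math.cosh(2*beta*hi)) < x:
        hi *= 2.0          # grow the bracket until it contains the root
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if mid**r * math.sqrt(math.cosh(2*beta*mid)) < x:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

t = mu(2, 1.0, 5.0)        # illustrative arguments
assert abs(t**2 * math.sqrt(math.cosh(2.0*t)) - 5.0) < 1e-9
```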

Let $\widehat\sigma_1$ denote the value of the parameter $t$ such that $t_0=\widehat t_0=\mu_{r\beta}(\sqrt{2\pi}/\delta)$, that is, $x(\widehat t_0)=2\pi/\delta^2$. Set $\widehat\sigma=h(\widehat\sigma_1)$. The tangent line through $(x(\widehat t_0),y(\widehat t_0))$ has an equation $y=\widehat\lambda_1+\widehat\lambda_2x$, where

$$ \begin{equation*} \widehat\lambda_1 =\widehat t_0^{\,2k}\biggl(1-\frac k{r+\widehat t_0\beta\tanh2\beta\widehat t_0}\biggr)\quad\text{and} \quad \widehat\lambda_2 =\frac{k\widehat t_0^{\,2(k-r)}}{r\cosh2\beta\widehat t_0+\widehat t_0\beta\sinh2\beta\widehat t_0}. \end{equation*} \notag $$
Thus,
$$ \begin{equation*} \widehat\sigma=\widehat\lambda_1^{1/(2k)}\quad\text{and} \quad \widehat\sigma_1=\mu_{r-k,\beta}\biggl(\frac{1}{\sqrt{\widehat\lambda_2}}\biggr) \end{equation*} \notag $$
(Figure 2).
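These relations can be verified numerically: $\widehat\sigma^{\,2k}=\widehat\lambda_1$ by the definition of $h$, and $\widehat\sigma_1$ satisfies (3.1) with the tangency point fixed at $\widehat t_0$. A sketch with an illustrative $\delta$ (all parameter values below are assumptions, not from the text):

```python
import math

r, k, beta, delta = 2, 1, 1.0, 0.5    # illustrative parameters

def mu(p, x):                          # bisection for t**p * sqrt(cosh(2*beta*t)) == x
    lo, hi = 0.0, 1.0
    while hi**p * math.sqrt(math.cosh(2*beta*hi)) < x:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if mid**p * math.sqrt(math.cosh(2*beta*mid)) < x else (lo, mid)
    return 0.5*(lo + hi)

t0 = mu(r, math.sqrt(2*math.pi)/delta)                 # \hat t_0
assert abs(t0**(2*r)*math.cosh(2*beta*t0) - 2*math.pi/delta**2) < 1e-6  # x(t0) = 2 pi/delta^2

lam2 = k*t0**(2*(k - r)) / (r*math.cosh(2*beta*t0) + beta*t0*math.sinh(2*beta*t0))
lam1 = t0**(2*k) * (1 - k/(r + beta*t0*math.tanh(2*beta*t0)))
sigma_hat  = lam1**(1/(2*k))                           # \hat sigma, since sigma_hat^{2k} = lam1
sigma1_hat = mu(r - k, 1/math.sqrt(lam2))              # \hat sigma_1
assert 0 < sigma_hat < t0 < sigma1_hat
# sigma1_hat satisfies (3.1) with the tangency point fixed at \hat t_0:
assert abs(1/(sigma1_hat**(2*(r - k))*math.cosh(2*beta*sigma1_hat)) - lam2) < 1e-9
```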

Consider the following four domains in the plane $\mathbb R^2$ (Figure 3):

$$ \begin{equation*} \begin{aligned} \, \Sigma_1&=\bigl\{(\sigma_1,\sigma)\in\mathbb R^2\colon 0<h(\sigma_1)\leqslant\sigma\leqslant\sigma_1\bigr\}, \\ \Sigma_2&=\bigl\{(\sigma_1,\sigma)\in\mathbb R^2\colon 0\leqslant\sigma\leqslant h(\sigma_1),\ 0<\sigma_1\leqslant\widehat\sigma_1\bigr\}, \\ \Sigma_3&=\bigl\{(\sigma_1,\sigma)\in\mathbb R^2\colon \sigma_1\geqslant\widehat\sigma_1,\ 0\leqslant\sigma\leqslant\widehat\sigma\bigr\}, \\ \Sigma_4&=\bigl\{(\sigma_1,\sigma)\in\mathbb R^2\colon \widehat\sigma\leqslant\sigma\leqslant h(\sigma_1)\bigr\}. \end{aligned} \end{equation*} \notag $$

Set

$$ \begin{equation} (\lambda_1,\lambda_2)= \begin{cases} \biggl(\sigma^{2k},\dfrac1{\sigma_1^{2(r-k)}\cosh2\beta\sigma_1}\biggr), &(\sigma_1,\sigma)\in \Sigma_1, \\ \biggl(h^{2k}(\sigma_1),\dfrac1{\sigma_1^{2(r-k)}\cosh2\beta\sigma_1}\biggr), &(\sigma_1,\sigma) \in\Sigma_2, \\ \biggl(\widehat\sigma^{2k},\dfrac1{\widehat\sigma_1^{2(r-k)} \cosh2\beta\widehat\sigma_1}\biggr), &(\sigma_1,\sigma)\in\Sigma_3, \\ \biggl(\sigma^{2k},\dfrac{h_1^{2k}(\sigma^{2k})-\sigma^{2k}}{h_1^{2r}(\sigma^{2k}) \cosh(2\beta h_1(\sigma^{2k}))}\biggr), &(\sigma_1,\sigma)\in\Sigma_4. \end{cases} \end{equation} \tag{3.3} $$
We let $\Theta(\sigma,\sigma_1)$ denote the set of measurable functions $\theta$ on $[-\sigma_1,-\sigma)\cup(\sigma,\sigma_1]$ such that $|\theta(t)|\leqslant1$ for almost all $\sigma<|t|\leqslant\sigma_1$.

Theorem. Let $k$ and $r$ be integers satisfying $0\leqslant k\leqslant r$.

(1) If $\sigma>\sigma_1$, then $E(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) =\infty$.

(2) If $k\geqslant1$, then

$$ \begin{equation} E(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)=\sqrt{\lambda_1\frac{\delta^2}{2\pi}+\lambda_2} \end{equation} \tag{3.4} $$
for all $\sigma_1>0$ and $\sigma\geqslant0$ such that $\sigma\leqslant\sigma_1$, and for each function $\theta\in\Theta(\sigma,\sigma_1)$ the method
$$ \begin{equation*} \widehat m_\theta(y)(x)=\frac1{2\pi}\int_{-\sigma}^\sigma(it)^ky(t)e^{itx}\,dt +\frac1{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}(it)^ka_\theta(t)y(t)e^{itx}\,dt, \end{equation*} \notag $$
where
$$ \begin{equation} a_\theta(t)=\frac{\lambda_1+\theta(t)|t|^{r-k}\sqrt{\lambda_1\lambda_2\cosh2\beta t} \, \sqrt{-t^{2k}+\lambda_1+ \lambda_2t^{2r}\cosh2\beta t}}{\lambda_1+\lambda_2t^{2r}\cosh2\beta t}, \end{equation} \tag{3.5} $$
is an optimal method.

(3) If $k=0$, then

$$ \begin{equation*} E(D^0,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)=\sqrt{\dfrac{\delta^2}{2\pi}+ \frac1{\sigma_1^{2r}\cosh2\beta\sigma_1}} \end{equation*} \notag $$
for all $\sigma_1>0$ and $\sigma\geqslant0$ such that $\sigma\leqslant\sigma_1$, and for each $\theta\in\Theta(\sigma,\sigma_1)$ the method
$$ \begin{equation*} \widehat m_\theta(y)(x)=\frac1{2\pi}\int_{-\sigma}^\sigma y(t)e^{itx}\,dt +\frac1{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}a_\theta(t)y(t)e^{itx}\,dt, \end{equation*} \notag $$
where
$$ \begin{equation} a_\theta(t)=\frac{\sigma_1^{2r}\cosh2\beta\sigma_1+\theta(t)t^{2r}\cosh2\beta t} {\sigma_1^{2r}\cosh2\beta\sigma_1+t^{2r}\cosh2\beta t}, \end{equation} \tag{3.6} $$
is an optimal method.
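As a sanity check on (3.5), the expression under the second square root is nonnegative for $\sigma<|t|\leqslant\sigma_1$ precisely because the line $y=\lambda_1+\lambda_2x$ lies above the concave curve $s$ there. The sketch below tests this at one illustrative point of $\Sigma_1$ (with the parameters chosen below, $h(\sigma_1)\approx1.35\leqslant\sigma$; all values are assumptions, not from the text):

```python
import math

r, k, beta = 2, 1, 1.0          # illustrative parameters
sigma1, sigma = 2.0, 1.5        # an illustrative Sigma_1 point: h(sigma1) <= sigma <= sigma1

lam1 = sigma**(2*k)                                        # (3.3), Sigma_1 case
lam2 = 1.0 / (sigma1**(2*(r - k)) * math.cosh(2*beta*sigma1))

for i in range(1001):
    t = sigma + (sigma1 - sigma)*i/1000.0
    radicand = -t**(2*k) + lam1 + lam2*t**(2*r)*math.cosh(2*beta*t)
    assert radicand >= 0.0                                 # square root in (3.5) is real
    a = lam1 / (lam1 + lam2*t**(2*r)*math.cosh(2*beta*t))  # (3.5) with theta = 0
    assert 0.0 < a <= 1.0
```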

Proof. By the main theorem on the representation of analytic functions in tube domains (see [12]) we have $f\in\mathcal H_2^\beta$ if and only if it has the form
$$ \begin{equation} f(z)=\frac1{2\pi}\int_{\mathbb R}g(t)e^{izt}\,dt, \end{equation} \tag{3.7} $$
where $g$ is a function such that
$$ \begin{equation*} \sup_{|y|<\beta}\int_{\mathbb R}|g(t)|^2e^{-2yt}\,dt<\infty \end{equation*} \notag $$
($g$ is the Fourier transform of $f(x)$, $x\in\mathbb R$). By Plancherel’s theorem
$$ \begin{equation} \|f\|^2_{\mathcal H_2^\beta}=\frac1{2\pi}\sup_{0\leqslant y<\beta}\int_{\mathbb R}|Ff(t)|^2\cosh2yt\,dt=\frac1{2\pi}\int_{\mathbb R}|Ff(t)|^2\cosh2\beta t\,dt. \end{equation} \tag{3.8} $$

We show that $f\in\mathcal H_2^{r,\beta}\cap L_2(\mathbb R)$ is in the class $H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R)$ if and only if

$$ \begin{equation} \frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt\leqslant1. \end{equation} \tag{3.9} $$
In fact, if $f\in H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R)$, then $f=f_1+f_2$, where $f_1\in H_2^{r,\beta}$ and $f_2\in\mathcal B_{\sigma,2}(\mathbb R)$. Now bearing in mind that $Ff_2$ has support on $\Delta_{\sigma}$, we have
$$ \begin{equation*} \frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt=\frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff_1(t)|^2\cosh2\beta t\,dt\leqslant1. \end{equation*} \notag $$

Conversely, let $f\in\mathcal H_2^{r,\beta}\cap L_2(\mathbb R)$ be a function such that (3.9) holds. Let ${f_2\in L_2(\mathbb R)}$ denote the function satisfying $Ff_2=\chi_{\sigma}Ff$, where $\chi_{\sigma}$ is the characteristic function of the interval $\Delta_\sigma$. Then it is clear that $f_2\in\mathcal B_{\sigma,2}(\mathbb R)$. Set ${f_1=f-f_2}$. Then it is obvious that $f_1\in\mathcal H_2^{r,\beta}\cap L_2(\mathbb R)$, and by (3.8) (since $Ff_1=0$ on $\Delta_{\sigma}$) we have

$$ \begin{equation*} \|f_1^{(r)}\|^2_{\mathcal H_2^\beta} =\frac1{2\pi}\int_{|t|>\sigma}\!t^{2r}|Ff_1(t)|^2\cosh2\beta t\,dt =\frac1{2\pi}\int_{|t|>\sigma}\!t^{2r}|Ff(t)|^2\cosh2\beta t\,dt\leqslant1, \end{equation*} \notag $$
that is, $f=f_1+f_2\in H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R)$.

Let $f\in\mathcal H_2^{r,\beta}\cap L_2(\mathbb R)$ be a function such that $\|Ff\|_{L_2(\Delta_{\sigma_1})}\leqslant\delta$ and inequality (3.9) holds. Then for each method $m\colon L_2(\Delta_{\sigma_1})\to L_2(\mathbb R)$ we have

$$ \begin{equation*} \begin{aligned} \, 2\|f^{(k)}\|_{L_2(\mathbb R)} &=\|f^{(k)}-m(0)-(-f^{(k)}-m(0))\|_{L_2(\mathbb R)} \\ &\leqslant \|f^{(k)}-m(0)\|_{L_2(\mathbb R)}+\|-f^{(k)}-m(0)\|_{L_2(\mathbb R)} \\ &\leqslant2e(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta,m). \end{aligned} \end{equation*} \notag $$
Hence
$$ \begin{equation} \begin{aligned} \, \notag &\sup_{\substack{f\in\mathcal H_2^{r,\beta}\cap L_2(\mathbb R),\ \|Ff\|_{L_2(\Delta_{\sigma_1})}\leqslant\delta \\ \frac1{2\pi} \int_{|t|>\sigma}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt\leqslant1}} \|f^{(k)}\|_{L_2(\mathbb R)} \\ &\qquad\leqslant e(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta,m) \leqslant E(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta). \end{aligned} \end{equation} \tag{3.10} $$

Consider the extremal problem on the left-hand side of (3.10). Passing to squares for convenience we can write it as

$$ \begin{equation} \begin{gathered} \, \frac1{2\pi}\int_{\mathbb R}t^{2k}|Ff(t)|^2\,dt\to\max, \\ \int_{|t|\leqslant\sigma_1}|Ff(t)|^2\,dt\leqslant\delta^2, \qquad \frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt\leqslant1, \\ f\in\mathcal H_2^{r,\beta}\cap L_2(\mathbb R). \end{gathered} \end{equation} \tag{3.11} $$

(1) Assume that $\sigma>\sigma_1$. Let $f_0$ be a function such that

$$ \begin{equation*} Ff_0(t)= \begin{cases} c,&t\in(\sigma_1,\sigma), \\ 0,&t\notin(\sigma_1,\sigma), \end{cases} \end{equation*} \notag $$
where $c>0$. Then $f_0$ is an admissible function in (3.11) and
$$ \begin{equation*} \|f_0^{(k)}\|^2_{L_2(\mathbb R)}=\frac{c^2}{2\pi}\int_{\sigma_1}^\sigma t^{2k}\,dt. \end{equation*} \notag $$
Letting $c$ tend to infinity, from (3.10) we obtain
$$ \begin{equation*} E(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)=\infty. \end{equation*} \notag $$

(2) Let $k\geqslant1$. We show that in each domain $\Sigma_j$, $j=1,2,3,4$, we have the inequality

$$ \begin{equation} E(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)\geqslant \sqrt{\lambda_1\frac{\delta^2}{2\pi}+\lambda_2}. \end{equation} \tag{3.12} $$

Let $(\sigma_1,\sigma)\in\Sigma_1$. For each $n\in\mathbb N$ such that $1/n<\sigma$ consider the function $f_n$ satisfying

$$ \begin{equation} Ff_n(t)=\begin{cases} \delta\sqrt n,&\sigma-\dfrac1n<t<\sigma, \\ \sqrt{2\pi n}\biggl(\sigma_1+\dfrac1n\biggr)^{-r}\cosh^{-1/2}\biggl(2\beta\biggl(\sigma_1+\dfrac1n\biggr) \biggr), &\sigma_1<t<\sigma_1+\dfrac1n, \\ 0& \text{otherwise}. \end{cases} \end{equation} \tag{3.13} $$
Then we have
$$ \begin{equation*} \|Ff_n\|^2_{L_2(\Delta_{\sigma_1})}=\int_{\sigma-1/n}^\sigma\delta^2n\,dt=\delta^2 \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, &\frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff_n(t)|^2\cosh2\beta t\,dt \\ &\qquad =\frac n{(\sigma_1+1/n)^{2r}\cosh(2\beta(\sigma_1+1/n))}\int_{\sigma_1}^{\sigma_1+1/n}t^{2r}\cosh2\beta t\,dt\leqslant1. \end{aligned} \end{equation*} \notag $$
Hence the functions $f_n$ are admissible in problem (3.11). From (3.10) we obtain
$$ \begin{equation*} \begin{aligned} \, &E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) \\ &\qquad \geqslant\frac1{2\pi}\int_{\mathbb R}t^{2k}|Ff_n(t)|^2\,dt \\ &\qquad =\frac1{2\pi}\int_{\sigma-1/n}^\sigma t^{2k}\delta^2n\,dt +\frac n{(\sigma_1+1/n)^{2r}\cosh(2\beta(\sigma_1+1/n))}\int_{\sigma_1}^{\sigma_1+1/n}t^{2k}\,dt \\ &\qquad =\frac{\delta^2n(\sigma^{2k+1}-(\sigma-1/n)^{2k+1})}{2\pi(2k+1)} \\ &\qquad\qquad +\frac n{(\sigma_1+1/n)^{2r}\cosh(2\beta(\sigma_1+1/n))} \,\frac{(\sigma_1+1/n)^{2k+1}-\sigma_1^{2k+1}}{2k+1}. \end{aligned} \end{equation*} \notag $$
Taking the limit as $n\to\infty$ yields
$$ \begin{equation*} E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)\geqslant\frac{\delta^2\sigma^{2k}}{2\pi}+ \frac1{\sigma_1^{2(r-k)}\cosh2\beta\sigma_1}=\lambda_1\frac{\delta^2}{2\pi}+\lambda_2. \end{equation*} \notag $$
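The limit taken above is just the difference quotient of $x\mapsto x^{2k+1}/(2k+1)$, whose derivative is $x^{2k}$, so each term converges at rate $O(1/n)$. A quick numerical check with illustrative values (assumptions, not from the text):

```python
k, sigma = 1, 1.5                     # illustrative values

def quotient(n):
    # n * (sigma^{2k+1} - (sigma - 1/n)^{2k+1}) / (2k + 1)
    return n * (sigma**(2*k + 1) - (sigma - 1.0/n)**(2*k + 1)) / (2*k + 1)

for n in (10**2, 10**4, 10**6):
    # By the mean value theorem the quotient equals xi^{2k} for some
    # xi in (sigma - 1/n, sigma), so the error is at most 2k sigma^{2k-1}/n.
    assert abs(quotient(n) - sigma**(2*k)) <= 2*k*sigma**(2*k - 1)/n
```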

Now let $(\sigma_1,\sigma)\in\Sigma_2$. The straight line connecting $(x(\sigma_1),y(\sigma_1))$ with the origin has the form $y=\lambda_2x$, where

$$ \begin{equation*} \lambda_2=\frac{y(\sigma_1)}{x(\sigma_1)}=\frac1{\sigma_1^{2(r-k)}\cosh2\beta\sigma_1}. \end{equation*} \notag $$
As mentioned above, since $s$ is concave, there exists a point $t_0$ such that the tangent to $s$ at $(x(t_0),y(t_0))$ is parallel to the line $y=\lambda_2x$. The tangent through $(x(t_0),y(t_0))$ itself has the form $y= \lambda_1+ \lambda_2x$, where
$$ \begin{equation*} \lambda_1=t_0^{2k}-\lambda_2t_0^{2r}\cosh2\beta t_0=t_0^{2k}\biggl(1-\frac{t_0^{2(r-k)}\cosh2\beta t_0}{\sigma_1^{2(r-k)}\cosh2\beta\sigma_1}\biggr)=h^{2k}(\sigma_1). \end{equation*} \notag $$

Since $\sigma_1\leqslant\widehat\sigma_1$, it follows that $t_0\leqslant\widehat t_0$. Therefore, $t_0^{2r}\cosh2\beta t_0\leqslant2\pi/\delta^2$. For each $n\in\mathbb N$ such that $h(\sigma_1)<t_0-1/n$ consider the function $f_n$ such that

$$ \begin{equation*} Ff_n(t)= \begin{cases} \delta\sqrt n,&t_0-\dfrac1n<t<t_0, \\ \dfrac{\sqrt{n(2\pi-\delta^2t_0^{2r}\cosh2\beta t_0)}}{(\sigma_1+1/n)^r\sqrt{\cosh(2\beta(\sigma_1+1/n))}}, &\sigma_1<t<\sigma_1+\dfrac1n, \\ 0& \text{otherwise}. \end{cases} \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \|Ff_n\|^2_{L_2(\Delta_{\sigma_1})}=\int_{t_0-1/n}^{t_0}\delta^2n\,dt=\delta^2 \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, &\frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff_n(t)|^2\cosh2\beta t\,dt \\ &\qquad =\frac{\delta^2n}{2\pi}\int_{t_0-1/n}^{t_0} t^{2r}\cosh2\beta t\,dt \\ &\qquad\qquad+\frac{n(2\pi-\delta^2t_0^{2r}\cosh2\beta t_0)}{2\pi(\sigma_1+1/n)^{2r}\cosh(2\beta(\sigma_1+1/n))} \int_{\sigma_1}^{\sigma_1+1/n}t^{2r}\cosh2\beta t\,dt \\ &\qquad \leqslant\frac{\delta^2}{2\pi}t_0^{2r}\cosh2\beta t_0+1-\frac{\delta^2}{2\pi}t_0^{2r}\cosh2\beta t_0=1. \end{aligned} \end{equation*} \notag $$
Hence the functions $f_n$ are admissible in problem (3.11). From (3.10) we obtain
$$ \begin{equation*} \begin{aligned} \, &E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) \\ &\qquad\geqslant\frac1{2\pi}\int_{\mathbb R}t^{2k}|Ff_n(t)|^2\,dt \\ &\qquad =\frac1{2\pi}\int_{t_0-1/n}^{t_0}t^{2k}\delta^2n\,dt+\frac{n(2\pi-\delta^2t_0^{2r}\cosh2\beta t_0)}{2\pi(\sigma_1+1/n)^{2r}\cosh(2\beta(\sigma_1+1/n))} \int_{\sigma_1}^{\sigma_1+1/n}t^{2k}\,dt \\ &\qquad =\frac{\delta^2n(t_0^{2k+1}-(t_0-1/n)^{2k+1})}{2\pi(2k+1)} \\ &\qquad\qquad +\frac{n(2\pi-\delta^2t_0^{2r}\cosh2\beta t_0)((\sigma_1+1/n)^{2k+1}-\sigma_1^{2k+1})}{2\pi(2k+1)(\sigma_1+1/n)^{2r} \cosh(2\beta(\sigma_1+1/n))}. \end{aligned} \end{equation*} \notag $$
Taking the limit as $n\to\infty$ yields
$$ \begin{equation*} \begin{aligned} \, &E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) \\ &\qquad\geqslant\frac{\delta^2t_0^{2k}}{2\pi}+\frac{2\pi-\delta^2t_0^{2r}\cosh2\beta t_0}{2\pi\sigma_1^{2(r-k)} \cosh2\beta\sigma_1} \\ &\qquad =\frac{\delta^2t_0^{2k}}{2\pi}\biggl(1-\frac{t_0^{2(r-k)}\cosh2\beta t_0}{\sigma_1^{2(r-k)} \cosh2\beta\sigma_1}\biggr)+\frac1{\sigma_1^{2(r-k)} \cosh2\beta\sigma_1} \\ &\qquad=\lambda_1\frac{\delta^2}{2\pi}+\lambda_2. \end{aligned} \end{equation*} \notag $$

Let $(\sigma_1,\sigma)\in\Sigma_3$. For each $n\in\mathbb N$ such that $\sigma<\widehat t_0-1/n$ consider the function $f_n$ such that

$$ \begin{equation*} Ff_n(t)= \begin{cases} \delta\sqrt n,&\widehat t_0-\dfrac1n<t<\widehat t_0, \\ 0& \text{otherwise}. \end{cases} \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \|Ff_n\|^2_{L_2(\Delta_{\sigma_1})}=\int_{\widehat t_0-1/n}^{\widehat t_0}\delta^2n\,dt=\delta^2 \end{equation*} \notag $$
and
$$ \begin{equation*} \begin{aligned} \, \frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff_n(t)|^2\cosh2\beta t\,dt &=\frac{\delta^2n}{2\pi}\int_{\widehat t_0-1/n}^{\widehat t_0} t^{2r}\cosh2\beta t\,dt \\ &\leqslant\frac{\delta^2}{2\pi}\widehat t_0^{\,2r}\cosh2\beta\widehat t_0=1. \end{aligned} \end{equation*} \notag $$
Thus, the $f_n$ are admissible functions in (3.11). From (3.10) we obtain
$$ \begin{equation*} \begin{aligned} \, &E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) \\ &\qquad\geqslant\frac1{2\pi}\int_{\mathbb R}t^{2k}|Ff_n(t)|^2\,dt =\frac1{2\pi}\int_{\widehat t_0-1/n}^{\widehat t_0}t^{2k}\delta^2n\,dt \\ &\qquad =\frac{\delta^2n(\widehat t_0^{\,2k+1}-(\widehat t_0-1/n)^{2k+1})}{2\pi(2k+1)}. \end{aligned} \end{equation*} \notag $$
Taking the limit as $n\to\infty$ yields
$$ \begin{equation*} E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)\geqslant\frac{\delta^2\widehat t_0^{\,2k}}{2\pi}=\lambda_1\frac{\delta^2}{2\pi} +\lambda_2. \end{equation*} \notag $$

Let $(\sigma_1,\sigma)\in\Sigma_4$, and let $t_0$ be the point defined as for $(\sigma_1,\sigma)\in\Sigma_2$. Set $\xi=h_1(\sigma^{2k})$. Since $\sigma\leqslant h(\sigma_1)$, we obtain $\xi\leqslant t_0<\sigma_1$. Moreover, as $\sigma\geqslant\widehat\sigma$, it follows that $\xi\geqslant\widehat t_0$. Hence $\xi^{2r}\cosh2\beta\xi\geqslant2\pi/\delta^2$. For each $n\in\mathbb N$ satisfying $1/n<\sigma$ and $\xi-1/n>\sigma$ consider the function $f_n$ such that

$$ \begin{equation*} Ff_n(t)= \begin{cases} \sqrt n\sqrt{\delta^2-\dfrac{2\pi}{\xi^{2r}\cosh2\beta\xi}},&\sigma-\dfrac1n<t<\sigma, \\ \dfrac{\sqrt{2\pi n}}{\xi^r\sqrt{\cosh2\beta\xi}},&\xi-\dfrac1n<t<\xi, \\ 0& \text{otherwise}. \end{cases} \end{equation*} \notag $$
Then we have
$$ \begin{equation*} \begin{gathered} \, \|Ff_n\|^2_{L_2(\Delta_{\sigma_1})}=\int_{\sigma-1/n}^\sigma n\biggl(\delta^2- \dfrac{2\pi}{\xi^{2r}\cosh2\beta\xi}\biggr)\,dt+\int_{\xi-1/n}^\xi\frac {2\pi n}{\xi^{2r}\cosh2\beta\xi}\,dt=\delta^2, \\ \frac1{2\pi}\int_{|t|>\sigma}\!t^{2r}|Ff_n(t)|^2\cosh2\beta t\,dt=\frac n{\xi^{2r}\cosh2\beta\xi}\int_{\xi-1/n}^\xi t^{2r}\cosh2\beta t\,dt\leqslant1. \end{gathered} \end{equation*} \notag $$
Thus the functions $f_n$ are admissible in problem (3.11). By (3.10) we have
$$ \begin{equation*} \begin{aligned} \, &E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) \\ &\qquad\geqslant\frac1{2\pi}\int_{\mathbb R}t^{2k}|Ff_n(t)|^2\,dt \\ &\qquad =\frac n{2\pi}\biggl(\delta^2-\dfrac{2\pi}{\xi^{2r}\cosh2\beta\xi}\biggr) \int_{\sigma-1/n}^\sigma t^{2k}\,dt+\frac n{\xi^{2r}\cosh2\beta\xi}\int_{\xi-1/n}^\xi t^{2k}\,dt \\ &\qquad =\frac n{2\pi}\biggl(\delta^2-\dfrac{2\pi}{\xi^{2r}\cosh2\beta\xi}\biggr) \frac{\sigma^{2k+1}-(\sigma-1/n)^{2k+1}}{2k+1} \\ &\qquad\qquad +\frac n{\xi^{2r}\cosh2\beta\xi}\frac{\xi^{2k+1}-(\xi-1/n)^{2k+1}}{2k+1}. \end{aligned} \end{equation*} \notag $$
Taking the limit as $n\to\infty$ yields
$$ \begin{equation*} \begin{aligned} \, E^2(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) &\geqslant\frac {\sigma^{2k}}{2\pi}\biggl(\delta^2-\dfrac{2\pi}{\xi^{2r}\cosh2\beta\xi}\biggr)+\frac {\xi^{2k}}{\xi^{2r}\cosh2\beta\xi} \\ &=\lambda_1\frac{\delta^2}{2\pi}+\lambda_2. \end{aligned} \end{equation*} \notag $$

We look for optimal recovery methods $m_a\colon L_2(\Delta_{\sigma_1})\to L_2(\mathbb R)$ in the class of maps with the following representation in terms of Fourier transforms:

$$ \begin{equation} Fm_a(y)(t)= \begin{cases} (it)^ka(t)y(t),&t\in\Delta_{\sigma_1}, \\ 0,&t\notin\Delta_{\sigma_1}. \end{cases} \end{equation} \tag{3.14} $$
For an estimate of the error of such a method we must estimate the value of the extremal problem
$$ \begin{equation} \begin{gathered} \, \|f^{(k)}-m_a(y)\|_{L_2(\mathbb R)}\to\max, \\ \|Ff-y\|_{L_2(\Delta_{\sigma_1})}\leqslant\delta, \qquad f\in H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R). \end{gathered} \end{equation} \tag{3.15} $$
Considering Fourier images in the functional to be maximized, from Plancherel’s theorem we obtain that the square of the value of problem (3.15) is the value of the following problem:
$$ \begin{equation} \begin{gathered} \, \frac 1{2\pi}\int_{-\sigma_1}^{\sigma_1}t^{2k}|Ff(t)-a(t)y(t)|^2\,dt+\frac 1{2\pi}\int_{|t|>\sigma_1}t^{2k}|Ff(t)|^2\,dt\to\max, \\ \int_{-\sigma_1}^{\sigma_1}|Ff(t)-y(t)|^2 \,dt\leqslant\delta^2, \qquad \frac1{2\pi}\int_{|t|>\sigma}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt\leqslant1. \end{gathered} \end{equation} \tag{3.16} $$

Note that on pairs $(f,y)$ admissible for this problem, where $f\in\mathcal B_{\sigma,2}(\mathbb R)$ and $y=Ff$, the functional to be maximized takes the form

$$ \begin{equation*} \frac 1{2\pi}\int_{-\sigma}^\sigma t^{2k}|Ff(t)|^2|1-a(t)|^2\,dt. \end{equation*} \notag $$
Hence, if the function $a$ is not almost everywhere equal to one on $\Delta_\sigma$, then, as $\mathcal B_{\sigma,2}(\mathbb R)$ is a linear space, the value of problem (3.16) (and therefore of (3.15)) is infinite, that is, the method with this $a$ has an infinitely large error.

Let $a\equiv1$ on $\Delta_\sigma$. We estimate the functional maximized in (3.16) from above by representing it as a sum of three terms,

$$ \begin{equation*} \begin{gathered} \, I_1=\frac1{2\pi}\int_{-\sigma}^\sigma t^{2k}|Ff(t)-y(t)|^2\,dt, \\ I_2=\frac1{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}t^{2k}|Ff(t)-a(t)y(t)|^2\,dt \end{gathered} \end{equation*} \notag $$
and
$$ \begin{equation*} I_3=\frac1{2\pi}\int_{|t|>\sigma_1}t^{2k}|Ff(t)|^2\,dt. \end{equation*} \notag $$

We show that

$$ \begin{equation} I_1\leqslant\frac{\lambda_1}{2\pi}\int_{-\sigma}^\sigma|Ff(t)-y(t)|^2\,dt \end{equation} \tag{3.17} $$
in all domains $\Sigma_j$, $j=1,2,3,4$. In fact, the inequality
$$ \begin{equation*} I_1\leqslant\frac{\sigma^{2k}}{2\pi}\int_{-\sigma}^\sigma|Ff(t)-y(t)|^2\,dt \end{equation*} \notag $$
is obvious. Since $\sigma^{2k}=\lambda_1$ in $\Sigma_1$ and $\Sigma_4$, (3.17) holds for these domains. If $(\sigma_1,\sigma)\in\Sigma_2$, then
$$ \begin{equation*} \lambda_1=h^{2k}(\sigma_1)\geqslant\sigma^{2k}, \end{equation*} \notag $$
while if $(\sigma_1,\sigma)\in\Sigma_3$, then
$$ \begin{equation*} \lambda_1=\widehat\sigma^{2k}\geqslant\sigma^{2k}, \end{equation*} \notag $$
so that (3.17) holds for all domains.

Next we estimate $I_2$. Using the Cauchy–Schwarz–Bunyakovsky inequality we obtain

$$ \begin{equation} \begin{aligned} \, \notag &t^{2k}|Ff(t)-a(t)y(t)|^2 \\ \notag &\qquad=t^{2k}|(1-a(t))Ff(t)+a(t)(Ff(t)-y(t))|^2 \\ &\qquad \leqslant t^{2k}\biggl(\frac{|1-a(t)|^2} {\lambda_2t^{2r}\cosh2\beta t}+\frac{|a(t)|^2}{\lambda_1}\biggr)(\lambda_2t^{2r} |Ff(t)|^2\cosh2\beta t+\lambda_1|Ff(t)-y(t)|^2). \end{aligned} \end{equation} \tag{3.18} $$
Set
$$ \begin{equation*} S_a=\operatorname*{ess\,max}_{\sigma<|t|\leqslant\sigma_1}t^{2k}\biggl(\frac{|1-a(t)|^2} {\lambda_2t^{2r}\cosh2\beta t}+\frac{|a(t)|^2}{\lambda_1}\biggr). \end{equation*} \notag $$
Then integrating (3.18) we arrive at the following bound for $I_2$:
$$ \begin{equation} I_2\leqslant\frac{S_a}{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}(\lambda_2t^{2r} |Ff(t)|^2\cosh2\beta t+\lambda_1|Ff(t)-y(t)|^2)\,dt. \end{equation} \tag{3.19} $$
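The key step (3.18) is the elementary weighted Cauchy–Schwarz inequality $|w_1+w_2|^2\leqslant(s_1+s_2)\bigl(|w_1|^2/s_1+|w_2|^2/s_2\bigr)$ for positive weights $s_1,s_2$, applied with $w_1=(1-a(t))Ff(t)$, $w_2=a(t)(Ff(t)-y(t))$, $s_1=|1-a(t)|^2/(\lambda_2t^{2r}\cosh2\beta t)$ and $s_2=|a(t)|^2/\lambda_1$. A minimal numerical sketch of this inequality (random complex values and weights, not tied to the parameters of the paper):

```python
import random

# Sketch: check |w1 + w2|^2 <= (s1 + s2) * (|w1|^2/s1 + |w2|^2/s2)
# for random complex w1, w2 and positive weights s1, s2 (values arbitrary).
random.seed(0)
for _ in range(1000):
    w1 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    w2 = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    s1, s2 = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    lhs = abs(w1 + w2) ** 2
    rhs = (s1 + s2) * (abs(w1) ** 2 / s1 + abs(w2) ** 2 / s2)
    assert lhs <= rhs + 1e-12
```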

Now we show that

$$ \begin{equation} I_3\leqslant\frac{\lambda_2}{2\pi}\int_{|t|>\sigma_1}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt \end{equation} \tag{3.20} $$
in all domains $\Sigma_j$, $j=1,2,3,4$. We have
$$ \begin{equation*} I_3=\frac1{2\pi}\int_{|t|>\sigma_1}t^{2(k-r)}t^{2r}|Ff(t)|^2\,dt\leqslant \frac{\sigma_1^{2(k-r)}}{2\pi\cosh2\beta\sigma_1}\int_{|t|>\sigma_1}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt. \end{equation*} \notag $$
Since in $\Sigma_1$ and $\Sigma_2$ we have
$$ \begin{equation*} \lambda_2=\frac{\sigma_1^{2(k-r)}}{\cosh2\beta\sigma_1}, \end{equation*} \notag $$
inequality (3.20) holds in these domains. If $(\sigma_1,\sigma)\in\Sigma_3$, then $\sigma_1\geqslant \widehat\sigma_1$. Therefore,
$$ \begin{equation*} \lambda_2=\dfrac1{\widehat\sigma_1^{2(r-k)}\cosh2\beta\widehat\sigma_1} \geqslant\frac{\sigma_1^{2(k-r)}}{\cosh2\beta\sigma_1}. \end{equation*} \notag $$
Let $(\sigma_1,\sigma)\in\Sigma_4$. Then $\lambda_2$ is the slope of the tangent to $s$ at $(x(\xi),y(\xi))$, and $\sigma_1^{2(k-r)}\cosh^{-1}2\beta\sigma_1$ is the slope of the tangent to $s$ at $(x(t_0),y(t_0))$ (we defined $t_0$ when we considered the lower bound in the case $(\sigma_1,\sigma)\in\Sigma_2$). Since $\xi\leqslant t_0$ and $s$ is a concave function, it follows that
$$ \begin{equation*} \lambda_2\geqslant\frac{\sigma_1^{2(k-r)}}{\cosh2\beta\sigma_1}. \end{equation*} \notag $$
Thus, (3.20) holds in all domains.
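The pointwise bound behind the estimate for $I_3$ is that, for $k\leqslant r$, the factor $t^{2(k-r)}/\cosh2\beta t$ is decreasing in $t>0$, so its supremum over $|t|>\sigma_1$ is attained at $t=\sigma_1$. A small numerical illustration with sample (hypothetical) values of $r$, $k$, $\beta$ and $\sigma_1$:

```python
import math

# Sample values (hypothetical, chosen only so that k <= r):
r, k, beta, sigma1 = 3, 1, 0.7, 1.5

def g(t):
    # the factor t^{2(k-r)} / cosh(2*beta*t) from the bound for I_3
    return t ** (2 * (k - r)) / math.cosh(2 * beta * t)

ts = [sigma1 + 0.01 * j for j in range(1, 500)]
assert all(g(t) <= g(sigma1) for t in ts)                         # max at sigma1
assert all(g(ts[j]) >= g(ts[j + 1]) for j in range(len(ts) - 1))  # monotone decay
```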

Assuming that $a$ is a function such that $S_a\leqslant1$, adding (3.17), (3.19) and (3.20) we obtain the following estimate for the functional in (3.16):

$$ \begin{equation*} \begin{aligned} \, &\frac{\lambda_1}{2\pi}\int_{-\sigma}^\sigma|Ff(t)-y(t)|^2\,dt \\ &\qquad\qquad +\frac 1{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}(\lambda_2t^{2r} |Ff(t)|^2\cosh2\beta t+\lambda_1|Ff(t)-y(t)|^2)\,dt \\ &\qquad\qquad +\frac{\lambda_2}{2\pi}\int_{|t|>\sigma_1}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt \\ &\qquad =\frac{\lambda_1}{2\pi}\int_{-\sigma_1}^{\sigma_1}|Ff(t)-y(t)|^2\,dt +\frac{\lambda_2}{2\pi} \int_{|t|>\sigma}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt \\ &\qquad\leqslant\lambda_1\frac{\delta^2}{2\pi}+\lambda_2. \end{aligned} \end{equation*} \notag $$
Hence
$$ \begin{equation*} e(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta,m_a)\leqslant\sqrt{\lambda_1\frac{\delta^2}{2\pi}+\lambda_2}. \end{equation*} \notag $$
Taking (3.12) into account we obtain
$$ \begin{equation*} E(D^k,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)=\sqrt{\lambda_1\frac{\delta^2}{2\pi}+\lambda_2}, \end{equation*} \notag $$
and the methods $m_a$ are optimal.

We show that there exist functions $a$ such that $S_a\leqslant1$. Note (by completing the square) that the condition $S_a\leqslant1$ is equivalent to the following one: for almost all $\sigma<|t|\leqslant\sigma_1$ we have

$$ \begin{equation*} \biggl|a(t)-\frac{\lambda_1}{\lambda_1+\lambda_2t^{2r}\cosh2\beta t}\biggr|^2\leqslant \frac{\lambda_1\lambda_2t^{2(r-k)}\cosh2\beta t(-t^{2k}+\lambda_1+ \lambda_2t^{2r}\cosh2\beta t)}{(\lambda_1+\lambda_2t^{2r}\cosh2\beta t)^2}. \end{equation*} \notag $$
If
$$ \begin{equation} -t^{2k}+\lambda_1+\lambda_2t^{2r}\cosh2\beta t\geqslant0 \end{equation} \tag{3.21} $$
for $\sigma<|t|\leqslant\sigma_1$, then it is obvious that such functions $a$ exist and can be described by equality (3.5).
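The completing-the-square computation can be checked numerically. In the sketch below (arbitrary, hypothetical values of $\lambda_1$, $\lambda_2$, $r$, $k$, $\beta$ and $t$ satisfying (3.21), with $a$ taken real for simplicity) a point on the boundary of the disc makes the expression under the $\operatorname{ess\,max}$ defining $S_a$ equal to $1$ exactly, while the centre lies strictly inside:

```python
import math

# Hypothetical sample parameters (chosen so that (3.21) holds at this t):
lam1, lam2, r, k, beta, t = 2.0, 0.5, 3, 1, 0.7, 1.3

C = math.cosh(2 * beta * t)           # cosh(2*beta*t)
A, B = lam1, lam2 * t ** (2 * r) * C  # the two weights lambda_1 and lambda_2 t^{2r} cosh

c = A / (A + B)                                                   # centre of the disc
rho = A * B * (-t ** (2 * k) + A + B) / (t ** (2 * k) * (A + B) ** 2)  # squared radius

def S(a):
    # the expression under the ess max in the definition of S_a, at this fixed t
    return t ** (2 * k) * ((1 - a) ** 2 / B + a ** 2 / A)

a_boundary = c + math.sqrt(rho)            # a real point on the boundary circle
assert abs(S(a_boundary) - 1.0) < 1e-12    # boundary: the expression equals 1
assert S(c) < 1.0                          # centre: strictly inside
```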

If $(\sigma_1,\sigma)\in\Sigma_1$, then the straight line $y=\lambda_1+\lambda_2x$ is parallel to the tangent to $s$ at the point $(x(t_0),y(t_0))$, where $t_0$ is defined by the equality

$$ \begin{equation*} \frac{kt_0^{2(k-r)}}{r\cosh2\beta t_0+t_0\beta\sinh2\beta t_0}=\frac1{\sigma_1^{2(r-k)}\cosh2\beta \sigma_1} \end{equation*} \notag $$
(see (3.1)), and since $\sigma\geqslant h(\sigma_1)$, this line does not lie below the tangent. Hence, as $s$ is concave, for all $x\geqslant0$ we have the inequality $\lambda_1+\lambda_2x\geqslant s(x)$. This yields condition (3.21). In the other three cases the lines $y=\lambda_1+\lambda_2x$ are tangent to $s$, and condition (3.21) holds for the same reasons.

(3) Let $k=0$, $\sigma_1>0$, $\sigma\geqslant0$ and $\sigma\leqslant\sigma_1$. As shown above, the functions $f_n$ defined by (3.13) are admissible in problem (3.11). Hence

$$ \begin{equation*} \begin{aligned} \, &E^2(D^0,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta) \\ &\qquad\geqslant\frac1{2\pi}\int_{\mathbb R}|Ff_n(t)|^2\,dt \\ &\qquad =\frac1{2\pi}\int_{\sigma-1/n}^\sigma\delta^2n\,dt +\frac n{(\sigma_1+1/n)^{2r}\cosh(2\beta(\sigma_1+1/n))}\int_{\sigma_1}^{\sigma_1+1/n}\,dt \\ &\qquad =\frac{\delta^2}{2\pi}+\frac1{(\sigma_1+1/n)^{2r}\cosh2\beta(\sigma_1+1/n)}. \end{aligned} \end{equation*} \notag $$
Taking the limit as $n\to\infty$ we obtain
$$ \begin{equation} E^2(D^0,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)\geqslant\frac{\delta^2}{2\pi}+\widetilde\lambda,\qquad\widetilde\lambda= \frac1{\sigma_1^{2r}\cosh2\beta\sigma_1}. \end{equation} \tag{3.22} $$
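The passage to the limit in (3.22) only uses the continuity of $t\mapsto t^{2r}\cosh2\beta t$ at $t=\sigma_1$; a trivial numerical sanity check with sample (hypothetical) values of $r$, $\beta$ and $\sigma_1$:

```python
import math

# Hypothetical sample parameters:
r, beta, sigma1 = 2, 0.5, 1.0
lam_tilde = 1 / (sigma1 ** (2 * r) * math.cosh(2 * beta * sigma1))

def term(n):
    # the n-dependent term 1/((sigma1 + 1/n)^{2r} cosh(2 beta (sigma1 + 1/n)))
    s = sigma1 + 1 / n
    return 1 / (s ** (2 * r) * math.cosh(2 * beta * s))

assert abs(term(10 ** 6) - lam_tilde) < 1e-5   # converges to lam_tilde
assert term(10) < lam_tilde                    # approaches the limit from below
```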

We look for optimal recovery methods $m_a\colon L_2(\Delta_{\sigma_1})\to L_2(\mathbb R)$ among the maps with representation (3.14) for $k=0$ in terms of Fourier transforms. Following the above scheme we assume that $a\equiv1$ on $\Delta_\sigma$ and estimate the functional maximized in (3.16) (for $k=0$) by representing it as a sum of three terms:

$$ \begin{equation*} \begin{aligned} \, I_1=&\frac1{2\pi}\int_{-\sigma}^\sigma|Ff(t)-y(t)|^2\,dt, \\ I_2=&\frac1{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}|Ff(t)-a(t)y(t)|^2\,dt, \\ I_3=&\frac 1{2\pi}\int_{|t|>\sigma_1}|Ff(t)|^2\,dt. \end{aligned} \end{equation*} \notag $$
We estimate $I_2$. Using the Cauchy–Schwarz–Bunyakovsky inequality we obtain
$$ \begin{equation} \begin{aligned} \, \notag &|Ff(t)-a(t)y(t)|^2 \\ \notag &\qquad=|(1-a(t))Ff(t)+a(t)(Ff(t)-y(t))|^2 \\ &\qquad \leqslant\biggl(\frac{|1-a(t)|^2} {\widetilde\lambda t^{2r}\cosh2\beta t}+|a(t)|^2\biggr)\bigl(\widetilde\lambda t^{2r} |Ff(t)|^2\cosh2\beta t+|Ff(t)-y(t)|^2\bigr). \end{aligned} \end{equation} \tag{3.23} $$
Set
$$ \begin{equation*} \widetilde S_a=\operatorname*{ess\,max}_{\sigma<|t|\leqslant\sigma_1}\biggl(\frac{|1-a(t)|^2} {\widetilde\lambda t^{2r}\cosh2\beta t}+|a(t)|^2\biggr). \end{equation*} \notag $$
Then integrating (3.23) we arrive at the following estimate for $I_2$:
$$ \begin{equation*} I_2\leqslant\frac{\widetilde S_a}{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}(\widetilde\lambda t^{2r} |Ff(t)|^2\cosh2\beta t+|Ff(t)-y(t)|^2)\,dt. \end{equation*} \notag $$
For $I_3$ we have
$$ \begin{equation*} I_3\leqslant\frac{\widetilde\lambda}{2\pi}\int_{|t|>\sigma_1}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt. \end{equation*} \notag $$

Assume that for the function $a$ we have $\widetilde S_a\leqslant1$. Then taking the estimates for $I_2$ and $I_3$ into account we obtain the following estimate for the functional in (3.16) (for $k=0$):

$$ \begin{equation*} \begin{aligned} \, &\frac1{2\pi}\int_{-\sigma}^\sigma|Ff(t)-y(t)|^2\,dt +\frac 1{2\pi}\int_{\sigma<|t|\leqslant\sigma_1}\!(\widetilde\lambda t^{2r} |Ff(t)|^2\cosh2\beta t+|Ff(t)-y(t)|^2)\,dt \\ &\qquad\qquad +\frac{\widetilde\lambda}{2\pi}\int_{|t|>\sigma_1}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt \\ &\qquad =\frac1{2\pi}\int_{-\sigma_1}^{\sigma_1}|Ff(t)-y(t)|^2\,dt +\frac{\widetilde\lambda}{2\pi} \int_{|t|>\sigma}t^{2r}|Ff(t)|^2\cosh2\beta t\,dt \\ &\qquad\leqslant\frac{\delta^2}{2\pi}+\widetilde\lambda. \end{aligned} \end{equation*} \notag $$
Hence
$$ \begin{equation*} e(D^0,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta,m_a)\leqslant\sqrt{\frac{\delta^2}{2\pi}+\widetilde\lambda}. \end{equation*} \notag $$
Taking (3.22) into account we obtain
$$ \begin{equation*} E(D^0,H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R),I_{\sigma_1},\delta)=\sqrt{\frac{\delta^2}{2\pi}+\widetilde\lambda}, \end{equation*} \notag $$
and the methods $m_a$ are optimal.

The condition $\widetilde S_a\leqslant1$ is equivalent to the following one: for almost all ${\sigma\,{<}\,|t|\,{\leqslant}\,\sigma_1}$ we have the inequality

$$ \begin{equation*} \biggl|a(t)-\frac1{1+\widetilde\lambda t^{2r}\cosh2\beta t}\biggr|\leqslant \frac{\widetilde\lambda t^{2r}\cosh2\beta t}{1+\widetilde\lambda t^{2r}\cosh2\beta t}. \end{equation*} \notag $$
It is obvious that such $a$ exist and are described by (3.6).
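As in the case $k\geqslant1$, this equivalence can be verified numerically: with $B=\widetilde\lambda t^{2r}\cosh2\beta t$ the disc has centre $1/(1+B)$ and radius $B/(1+B)$, and on its boundary the expression under the $\operatorname{ess\,max}$ defining $\widetilde S_a$ equals $1$ exactly (hypothetical sample values, real $a$ for simplicity):

```python
import math

# Hypothetical sample parameters:
lam_t, r, beta, t = 0.8, 2, 0.5, 1.4
B = lam_t * t ** (2 * r) * math.cosh(2 * beta * t)

def S_tilde(a):
    # the expression under the ess max in the definition of S_tilde_a, at this t
    return (1 - a) ** 2 / B + a ** 2

centre, radius = 1 / (1 + B), B / (1 + B)
assert abs(S_tilde(centre + radius) - 1.0) < 1e-12   # boundary point a = 1
assert abs(S_tilde(centre - radius) - 1.0) < 1e-12   # opposite boundary point
assert S_tilde(centre) < 1.0                         # centre: strictly inside
```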

The proof is complete.

§ 4. Discussion of optimal methods

When we recover $f^{(k)}$ on the class $H_2^{r,\beta}+\mathcal B_{\sigma,2}(\mathbb R)$ from an inaccurately given Fourier transform of the function $f$ on the interval $[-\sigma_1,\sigma_1]$, the following two questions arise: can the interval on which the Fourier transform is prescribed be reduced without increasing the optimal recovery error, and can the subspace $\mathcal B_{\sigma,2}(\mathbb R)$ on which the methods are exact be extended without increasing this error?

In other words, the question is whether part of the information on the function $f$ that we obtain is excessive, and whether among the family of optimal methods we can find one that is exact on a wider subspace without increasing the optimal recovery error. We consider the case $k\geqslant1$. The answers to these questions depend on the domain $\Sigma_j$, $j=1,2,3,4$, containing the point $(\sigma_1,\sigma)$.

When $(\sigma_1,\sigma)\in\Sigma_1$, it is clear from (3.4) that the optimal recovery error grows as $\sigma_1$ decreases or $\sigma$ increases. Thus, in this case the answers to both questions are negative.

If $(\sigma_1,\sigma)\in\Sigma_2$, then the optimal recovery error is the same as at the point $(\sigma_1,h(\sigma_1))$. This means that we can extend the original subspace $\mathcal B_{\sigma,2}(\mathbb R)$ to $\mathcal B_{h(\sigma_1),2}(\mathbb R)$ without increasing the optimal recovery error.

For $(\sigma_1,\sigma)\in\Sigma_4$ we can reduce the interval on which the information about $f$ is prescribed to $[-\sigma_1',\sigma_1']$, where $\sigma_1'$ is such that $h(\sigma_1')=\sigma$.

Finally, if $(\sigma_1,\sigma)\in\Sigma_3$, then we can both reduce the interval on which the information on $f$ is prescribed to $[-\widehat\sigma_1,\widehat\sigma_1]$ and extend the subspace to $\mathcal B_{\widehat\sigma,2}(\mathbb R)$.

We show these transitions schematically in Figure 4.


Bibliography

1. S. M. Nikol'skii, Quadrature formulae, 4th ed., Nauka, Moscow, 1988, 256 pp.; Spanish translation of the 3rd ed.: S. Nikolski, Fórmulas de cuadratura, Editorial Mir, Moscow, 1990, 293 pp.
2. S. M. Nikol'skii, “On estimates for approximations by quadrature formulae”, Uspekhi Mat. Nauk, 5:2(36) (1950), 165–177 (Russian)
3. G. G. Magaril-Il'yaev and K. Yu. Osipenko, “Exactness and optimality of methods for recovering functions from their spectrum”, Proc. Steklov Inst. Math., 293 (2016), 194–208
4. E. A. Balova and K. Yu. Osipenko, “Optimal recovery methods for solutions of the Dirichlet problem that are exact on subspaces of spherical harmonics”, Math. Notes, 104:6 (2018), 781–788
5. S. A. Unuchek, “Optimal recovery methods exact on trigonometric polynomials for the solution of the heat equation”, Math. Notes, 113:1 (2023), 116–128
6. C. A. Micchelli and T. J. Rivlin, “A survey of optimal recovery”, Optimal estimation in approximation theory (Freudenstadt 1976), The IBM Research Symposia Series, Plenum, New York, 1977, 1–54
7. J. F. Traub and H. Woźniakowski, A general theory of optimal algorithms, ACM Monogr. Ser., Academic Press, New York–London, 1980, xiv+341 pp.
8. L. Plaskota, Noisy information and computational complexity, Cambridge Univ. Press, Cambridge, 1996, xii+308 pp.
9. K. Yu. Osipenko, Optimal recovery of analytic functions, Nova Science Publ., Huntington, NY, 2000, 220 pp.
10. K. Yu. Osipenko, Introduction to optimal recovery theory, Lan', St Petersburg, 2022, 388 pp. (Russian)
11. K. Yu. Osipenko, “The Hardy–Littlewood–Pólya inequality for analytic functions in Hardy–Sobolev spaces”, Sb. Math., 197:3 (2006), 315–334
12. E. M. Stein and G. Weiss, Introduction to Fourier analysis on Euclidean spaces, Princeton Math. Ser., 32, Princeton Univ. Press, Princeton, NJ, 1971, x+297 pp.
