Convergence theorems for split feasibility problems on a finite sum of monotone operators and a family of nonexpansive mappings
Journal of Inequalities and Applications volume 2018, Article number: 205 (2018)
Abstract
In this paper, we present two iterative algorithms for approximating a solution of the split feasibility problem on zeros of a sum of monotone operators and fixed points of a finite family of nonexpansive mappings. Weak and strong convergence theorems are proved in the framework of Hilbert spaces under some mild conditions. We apply the main results to the problems of finding a common zero of a sum of inverse strongly monotone operators and maximal monotone operators, finding a common zero of a finite family of maximal monotone operators, solving the multiple sets split common null point problem, and solving the multiple sets split convex feasibility problem. Further applications of the main results are also provided.
1 Introduction
A very common problem in different areas of mathematics and the physical sciences consists of finding a point in the intersection of convex sets. It is formulated as finding a point \(z\in H\) satisfying the property
$$\begin{aligned} z\in \bigcap_{i=1}^{M}C_{i}, \end{aligned}$$
where \(C_{i}\), \(i=1, \ldots, M\), are nonempty, closed, and convex subsets of a Hilbert space H. This problem is called the convex feasibility problem (CFP). There are various applications of CFP in many applied disciplines as diverse as applied mathematics, approximation theory, image recovery and signal processing, control theory, biomedical engineering, communications, and geophysics (see [1–7] and the references therein).
The problem of finding \(z\in H_{1}\) such that \(z\in C\) and \(Lz\in D\) is called the split feasibility problem (SFP), where C and D are nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(L:H_{1}\rightarrow H_{2}\) is a bounded linear operator. Let \(L^{-1}(D)=\{x: Lx\in D\}\); then the SFP can be viewed as a special case of the CFP since it can be rewritten as \(z\in C\cap L^{-1}(D)\). However, the methodologies for studying the SFP are actually different from those for the CFP; see [8–14].
The theory of monotone operators has emerged as a powerful and effective tool for studying a wide class of problems arising in different branches of the social, engineering, and pure sciences in a unified and general framework. One important notion in this theory is the generalized sum of two monotone operators; see [15, 16] and the references therein. In recent years, monotone operators have received a lot of attention in connection with finding zero points of monotone operators and fixed points of Lipschitz continuous mappings; see [17–22] and the references therein. The first algorithm for approximating zero points of a maximal monotone operator was introduced by Martinet [23], who considered the proximal point algorithm. Passty [24] then introduced a forward-backward splitting method for finding zero points of the sum of two operators. The problem of finding zero points of the sum of two operators has various applications; see, for example, [25–29] and the references therein.
The CFP admits several generalizations, which can be formulated in various ways: finding a common fixed point of nonexpansive operators, finding a common minimum of convex functionals, finding a common zero of maximal monotone operators, solving a system of variational inequalities, and solving a system of convex inequalities. Surveys of methods for solving such problems can be found in [2, 4].
Recently, several authors have introduced and studied algorithms for finding a common solution of inclusion problems and fixed point problems in the framework of Hilbert spaces; see [30–32]. Cho et al. [30] considered the problem of finding a common solution of zero point problems involving two monotone operators and fixed point problems involving asymptotically strictly pseudocontractive mappings based on a one-step iterative method and proved weak convergence theorems in the framework of Hilbert spaces.
In this paper, motivated and inspired by the above literature, we consider iterative algorithms for finding a solution of the split feasibility problem over zeros of a finite sum of α-inverse strongly monotone operators and maximal monotone operators and fixed points of nonexpansive mappings. That is, we consider the following problem: Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(A_{i}:H_{1}\rightarrow H_{1}\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, \(B_{i}:H_{1} \rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), be maximal monotone operators, \(T_{j}:H_{2}\rightarrow H_{2}\), \(j=1, \ldots, N\), be nonexpansive mappings, and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. We are interested in the problem of finding a solution \(p\in H_{1}\) such that
$$\begin{aligned} p\in \mathcal{F}= \Biggl( \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) \Biggr) \cap L^{-1} \Biggl( \bigcap_{j=1}^{N}F(T_{j}) \Biggr), \end{aligned}$$
where \(\mathcal{F}\neq \emptyset \). Weak and strong convergence theorems will be provided under some mild conditions.
The paper is organized as follows. Section 2 gathers some definitions and lemmas of geometry of Hilbert spaces and monotone operators, which will be needed in the remaining sections. In Sect. 3, we prepare an iterative algorithm and prove the weak and strong convergence theorems. Finally, in Sect. 4, the results of Sect. 3 are applied to solve CFP, multiple-set null point problems, variational inequality problems, fixed point problems, and equilibrium problems.
2 Preliminaries
Throughout this paper, H will be a Hilbert space with norm \(\|\cdot \|\) and inner product \(\langle \cdot,\cdot \rangle \). We now provide some basic concepts, definitions, and lemmas which will be used in the sequel. We write \(x_{n} \rightarrow x\) to indicate that the sequence \(\{x_{n}\}\) converges strongly to x and \(x_{n} \rightharpoonup x\) to indicate that \(\{x_{n}\}\) converges weakly to x.
Let \(T:H\rightarrow H\) be a mapping. We say that T is a Lipschitz mapping if there exists \(L\geq 0\) such that
$$\begin{aligned} \Vert Tx-Ty \Vert \leq L \Vert x-y \Vert, \quad \forall x,y\in H. \end{aligned}$$
The number L, associated with T, is called a Lipschitz constant. If \(L=1\), we say that T is a nonexpansive mapping, that is,
$$\begin{aligned} \Vert Tx-Ty \Vert \leq \Vert x-y \Vert, \quad \forall x,y\in H. \end{aligned}$$
We will say that T is firmly nonexpansive if
$$\begin{aligned} \Vert Tx-Ty \Vert ^{2}\leq \langle Tx-Ty, x-y\rangle, \quad \forall x,y\in H. \end{aligned}$$
The set of fixed points of T will be denoted by \(F(T)\), that is, \(F(T)=\{x\in H : Tx=x\}\). It is well known that if T is nonexpansive, then \(F(T)\) is closed and convex. Moreover, every nonexpansive operator \(T:H\rightarrow H\) satisfies the following inequality:
$$\begin{aligned} \bigl\langle (x-Tx)-(y-Ty), Tx-Ty \bigr\rangle \geq -\frac{1}{2} \bigl\Vert (x-Tx)-(y-Ty) \bigr\Vert ^{2}, \quad \forall x,y\in H. \end{aligned}$$
Therefore, for all \(x\in H\) and \(y\in F(T)\),
$$\begin{aligned} \langle x-Tx, y-Tx\rangle \leq \frac{1}{2} \Vert x-Tx \Vert ^{2}. \end{aligned}$$
Lemma 2.1
([35])
Let H be a real Hilbert space and \(T:H\rightarrow H\) be a nonexpansive mapping with \(F(T)\neq \emptyset \). Then the mapping \(I -T\) is demiclosed at zero, that is, if \(\{x_{n}\}\) is a sequence in H such that \(x_{n}\rightharpoonup x\) and \(\|x_{n} -Tx_{n}\|\rightarrow 0\), then \(x \in F(T)\).
A mapping \(T :H \rightarrow H\) is called α-averaged if there exists \(\alpha \in (0, 1)\) such that \(T=(1-\alpha)I+\alpha S\), where S is a nonexpansive mapping of H into H. It should be observed that firmly nonexpansive mappings are \(\frac{1}{2}\)-averaged mappings.
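As a quick numerical sanity check of this observation (the projection operator and sample points below are our own illustrative choices, not from the paper): a firmly nonexpansive map, such as a metric projection P, should satisfy \(P=\frac{1}{2}I+\frac{1}{2}S\) with \(S=2P-I\) nonexpansive.

```python
import numpy as np

def P(x):
    # metric projection onto the box [0, 1]^2: a firmly nonexpansive map
    return np.clip(x, 0.0, 1.0)

def S(x):
    # if P = (1/2)I + (1/2)S, then S = 2P - I must be nonexpansive
    return 2.0 * P(x) - x

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.linalg.norm(S(x) - S(y)) <= np.linalg.norm(x - y) + 1e-12
```

The check passes because the reflection \(2P_{C}-I\) of a projection onto a closed convex set is always nonexpansive.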
We now recall the concepts and facts on the class of monotone operators, for both single and multi-valued operators.
An operator \(A:H\rightarrow H\) is called α-inverse strongly monotone (α-ism) for a positive number α if
$$\begin{aligned} \langle Ax-Ay, x-y\rangle \geq \alpha \Vert Ax-Ay \Vert ^{2}, \quad \forall x,y\in H. \end{aligned}$$
Lemma 2.2
([21])
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let the mapping \(A:C \rightarrow H\) be α-inverse strongly monotone and \(r >0\) be a constant. Then we have
$$\begin{aligned} \bigl\Vert (I-rA)x-(I-rA)y \bigr\Vert ^{2}\leq \Vert x-y \Vert ^{2}-r(2\alpha -r) \Vert Ax-Ay \Vert ^{2} \end{aligned}$$
for all \(x,y\in C\). In particular, if \(0< r\leq 2\alpha \), then \(I-rA\) is nonexpansive.
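The inequality of Lemma 2.2 can be verified numerically on a simple example (the linear operator M and the constant \(\alpha =1/2\) below are our illustrative assumptions, not from the paper):

```python
import numpy as np

M = np.diag([1.0, 2.0])   # A(x) = Mx is alpha-ism with alpha = 1/2 for this M
alpha = 0.5

def A(x):
    return M @ x

rng = np.random.default_rng(1)
for r in (0.1, 0.5, 1.0):                 # any r in (0, 2*alpha]
    for _ in range(200):
        x, y = rng.normal(size=2), rng.normal(size=2)
        # Lemma 2.2: ||(I-rA)x-(I-rA)y||^2 <= ||x-y||^2 - r(2a-r)||Ax-Ay||^2
        lhs = np.linalg.norm((x - r * A(x)) - (y - r * A(y))) ** 2
        rhs = (np.linalg.norm(x - y) ** 2
               - r * (2 * alpha - r) * np.linalg.norm(A(x) - A(y)) ** 2)
        assert lhs <= rhs + 1e-9
```

For \(r\leq 2\alpha \) the right-hand side is at most \(\|x-y\|^{2}\), which is exactly the nonexpansiveness of \(I-rA\).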
We have the following properties from [36, 37].
Lemma 2.3
We have
- (a) The composite of finitely many averaged mappings is averaged. In particular, if \(T_{i}\) is \(\alpha_{i}\)-averaged, where \(\alpha_{i} \in (0,1)\) for \(i=1,2\), then the composite \(T_{1}T_{2}\) is α-averaged, where \(\alpha =\alpha_{1}+\alpha_{2}-\alpha_{1} \alpha_{2}\).
- (b) If A is β-ism and \(r\in (0,\beta ]\), then \(T := I-rA\) is firmly nonexpansive.
A multifunction \(B:H\rightarrow 2^{ H}\) is called a monotone operator if, for every \(x,y\in H\),
$$\begin{aligned} \langle x-y, u-v\rangle \geq 0, \quad \forall u\in Bx, \forall v\in By. \end{aligned}$$
A monotone operator \(B:H\rightarrow 2^{ H}\) is said to be maximal monotone when its graph is not properly included in the graph of any other monotone operator on the same space. For a maximal monotone operator B on H and \(\lambda >0\), we define the single-valued resolvent \(J_{\lambda }^{B} : H \rightarrow D(B)\) by \(J_{\lambda }^{B}=(I + \lambda B)^{-1}\). It is well known that \(J_{\lambda }^{B}\) is firmly nonexpansive and \(F(J_{\lambda }^{B})=B ^{-1}(0)\).
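As a concrete instance (our example, not from the paper): for \(B=\partial \vert \cdot \vert \) on the real line, the resolvent \((I+\lambda B)^{-1}\) is the classical soft-thresholding map, and one can check numerically that it is firmly nonexpansive with \(F(J_{\lambda }^{B})=B^{-1}(0)=\{0\}\).

```python
import numpy as np

lam = 0.7

def J(x):
    # resolvent (I + lam*B)^{-1} of B = subdifferential of |.| on R:
    # the soft-thresholding map
    return float(np.sign(x) * np.maximum(np.abs(x) - lam, 0.0))

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.normal(), rng.normal()
    # firmly nonexpansive: |Jx - Jy|^2 <= (Jx - Jy)(x - y)
    assert (J(x) - J(y)) ** 2 <= (J(x) - J(y)) * (x - y) + 1e-12

# F(J) = B^{-1}(0) = {0}: 0 is a fixed point, while e.g. 1.5 is not
assert J(0.0) == 0.0 and J(1.5) != 1.5
```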
Next, we collect some useful facts on monotone operators that will be used in our proof.
Lemma 2.4
([38])
Let C be a nonempty, closed, and convex subset of a real Hilbert space H and \(A:C \rightarrow H\) be an operator. If \(B:H\rightarrow 2^{ H}\) is a maximal monotone operator, then \(F(J_{\lambda }^{B}(I-\lambda A ))=(A+B)^{-1}(0)\).
Lemma 2.5
([39])
Let \(B:H\rightarrow 2^{ H}\) be a maximal monotone operator. For \(\lambda >0\), \(\mu >0\), and \(x\in H\),
$$\begin{aligned} \bigl\Vert J_{\lambda }^{B}x-J_{\mu }^{B}x \bigr\Vert \leq \frac{ \vert \lambda -\mu \vert }{\lambda } \bigl\Vert x-J_{\lambda }^{B}x \bigr\Vert . \end{aligned}$$
For each sequence \(\{x_{n}\}\subset H\), we denote by
$$\begin{aligned} \omega_{w}(x_{n})=\bigl\{ x\in H : x_{n_{k}}\rightharpoonup x \text{ for some subsequence } \{x_{n_{k}}\} \text{ of } \{x_{n}\}\bigr\} \end{aligned}$$
the set of weak cluster points of \(\{x_{n}\}\).
The following lemma plays an important role in concluding our results.
Lemma 2.6
([37])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(\{x_{n}\}\) be a sequence in H satisfying the properties:
- (i) \(\lim_{n\rightarrow \infty }\|x_{n}-u\|\) exists for each \(u\in C\);
- (ii) \(\omega_{w}(x_{n})\subset C\).
Then \(\{x_{n}\}\) converges weakly to a point in C.
3 Parallel algorithm
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(A_{i}:H_{1} \rightarrow H_{1}\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators and \(B_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), be maximal monotone operators, \(T_{j}:H_{2}\rightarrow H _{2}\), \(j=1, \ldots, N\), be nonexpansive mappings, \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. We will denote by \(L^{\ast }\) the adjoint operator of L. Let \(\{\beta_{n}\}\) and \(\{\lambda_{n}\}\) be sequences of positive real numbers. For \(x_{1}\in H_{1}\), we introduce the following parallel algorithm:
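For intuition, here is a hedged numerical sketch of an iteration of this parallel type; the exact coupling of the steps below is our assumption for illustration and is not claimed to reproduce the paper's algorithm (3.1). We make the degenerate choices \(A_{i}=0\) (which is α-ism for every α), \(B_{i}=\partial \iota_{C_{i}}\) so that \(J_{\beta }^{B_{i}}\) is a projection, \(N=1\) with \(T_{1}\) a projection, and \(L=I\), so that \(\mathcal{F}\) reduces to an intersection of simple convex sets.

```python
import numpy as np

def P_box(x, lo, hi):
    # projection onto a coordinate box = resolvent of the subdifferential
    # of the box's indicator function
    return np.clip(x, lo, hi)

def P_halfspace(x):
    # projection onto D = {x : x1 + x2 <= 1}, playing the role of T_1
    s = x[0] + x[1] - 1.0
    return x if s <= 0.0 else x - (s / 2.0) * np.ones(2)

lam = 0.4                     # lam < 1/(2*||L||^2) with L = I
x = np.array([2.0, -1.0])
for _ in range(5000):
    y = x + lam * (P_halfspace(x) - x)   # relaxed step toward fixed points of T_1
    z1 = P_box(y, 0.0, 2.0)              # backward step for B_1 (A_1 = 0)
    z2 = P_box(y, -1.0, 1.0)             # backward step for B_2 (A_2 = 0)
    x = 0.5 * (z1 + z2)                  # parallel averaging over i = 1, 2
```

With these choices the iterate is driven into \(C_{1}\cap C_{2}\cap D=[0,1]^{2}\cap \{x_{1}+x_{2}\leq 1\}\), matching the weak convergence predicted by Theorem 3.5 for consistent problems.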
We start with some lemmas.
Lemma 3.1
Let \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\). If
- (i) \(\{\beta_{n}\}\subset (0, 2\alpha)\) and
- (ii) \(\{\lambda_{n}\}\subset ( a, \frac{1}{\|L\|^{2}} ) \) for some \(a>0\),
then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) generated by (3.1) are bounded.
Proof
Let \(u\in \mathcal{F}\). We have
By (2.1), we get
It follows from (3.2) and (3.3) that
Hence, from Lemma 2.2, Lemma 2.4, and the control conditions on \(\{\beta_{n}\}\) and \(\{\lambda_{n}\}\), we have
This means that \(\{\|x_{n}-u\|\}\) is a nonincreasing sequence of nonnegative real numbers, hence convergent. Also, from the above inequality, \(\|x_{n}-u\|\) and \(\|y_{n}-u\|\) converge to the same limit. These imply that the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded, and the proof is completed. □
Lemma 3.2
If \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\|^{2}}\), then \(\omega_{w}(Lx _{n})\subset \bigcap_{j=1}^{N}F(T_{j})\).
Proof
By (3.4) we have
and hence,
Therefore, from (3.1), we get
for each \(j=1, \ldots, N\), which implies that
From (2.1), we have
for each \(j=1, \ldots, N\). Thus, by (3.6) and the assumption of \(\{\lambda_{n}\}\), we have
for each \(j=1, \ldots, N\). From Lemma 2.1, we obtain \(\omega_{w}(Lx_{n})\subset F(T_{j})\) for each \(j=1, \ldots, N\). This completes the proof. □
Lemma 3.3
Let \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\). If \(\{\beta _{n}\}\subset (0, 2\alpha)\), then, for each \(i=1, \ldots, M\), we have \(\|x_{n}-z_{i,n}\|\rightarrow 0\).
Proof
Since \(J_{\beta_{n}}^{B_{i}}\) and \(I-\beta_{n}A_{i}\) are firmly nonexpansive, they are both \(\frac{1}{2}\)-averaged and hence \(T_{i,n}:=J_{\beta_{n}}^{B_{i}}(I-\beta_{n}A_{i})\) is \(\frac{3}{4}\)-averaged by Lemma 2.3. Thus, for each \(n\in \mathbb{N}\) and \(1\leq i\leq M\), we can write
where \(S_{i,n}\) is a nonexpansive mapping and \(F(S_{i,n})=F(T_{i,n})=F(J _{\beta_{n}}^{B_{i}}(I-\beta_{n}A_{i}))=(A_{i}+B_{i})^{-1}(0)\) for each \(n\in \mathbb{N}\) and \(1\leq i\leq M\). Then we can rewrite \(x_{n+1}\) as
Let \(u\in \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0)\). We have
and hence,
Then
From (3.9),
By (3.5), we get
Now, from (3.1), (3.10), and (3.11), we obtain
□
Lemma 3.4
Assume that \(\beta_{n}\rightarrow \beta \) for some positive real number β. Then, for each \(i=1, \ldots, M\), we have \(\|x_{n}-J_{ \beta }^{B_{i}}(I-\beta A_{i})x_{n}\|\rightarrow 0\) as \(n\rightarrow \infty \).
Proof
Set \(w_{i,n}=(I-\beta_{n} A_{i})y_{n}\), so \(z_{i,n}=J_{\beta_{n}}^{B _{i}} w_{i,n}\). By Lemma 2.5, we have
On the other hand, we have
Since \(A_{i}\) is inverse strongly monotone and \(\{y_{n}\}\) is bounded, by (3.11) and (3.12) we know that \(\lbrace \|J_{\beta_{n}} ^{B_{i}} w_{i,n}- w_{i,n}\| \rbrace \) is bounded. It follows from \(\beta_{n}\rightarrow \beta \) and (3.13) that
We also have
It follows from (3.12), (3.14), and (3.15) that
for each \(i=1, \ldots, M\). This completes the proof of the lemma. □
Now, the weak convergence of algorithm (3.1) is given by the following theorem.
Theorem 3.5
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(T_{j}:H_{2} \rightarrow H_{2}\), \(j=1, \ldots, N\), be nonexpansive mappings, \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator, \(A_{i}:H_{1} \rightarrow H_{1}\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, and \(B_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), be maximal monotone operators such that \(\mathcal{F}= ( \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) ) \cap L ^{-1} ( \bigcap_{j=1}^{N}F(T_{j}) ) \neq \emptyset \). Let \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\), \(\beta_{n}\in (0, 2 \alpha)\) for each \(n\in \mathbb{N}\) and \(0< a\leq \lambda_{n}\leq b<\frac{1}{2 \|L\|^{2}}\), then the sequence \(\{x_{n}\}\) generated by (3.1) converges weakly to a point \(p\in \mathcal{F}\).
Proof
In Lemma 3.1, we showed that \(\lim_{n\rightarrow \infty }\|x_{n}-u\|\) exists for each \(u\in \mathcal{F}\). Lemmas 3.2 and 3.4 imply that \(\omega_{w}(x_{n})\subset \mathcal{F}\). It then follows from Lemma 2.6 that \(\{x_{n}\}\) converges weakly to a point \(p\in \mathcal{F}\). □
Recall that for a subset C of H, a mapping \(T:C\rightarrow C\) is said to be semi-compact if for any bounded sequence \(\{x_{n}\}\subset C\) such that \(\|x_{n} -Tx_{n}\|\rightarrow 0\) (\(n\rightarrow \infty \)), there exists a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(\{x_{n_{j}}\}\) converges strongly to \(x \in C\).
Strong convergence of algorithm (3.1), under the concept of semi-compact assumption, is given by the following theorem.
Theorem 3.6
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(T_{j}:H_{2} \rightarrow H_{2}\), \(j=1, \ldots, N\), be nonexpansive mappings, \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator, \(A_{i}:H_{1} \rightarrow H_{1}\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, and \(B_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), be maximal monotone operators such that \(\mathcal{F}= ( \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) ) \cap L ^{-1} ( \bigcap_{j=1}^{N}F(T_{j}) ) \neq \emptyset \). Let \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\), \(\beta_{n}\in (0, 2 \alpha)\) for each \(n\in \mathbb{N}\) and \(0< a\leq \lambda_{n}\leq b<\frac{1}{2 \|L\|^{2}}\). If at least one of the maps \(T_{j}\) is semi-compact, then the sequence \(\{x_{n}\}\) generated by (3.1) converges strongly to a point \(p\in \mathcal{F}\).
Proof
Let \(T_{j}\) be semi-compact for some fixed \(j\in \{1, \ldots, N\}\). Since \(\lim_{n\rightarrow \infty }\|T_{j}Lx_{n}-Lx_{n}\|=0\) (see the proof of Lemma 3.2), there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) that converges strongly to some point q. Since \(\{x_{n}\}\) converges weakly to p by Theorem 3.5, we get \(p=q\). On the other hand, \(\lim_{n\rightarrow \infty }\|x_{n}-p\|\) exists and \(\lim_{k\rightarrow \infty }\|x_{n_{k}}-p\|=0\), which shows that \(\{x_{n}\}\) converges strongly to \(p\in \mathcal{F}\). This completes the proof of the theorem. □
3.1 Deduced results of parallel algorithm
One can obtain some results from Theorem 3.5. We give some of them in the following.
If we take \(M=N=1\), we have the following corollary.
Corollary 3.7
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping, \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator, \(A:H_{1}\rightarrow H_{1}\) be an α-inverse strongly monotone operator, and \(B:H_{1}\rightarrow 2^{H_{1}}\) be a maximal monotone operator such that \((A+B)^{-1}(0) \cap L^{-1}(F(T))\neq \emptyset \). Suppose that the sequence \(\{x_{n}\}\) is defined by the following algorithm:
where \(x_{1}\in H_{1}\), \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\| ^{2}}\), and \(\beta_{n}\in (0, 2\alpha)\) for each \(n\in \mathbb{N}\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(p\in (A+B)^{-1}(0) \cap L^{-1}(F(T))\). If T is semi-compact, then the convergence is strong.
From Theorem 3.5, we have the following corollary for the problem of finding a common zero of the sum of α-inverse strongly monotone operators and maximal monotone operators.
Corollary 3.8
Let H be a real Hilbert space, \(A_{i}:H\rightarrow H\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, and \(B_{i}:H\rightarrow 2^{H}\), \(i=1, \ldots, M\), be maximal monotone operators such that \(\mathcal{F}=\bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) \neq \emptyset \) and \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\). Suppose that the sequence \(\{x_{n}\}\) is defined by the following algorithm:
where \(x_{1}\in H\) and \(\beta_{n}\in (0, 2\alpha)\) for each \(n\in \mathbb{N}\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(p\in \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0)\).
In the following corollary, we have a result for finding a common zero of a finite family of maximal monotone operators.
Corollary 3.9
Let H be a real Hilbert space, \(B_{i}:H\rightarrow 2^{H}\), \(i=1, \ldots, M\), be maximal monotone operators such that \(\bigcap_{i=1} ^{M}B_{i}^{-1}(0)\neq \emptyset \). Suppose that the sequence \(\{x_{n}\}\) is defined by the following algorithm:
where \(x_{1}\in H\) and \(\beta_{n}>0\) for each \(n\in \mathbb{N}\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(p\in \bigcap_{i=1} ^{M}B_{i}^{-1}(0)\).
Corollary 3.10
Let H be a real Hilbert space, \(A_{i}:H\rightarrow H\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators such that \(\bigcap_{i=1}^{M}A_{i}^{-1}(0)\neq \emptyset \) and \(\alpha =\min \{ \alpha_{1}, \ldots,\alpha_{M}\}\). Suppose that the sequence \(\{x_{n}\}\) is defined by the following algorithm:
where \(x_{1}\in H\) and \(\beta_{n}\in (0, 2\alpha)\) for each \(n\in \mathbb{N}\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(p\in \bigcap_{i=1}^{M}A_{i}^{-1}(0)\).
Corollary 3.11
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces and \(T_{j}:H_{2}\rightarrow H_{2}\), \(j=1, \ldots, N\), be nonexpansive mappings and \(L:H_{1} \rightarrow H_{2}\) be a bounded linear operator such that \(\mathcal{F}= L^{-1} ( \bigcap_{j=1}^{N}F(T_{j}) ) \neq \emptyset \). Suppose that the sequence \(\{x_{n}\}\) is defined by the following algorithm:
where \(x_{1}\in H_{1}\) and \(0< a\leq \lambda_{n}\leq b< \frac{1}{2\|L\|^{2}}\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(p\in L^{-1} ( \bigcap_{j=1}^{N}F(T_{j}) ) \). If \(T_{j}\) is semi-compact for some \(1\leq j \leq N\), then the convergence is strong.
4 Parallel hybrid algorithm
Notice that, in order to guarantee strong convergence of algorithm (3.1), we imposed an additional semi-compactness assumption on one of the operators \(T_{j}\) (see Theorem 3.6). Next, we propose a hybrid algorithm that yields strong convergence theorems for finding a point in zeros of a finite family of sums of α-inverse strongly monotone operators and maximal monotone operators and fixed points of nonexpansive mappings, without any additional assumptions on the considered operators. To this end, we recall some necessary concepts and facts. Let C be a closed and convex subset of a Hilbert space H. The operator \(P_{C}\) is called a metric projection operator if it assigns to each \(x\in H\) its nearest point \(y\in C\), that is,
$$\begin{aligned} \Vert x-y \Vert =\min_{z\in C} \Vert x-z \Vert . \end{aligned}$$
The element y is called the metric projection of x onto C and is denoted by \(P_{C}x\); it exists and is unique for every \(x\in H\). It is known that the metric projection operator \(P_{C}\) is a firmly nonexpansive mapping. The following characterization is also very useful in our proof.
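The projection and its variational characterization \(\langle x-P_{C}x, P_{C}x-y\rangle \geq 0\) for all \(y\in C\) can be illustrated numerically (the closed unit ball and the sample point below are our own choices):

```python
import numpy as np

def P_ball(x):
    # metric projection onto the closed unit ball C of R^2
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x0 = np.array([3.0, 4.0])
z0 = P_ball(x0)            # nearest point of C to x0

rng = np.random.default_rng(3)
for _ in range(2000):
    y = P_ball(rng.normal(size=2))          # an arbitrary point of C
    # variational characterization of the projection
    assert np.dot(x0 - z0, z0 - y) >= -1e-12
```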
Lemma 4.1
Let H be a Hilbert space and C be a nonempty, closed, and convex subset of H. Then, for all \(x\in H\), the element \(z=P_{C}x\) if and only if
$$\begin{aligned} \langle x-z, z-y\rangle \geq 0, \quad \forall y\in C. \end{aligned}$$
Now we are in a position to introduce the aforementioned algorithm: Let \(x_{1}\in C_{1}=H_{1}\) and \(\{x_{n}\}\) be a sequence generated by the following algorithm:
Theorem 4.2
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(T_{j}:H_{2} \rightarrow H_{2}\), \(j=1, \ldots, N\), be nonexpansive mappings, \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator, \(A_{i}:H_{1} \rightarrow H_{1}\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, and \(B_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), be maximal monotone operators such that \(\mathcal{F}= ( \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) ) \cap L ^{-1} ( \bigcap_{j=1}^{N}F(T_{j}) ) \neq \emptyset \). Let \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\), \(\beta_{n}\in (0, 2 \alpha)\) for each \(n\in \mathbb{N}\) and \(0< a\leq \lambda_{n}\leq b<\frac{1}{2 \|L\|^{2}}\). Then the sequence \(\{x_{n}\}\) generated by (4.1) converges strongly to \(q=P_{\mathcal{F}}(x_{1})\).
Proof
We prove that the sequence \(\{x_{n}\}\) generated by (4.1) is well defined. We first show that \(C_{n}\) is closed and convex for each \(n\in \mathbb{N}\). \(C_{1}=H_{1}\) is closed and convex and suppose that \(C_{n}\) is closed and convex for some \(n>1\). Set
then \(C_{n+1}=C_{n}\cap C_{n}^{1}\cap C_{n}^{2}\). For each \(p\in C _{n}^{1}\), we obtain
This implies that \(C_{n}^{1}\) is closed and convex. In a similar manner, \(C_{n}^{2}\) is closed and convex, and so is \(C_{n+1}=C_{n}\cap C_{n} ^{1}\cap C_{n}^{2}\). By induction, \(C_{n}\) is closed and convex for each \(n\geq 1\).
We show that \(\mathcal{F}\subset C_{n}\) for each \(n\geq 1\). Let \(p\in \mathcal{F}\). From Lemmas 2.2 and 2.4 and (4.1), we have
This together with (3.4) implies that \(p\in C_{n+1}\). Then \(\{x_{n}\}\) is well defined.
Since \(\mathcal{F}\) is nonempty, closed, and convex, there exists a unique element \(q\in \mathcal{F}\subset C_{n}\) such that \(q=P_{ \mathcal{F}}x_{1}\). From \(x_{n+1}=P_{C_{n+1}} (x_{1})\), we get
Since again \(x_{n}=P_{C_{n}} (x_{1})\) and \(x_{n+1}=P_{C_{n+1}} (x_{1}) \in C_{n+1}\subset C_{n} \), we get
Thus, the sequence \(\{\|x_{n}-x_{1}\|\}\) is nondecreasing and bounded above, so \(\lim_{n\rightarrow \infty } \|x _{n}-x_{1}\|\) exists, and the sequence \(\{x_{n}\}\) is bounded. By (3.4), the sequence \(\{y_{n}\}\) is bounded too.
We show that \(\|x_{n+1}-x_{n}\|\rightarrow 0\), \(\|x_{n}-y_{n}\|\rightarrow 0\), and \(\|y_{n}-z_{n}\|\rightarrow 0\). From \(x_{n}=P_{C_{n}} (x_{1})\), \(x_{n+1}=P_{C_{n+1}} (x_{1})\in C_{n+1}\subset C_{n} \), and Lemma 4.1, we obtain
Then we get
and hence,
By \(x_{n+1}=P_{C_{n+1}} (x_{1})\in C_{n+1}\subset C_{n} \) and the definition of \(C_{n}\), we obtain
and then
which implies that
Also, we have
therefore,
Now, we show that \(\omega_{w}(x_{n})\subset \mathcal{F}\). From (3.5), (3.7), and (4.4), we get
for each \(j=1, \ldots, N\). It follows from Lemma 2.1 that \(\omega_{w}(Lx_{n})\subset \bigcap_{j=1}^{N}F(T_{j})\). By arguing similarly to the proof of Lemma 3.4, (4.4), and (4.6), we conclude \(\omega_{w}(x_{n})\subset F(J_{\beta }^{B_{i}}(I-\beta A_{i}))= \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0)\). Therefore,
Finally, we show that the sequence \(\{x_{n}\}\) generated by (4.1) converges strongly to \(q=P_{\mathcal{F}}(x_{1})\). Since \(x_{n}=P_{C_{n}} (x_{1})\) and \(q \in \mathcal{F}\subset C_{n}\), we get
Let \(\{x_{n_{k}}\}\) be an arbitrary subsequence of \(\{x_{n}\}\) converging weakly to \(p\in H_{1}\). Then \(p\in \mathcal{F}\) by (4.8), and hence it follows from the weak lower semi-continuity of the norm that
Thus, we obtain that \(\lim_{k\rightarrow \infty } \|x_{n_{k}}-x _{1}\|=\|p-x_{1}\|=\|q-x_{1}\|\). Using the Kadec–Klee property of \(H_{1}\), we get that \(\lim_{k\rightarrow \infty } x_{n_{k}}=p=q\). Since \(\{x_{n_{k}}\}\) is an arbitrary weakly convergent subsequence of \(\{x_{n}\}\) and \(\lim_{n\rightarrow \infty } \|x_{n}-x_{1}\|\) exists, we conclude that \(\{x_{n}\}\) converges strongly to q. This completes the proof. □
4.1 Deduced results of the parallel hybrid algorithm
One can obtain some results from Theorem 4.2. We give some of them in the following.
If we take \(M=N=1\), we have the following corollary.
Corollary 4.3
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(T:H_{2}\rightarrow H_{2}\) be a nonexpansive mapping, \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator, \(A:H_{1}\rightarrow H_{1}\) be an α-inverse strongly monotone operator, and \(B:H_{1}\rightarrow 2^{H_{1}}\) be a maximal monotone operator such that \(\mathcal{F}=(A+B)^{-1}(0) \cap L^{-1}(F(T))\neq \emptyset \). Suppose that the sequence \(\{x_{n}\}\) is defined by the following algorithm:
where \(x_{1}\in C_{1}=H_{1}\), \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L \|^{2}}\), and \(\beta_{n}\in (0, 2\alpha)\) for each \(n\in \mathbb{N}\). Then the sequence \(\{x_{n}\}\) converges strongly to \(q=P_{\mathcal{F}}(x _{1})\).
From Theorem 4.2, we have the following corollary for the problem of finding a common zero of the sum of α-inverse strongly monotone operators and maximal monotone operators.
Corollary 4.4
Let H be a real Hilbert space, \(A_{i}:H\rightarrow H\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, and \(B_{i}:H\rightarrow 2^{H}\), \(i=1, \ldots, M\), be maximal monotone operators such that \(\mathcal{F}=\bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) \neq \emptyset \) and \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\). Suppose that the sequence \(\{x_{n}\}\) is defined by the following algorithm:
where \(x_{1}\in H\) and \(\beta_{n}\in (0, 2\alpha)\) for each \(n\in \mathbb{N}\). Then the sequence \(\{x_{n}\}\) converges strongly to \(q=P_{\mathcal{F}}(x_{1})\).
5 Applications
5.1 Zeros of maximal monotone operators
In this section, we discuss some applications of the main theorems. Let \(M_{j}:H_{2}\rightarrow 2^{H_{2}}\), \(j=1, \ldots, N\), be maximal monotone operators. Set \(T_{j}=J_{r}^{M_{j}}\), where \(r>0\) and \(j=1, \ldots, N\). We know that \(T_{j}\) is nonexpansive and \(F(T_{j})=M_{j}^{-1}(0)\) for each \(j=1, \ldots, N\). By applying Theorem 3.5, we can get the following results.
Theorem 5.1
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, \(A_{i}:H_{1}\rightarrow H_{1}\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, \(B_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), and \(M_{j}:H_{2}\rightarrow 2^{H_{2}}\), \(j=1, \ldots, N\), be maximal monotone operators, and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(\mathcal{F}= ( \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) ) \cap L^{-1} ( \bigcap_{j=1}^{N}M_{j}^{-1}(0) ) \neq \emptyset \). Let \(x_{1}\in H_{1}\) and the sequence \(\{x_{n}\}\) be generated by the following algorithm:
If \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\), \(\beta_{n} \in (0, 2\alpha)\), and \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\| ^{2}}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges weakly to a point \(p\in \mathcal{F}\).
By Theorem 5.1, we have the following corollary for multiple sets split null point problems.
Corollary 5.2
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, \(B_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), \(M_{j}:H_{2}\rightarrow 2^{H_{2}}\), \(j=1, \ldots, N\), be maximal monotone operators, and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(( \bigcap_{i=1}^{M}B _{i}^{-1}(0) ) \cap L^{-1} ( \bigcap_{j=1}^{N}M_{j}^{-1}(0) ) \neq \emptyset \). Let \(x_{1}\in H_{1}\) and the sequence \(\{x_{n}\}\) be generated by the following algorithm:
If \(\beta_{n}>0\) and \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\|^{2}}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges weakly to a point \(p\in ( \bigcap_{i=1}^{M}B_{i}^{-1}(0) ) \cap L^{-1} ( \bigcap_{j=1}^{N}M_{j}^{-1}(0) ) \).
By applying Theorem 4.2, we have the following theorem.
Theorem 5.3
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, \(A_{i}:H_{1}\rightarrow H_{1}\), \(i=1, \ldots, M\), be \(\alpha_{i}\)-inverse strongly monotone operators, \(B_{i}:H_{1}\rightarrow 2^{H_{1}}\), \(i=1, \ldots, M\), and \(M_{j}:H_{2}\rightarrow 2^{H_{2}}\), \(j=1, \ldots, N\), be maximal monotone operators, and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(\mathcal{F}= ( \bigcap_{i=1}^{M}(A_{i}+B_{i})^{-1}(0) ) \cap L^{-1} ( \bigcap_{j=1}^{N}M_{j}^{-1}(0) ) \neq \emptyset \). Let \(x_{1}\in H_{1}\) and the sequence \(\{x_{n}\}\) be generated by the following algorithm:
If \(\alpha =\min \{\alpha_{1}, \ldots,\alpha_{M}\}\), \(\beta_{n} \in (0, 2\alpha)\), and \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\| ^{2}}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(q=P_{\mathcal{F}}(x_{1})\).
5.2 Multiple set split convex feasibility problems
Let \(f\colon H\rightarrow \mathbb{R}\cup \{+\infty \}\) be a proper, convex, and lower semi-continuous function. It is well known that the subdifferential \(\partial f\colon H \rightarrow 2^{H}\), which is defined as
$$\begin{aligned} \partial f(x)=\bigl\{ z\in H : f(y)\geq f(x)+\langle z, y-x\rangle, \forall y\in H\bigr\} , \quad x\in H, \end{aligned}$$
is a maximal monotone operator. In particular, let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let us consider the indicator function of C, denoted by \(\iota_{C}\), which is defined as
$$\begin{aligned} \iota_{C}(x)= \textstyle\begin{cases} 0, & x\in C, \\ +\infty, & x\notin C. \end{cases}\displaystyle \end{aligned}$$
We know that \(\iota_{C}\) is a proper, convex, and lower semi-continuous function on H, and it follows that the subdifferential \(\partial \iota_{C}\) of \(\iota_{C}\) is a maximal monotone operator. Furthermore, we get \(z=J_{r}^{\partial \iota_{C}}x\) if and only if \(z=P_{C}(x)\), where \(x\in H\) and \(J_{r}^{\partial \iota_{C}}=(I+r\partial \iota_{C})^{-1}\) for each \(r>0\). Using these facts, by Theorems 3.5 and 4.2, we have the following corollaries for the multiple set split convex feasibility problem in Hilbert spaces.
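The identification \(J_{r}^{\partial \iota_{C}}=P_{C}\) admits a small numerical check (the interval \(C=[-1,2]\) is our illustrative choice): \(z=P_{C}(x)\) satisfies the resolvent inclusion \(x-z\in r\partial \iota_{C}(z)\), i.e. \(z\in C\) and \(\langle x-z, y-z\rangle \leq 0\) for all \(y\in C\), for every \(r>0\).

```python
import numpy as np

a, b = -1.0, 2.0          # C = [a, b]; the resolvent is the same for every r > 0

def proj_C(x):
    return float(np.clip(x, a, b))

rng = np.random.default_rng(4)
for _ in range(1000):
    x = 3.0 * rng.normal()
    z = proj_C(x)
    assert a <= z <= b                      # z lies in C
    for y in (a, b):                        # linear in y, so endpoints suffice
        assert (x - z) * (y - z) <= 1e-12   # <x - z, y - z> <= 0 for all y in C
```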
Corollary 5.4
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, \(C_{i}\subset H_{1}\), \(i=1, \ldots, M\), \(D_{j}\subset H_{2}\), \(j=1, \ldots, N\), be nonempty, closed, and convex, and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(( \bigcap_{i=1}^{M}C_{i} ) \cap L^{-1} ( \bigcap_{j=1}^{N}D_{j} ) \neq \emptyset \). Let \(x_{1}\in H_{1}\) and the sequence \(\{x_{n}\}\) be generated by the following algorithm:
If \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\|^{2}}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges weakly to a point \(p\in ( \bigcap_{i=1}^{M}C_{i} ) \cap L^{-1} ( \bigcap_{j=1}^{N}D _{j} ) \).
Corollary 5.5
Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces, \(C_{i}\subset H_{1}\), \(i=1, \ldots, M\), \(D_{j}\subset H_{2}\), \(j=1, \ldots, N\), be nonempty, closed, and convex, and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator such that \(\mathcal{F}= ( \bigcap_{i=1}^{M}C_{i} ) \cap L^{-1} ( \bigcap_{j=1}^{N}D_{j} ) \neq \emptyset \). Let \(x_{1}\in H_{1}\) and the sequence \(\{x_{n}\}\) be generated by the following algorithm:
If \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\|^{2}}\) for each \(n\in \mathbb{N}\), then \(\{x_{n}\}\) converges strongly to \(q=P_{ \mathcal{F}}(x_{1})\).
5.3 Multiple sets split equilibrium problems
Now, we apply Theorem 3.5 to obtain a common solution of multiple sets split equilibrium problems. In this respect, let C be a nonempty closed convex subset of a Hilbert space \(H_{1}\) and \(F\colon C\times C\rightarrow \mathbb{R}\) be a bifunction. The equilibrium problem for the bifunction F is the problem of finding a point \(z\in C\) such that
$$F(z,y)\geq 0, \quad \forall y\in C. \tag{5.2}$$
The set of solutions of equilibrium problem (5.2) is denoted by \(EP(F)\). The bifunction \(F\colon C\times C\rightarrow \mathbb{R}\) is called monotone if \(F(x,y)+F(y,x)\leq 0\) for all \(x,y\in C\). For finding a solution of equilibrium problem (5.2), we assume that F satisfies the following properties:
- (A1) \(F(x, x)=0\) for all \(x\in C\);
- (A2) F is monotone;
- (A3) for each \(x, y, z \in C\), \(\limsup_{t\downarrow 0} F(tz+(1-t)x, y) \leq F(x, y)\);
- (A4) for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and lower semi-continuous.
Then we have the following lemma which can be found in [40, 41].
Lemma 5.6
Let C be a nonempty closed convex subset of a Hilbert space \(H_{1}\) and \(F\colon C\times C\rightarrow \mathbb{R}\) be a bifunction satisfying properties (A1)–(A4). Let r be a positive real number and \(x\in H_{1}\). Then there exists \(z\in C\) such that
$$F(z,y)+\frac{1}{r}\langle y-z, z-x\rangle \geq 0, \quad \forall y\in C.$$
Further, define
$$T_{r}(x)= \biggl\{ z\in C : F(z,y)+\frac{1}{r}\langle y-z, z-x\rangle \geq 0, \forall y\in C \biggr\}$$
for all \(r >0\) and \(x\in H_{1}\). Then the following hold:
- (a) \(T_{r}\) is single-valued;
- (b) \(T_{r}\) is firmly nonexpansive; that is,
$$\begin{aligned} \Vert T_{r}x-T_{r}y \Vert ^{2}\leq \langle T_{r}x-T_{r}y, x-y\rangle, \quad \forall x,y\in H_{1}; \end{aligned}$$
- (c) \(F(T_{r}) = EP (F)\);
- (d) \(EP(F)\) is closed and convex.
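Lemma 5.6 can be made concrete in a special case: for the monotone bifunction \(F(x,y)=\langle Ax, y-x\rangle \) with A positive semidefinite and \(C=H_{1}\), the defining inequality forces \(Az+\frac{1}{r}(z-x)=0\), so \(T_{r}\) reduces to the linear resolvent \((I+rA)^{-1}\). The sketch below (an illustration under these assumptions, not the general \(T_{r}\)) checks properties (b) and (c) numerically.

```python
import numpy as np

# Special case: F(x, y) = <Ax, y - x> on C = H1 = R^2 with A monotone (PSD).
# The inequality F(z,y) + (1/r)<y - z, z - x> >= 0 for all y forces
# Az + (z - x)/r = 0, i.e. T_r x = (I + rA)^{-1} x.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
r = 0.5
I = np.eye(2)

def T_r(x):
    # (a) single-valued: the linear system has a unique solution
    return np.linalg.solve(I + r * A, x)

x, y = np.array([4.0, -2.0]), np.array([1.0, 1.0])
# (b) firm nonexpansiveness: ||T_r x - T_r y||^2 <= <T_r x - T_r y, x - y>
d = T_r(x) - T_r(y)
print(d @ d <= d @ (x - y))                          # True
# (c) Fix(T_r) = EP(F) = {z : Az = 0} = {0} in this example
print(np.allclose(T_r(np.zeros(2)), np.zeros(2)))    # True
```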
Let \(C_{i}\), \(i=1, \ldots, M\), and \(D_{j}\), \(j=1, \ldots, N\), be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(f_{i}\colon C_{i}\times C_{i} \rightarrow \mathbb{R}\), \(i=1, \ldots, M\), and \(g_{j}\colon D_{j} \times D_{j}\rightarrow \mathbb{R}\), \(j=1, \ldots, N\), be bifunctions which satisfy properties (A1)–(A4), and \(L:H_{1}\rightarrow H_{2}\) be a bounded linear operator. From Lemma 5.6 there exist the sequences \(\{z_{i,n}\}\) of \(H_{1}\) and \(\{u_{j,n}\}\) of \(H_{2}\) satisfying
Therefore, by applying Theorem 3.5, we have the following theorem for the multiple sets split equilibrium problem.
Theorem 5.7
Let \(C_{i}\), \(i=1, \ldots, M\), and \(D_{j}\), \(j=1, \ldots, N\), be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and let \(f_{i}\colon C_{i}\times C_{i} \rightarrow \mathbb{R}\), \(i=1, \ldots, M\), and \(g_{j}\colon D_{j} \times D_{j}\rightarrow \mathbb{R}\), \(j=1, \ldots, N\), be bifunctions satisfying properties (A1)–(A4). Suppose that \(L:H_{1}\rightarrow H_{2}\) is a bounded linear operator such that \(\mathcal{F}= ( \bigcap_{i=1} ^{M}EP(f_{i}) ) \cap L^{-1} ( \bigcap_{j=1}^{N}EP(g_{j}) ) \neq \emptyset \). If \(\beta_{n}>0\), \(0< a\leq \lambda_{n}\leq b<\frac{1}{2 \|L\|^{2}}\) for each \(n\in \mathbb{N}\), and r is a positive real number, then the sequence \(\{x_{n}\}\) generated by (5.3) converges weakly to a solution of the multiple sets split equilibrium problem.
We also have the following strong convergence theorem for finding a solution of the multiple sets split equilibrium problem.
Theorem 5.8
Let \(C_{i}\), \(i=1, \ldots, M\), and \(D_{j}\), \(j=1, \ldots, N\), be nonempty, closed, and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and let \(f_{i}\colon C_{i}\times C_{i} \rightarrow \mathbb{R}\), \(i=1, \ldots, M\), and \(g_{j}\colon D_{j} \times D_{j}\rightarrow \mathbb{R}\), \(j=1, \ldots, N\), be bifunctions satisfying properties (A1)–(A4). Suppose that \(L:H_{1}\rightarrow H_{2}\) is a bounded linear operator such that \(\mathcal{F}= ( \bigcap_{i=1} ^{M}EP(f_{i}) ) \cap L^{-1} ( \bigcap_{j=1}^{N}EP(g_{j}) ) \neq \emptyset \). Suppose that \(x_{1}\in C_{1}=H_{1}\) and the sequence \(\{x_{n}\}\) is generated by the following algorithm:
If \(\beta_{n}>0\), \(0< a\leq \lambda_{n}\leq b<\frac{1}{2\|L\|^{2}}\) for each \(n\in \mathbb{N}\) and r is a positive real number, then the sequence \(\{x_{n}\}\) converges strongly to \(q=P_{\mathcal{F}}(x_{1})\).
6 Numerical experiments
In this section, we present some numerical examples and discuss good choices of the step size parameters \(\beta_{n}\) and \(\lambda_{n}\) satisfying the control conditions in Theorem 3.5.
Let \(H_{1}=\mathbb{R}^{2}\) and \(H_{2}=\mathbb{R}^{3}\) be equipped with the Euclidean norm. Let \(a_{1}:= \Bigl ({\scriptsize\begin{matrix}{} -\frac{2}{\sqrt{5}} \cr -\frac{1}{\sqrt{5}} \end{matrix}} \Bigr ) \), \(a_{2}:= \Bigl ({\scriptsize\begin{matrix}{} -\frac{1}{\sqrt{2}} \cr -\frac{1}{\sqrt{2}} \end{matrix}} \Bigr ) \), and \(u:= \Bigl ({\scriptsize\begin{matrix}{} -1 \cr -1 \end{matrix}} \Bigr ) \) be fixed in \(H_{1}\), and \(\gamma_{1}:=\cos \frac{7\pi }{18}\) and \(\gamma_{2}:=\cos \frac{\pi }{3}\) be scalars. Set \(\tilde{C}_{1}:=C _{1}+u\) and \(\tilde{C}_{2}:=C_{2}+u\), where \(C_{1}\) and \(C_{2}\) are the following closed convex ice-cream cones in \(H_{1}\):
We will consider 1-ism operators \(P_{\tilde{C}_{1}}\) and \(P_{ \tilde{C}_{2}}\), where \(\tilde{C}_{1}\) and \(\tilde{C}_{2}\) are defined by the above settings.
Next, for each \(x:= \Bigl ({\scriptsize\begin{matrix}{} x_{1} \cr x_{2} \end{matrix}} \Bigr ) \in H_{1} \), we are also concerned with the following two norms:
Consider a function \(f:H_{1}\rightarrow \mathbb{R}\), which is defined by
We know that f is a convex function, and the subdifferential of f is
Moreover, since f is a convex function, we know that \(\partial f( \cdot)\) must be a maximal monotone operator, and for each \(\lambda >0\), we have
where \(\operatorname{sgn}(\cdot)\) denotes the signum function.
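The displayed definition of f and of its resolvent is not reproduced here. As an illustration only, for the common choice \(f(x)=\|x\|_{1}\) (an assumption, not necessarily the paper's f), the resolvent \(J_{\lambda }^{\partial f}=(I+\lambda \partial f)^{-1}\) is exactly the componentwise soft-thresholding operator built from sgn:

```python
import numpy as np

# Illustration only: if f(x) = ||x||_1 (an assumed example), then
# (I + lam * subdifferential f)^{-1} acts componentwise as
#   [J_lam x]_i = sgn(x_i) * max(|x_i| - lam, 0).
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([1.0, -0.2])
print(soft_threshold(x, 0.5))   # componentwise: [0.5, 0.0]
```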
On the other hand, let \(\tilde{x}_{1}:= \Bigl ({\scriptsize\begin{matrix}{} 1 \cr 2 \cr -1 \end{matrix}} \Bigr ) \), \(\tilde{x}_{2}:= \Bigl ({\scriptsize\begin{matrix}{} -1 \cr 1 \cr -1 \end{matrix}} \Bigr ) \), and \(\tilde{x}_{3}:= \Bigl ({\scriptsize\begin{matrix}{} 0 \cr -1 \cr 0 \end{matrix}} \Bigr ) \) be three fixed vectors in \(H_{2}\). We consider a nonempty convex subset \(Q_{1}\cap Q_{2}\cap Q_{3}\) of \(H_{2}\), where \(Q_{1}:=\{x \in H_{2}:\Vert \tilde{x}_{1}-x\Vert \leq 5\}\), \(Q_{2}:=\{x\in H_{2}: \langle \tilde{x}_{2},x\rangle \leq 1\}\), and \(Q_{3}:= \{x\in H _{2}:\langle \tilde{x}_{3},x\rangle \leq -\frac{1}{2} \}\). We notice that \(F(P_{Q_{1}})\cap F(P_{Q_{2}})\cap F(P_{Q_{3}})= Q_{1}\cap Q_{2} \cap Q_{3}\).
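The three sets above admit closed-form metric projections: \(Q_{1}\) is a closed ball and \(Q_{2}\), \(Q_{3}\) are half-spaces. A short sketch of the two formula types used (with the paper's \(\tilde{x}_{1}\), \(\tilde{x}_{2}\); \(P_{Q_{3}}\) is analogous to \(P_{Q_{2}}\)):

```python
import numpy as np

x1t = np.array([1.0, 2.0, -1.0])   # \tilde{x}_1 (center of the ball Q1)
x2t = np.array([-1.0, 1.0, -1.0])  # \tilde{x}_2 (normal vector of Q2)

def P_Q1(x):
    # projection onto the closed ball {x : ||x1t - x|| <= 5}
    d = x - x1t
    n = np.linalg.norm(d)
    return x if n <= 5.0 else x1t + 5.0 * d / n

def P_Q2(x):
    # projection onto the half-space {x : <x2t, x> <= 1}
    excess = max(0.0, x2t @ x - 1.0)
    return x - (excess / (x2t @ x2t)) * x2t

y = np.array([10.0, 10.0, 10.0])
print(np.linalg.norm(x1t - P_Q1(y)))   # ~5.0: lands on the sphere
z = np.array([-10.0, 10.0, -10.0])
print(x2t @ P_Q2(z))                   # ~1.0: lands on the bounding hyperplane
```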
Now, let us consider the \(3\times 2\) matrix \(L:= \Bigl [{\scriptsize\begin{matrix}{} 1 & 0 \cr 2 & -2 \cr 0 & 2 \end{matrix}} \Bigr ] \). Then L is a bounded linear operator from \(H_{1}\) into \(H_{2}\) with \(\Vert L\Vert \approx 3.282073\).
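The reported operator norm can be checked numerically: \(\Vert L\Vert \) is the largest singular value of the matrix, i.e. \(\sqrt{\lambda_{\max}(L^{T}L)}=\sqrt{(13+\sqrt{73})/2}\).

```python
import numpy as np

L = np.array([[1.0, 0.0], [2.0, -2.0], [0.0, 2.0]])
# operator norm = largest singular value = sqrt(lambda_max(L^T L))
op_norm = np.linalg.norm(L, 2)
print(round(op_norm, 6))   # 3.282073
```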
Based on the above settings, we will present some numerical experiments to show the efficiency of the constructed algorithm (3.1). That is, we are going to show that algorithm (3.1) converges to a point \(p\in H_{1}\) such that
and in this experiment we use the stopping criterion \(\frac{\|x_{n+1}-x_{n}\|}{\max \{1,\|x_{n}\|\}}\leq 1.0e^{-06}\).
We will consider the following cases of the step size parameters \(\beta_{n}\) and \(\lambda_{n}\) with the initial vectors \(\Bigl ({\scriptsize\begin{matrix}{} 0 \cr 0 \end{matrix}} \Bigr ) \), \(\Bigl ({\scriptsize\begin{matrix}{} 1 \cr 1 \end{matrix}} \Bigr ) \), \(\Bigl ({\scriptsize\begin{matrix}{} 1 \cr -1 \end{matrix}} \Bigr ) \), \(\Bigl ({\scriptsize\begin{matrix}{} -1 \cr 1 \end{matrix}} \Bigr ) \), and \(\Bigl ({\scriptsize\begin{matrix}{} -1 \cr -1 \end{matrix}} \Bigr ) \) in \(H_{1}\):
- Case 1: \(\beta_{n}=1.0e^{-03}+\frac{1}{100n}\), \(\lambda_{n}=1.0e^{-03}+\frac{1}{100n}\).
- Case 2: \(\beta_{n}=1.0e^{-03}+\frac{1}{100n}\), \(\lambda_{n}=\frac{1}{4\|L\|^{2}}\).
- Case 3: \(\beta_{n}=1.0e^{-03}+\frac{1}{100n}\), \(\lambda_{n}=0.046-\frac{1}{100n}\).
- Case 4: \(\beta_{n}=1\), \(\lambda_{n}=1.0e^{-03}+\frac{1}{100n}\).
- Case 5: \(\beta_{n}=1\), \(\lambda_{n}=\frac{1}{4\|L\|^{2}}\).
- Case 6: \(\beta_{n}=1\), \(\lambda_{n}=0.046-\frac{1}{100n}\).
- Case 7: \(\beta_{n}=1.999-\frac{1}{100n}\), \(\lambda_{n}=1.0e^{-03}+\frac{1}{100n}\).
- Case 8: \(\beta_{n}=1.999-\frac{1}{100n}\), \(\lambda_{n}=\frac{1}{4\|L\|^{2}}\).
- Case 9: \(\beta_{n}=1.999-\frac{1}{100n}\), \(\lambda_{n}=0.046-\frac{1}{100n}\).
From Tables 1, 2, and 3, we observe that, for each initial point, the step size \(\lambda_{n}=0.046-\frac{1}{100n}\) provides a faster convergence rate than the other choices, while the step size parameter \(\beta_{n}\) seems to have less impact on the speed with which algorithm (3.1) approaches the solution set (6.1).
7 Conclusions
In this paper, we have presented two iterative algorithms, (3.1) and (4.1), for approximating a solution of the split feasibility problem on zeros of a finite sum of monotone operators and fixed points of a finite family of nonexpansive mappings. Under some mild conditions, we proved convergence theorems for these algorithms and then provided several corollaries and applications of the main results. We point out that the construction of algorithm (3.1) is less complicated than that of (4.1); however, algorithm (3.1) requires some additional assumptions to guarantee strong convergence, while algorithm (4.1) does not (see Theorem 3.6 and Theorem 4.2). This observation suggests, as future work, an analysis of the rate of convergence of the suggested algorithms.
References
Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367–426 (1996)
Censor, Y.: Iterative methods for the convex feasibility problem. North-Holl. Math. Stud. 87, 83–91 (1984)
Combettes, P.L.: The convex feasibility problem in image recovery. In: Hawkes, P. (ed.) Advances in Imaging and Electron Physics, vol. 95, pp. 155–270. Academic Press, New York (1996)
Combettes, P.L.: The foundations of set theoretic estimation. Proc. IEEE 81(2), 182–208 (1993)
Deutsch, F.: The method of alternating orthogonal projections. In: Singh, S.P. (ed.) Approximation Theory, Spline Functions and Applications, pp. 105–121. Kluwer Academic, The Netherlands (1992)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)
Stark, H. (ed.): Image Recovery Theory and Applications. Academic Press, Orlando (1987)
Yao, Y., Liou, Y.C., Postolache, M.: Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization. https://doi.org/10.1080/02331934.2017.1390747
Yao, Y., Postolache, M., Qin, X., Yao, J.C.: Iterative algorithms for the proximal split feasibility problem. U. Politeh. Buch. Ser. A. (in press)
Yao, Y., Leng, L., Postolache, M., Zheng, X.: Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 18(5), 875–882 (2017)
Yao, Y., Agarwal, R.P., Postolache, M., Liu, Y.C.: Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, Article ID 183 (2014)
Yao, Y., Postolache, M., Liou, Y.C.: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, Article ID 201 (2013)
Ansari, Q.H., Nimana, N., Petrot, N.: Split hierarchical variational inequality problems and related problems. Fixed Point Theory Appl. 2014, Article ID 208 (2014)
Suwannaprapa, M., Petrot, N., Suantai, S.: Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl. 2017, 6 (2017)
Moudafi, A.: On the regularization of the sum of two maximal monotone operators. Nonlinear Anal., Theory Methods Appl. 42(7), 1203–1208 (2000)
Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155(2), 447–454 (2003)
Chang, S.S., Lee, H.J., Chan, C.K.: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal., Theory Methods Appl. 70(9), 3307–3319 (2009)
Dadashi, V.: Shrinking projection algorithms for the split common null point problem. Bull. Aust. Math. Soc. 96, 299–306 (2017)
Kang, S., Cho, S., Liu, Z.: Convergence of iterative sequences for generalized equilibrium problems involving inverse-strongly monotone mappings. J. Inequal. Appl. 2010(1), 827082 (2010)
Lv, S.: Generalized systems of variational inclusions involving (A, η)-monotone mappings. Adv. Fixed Point Theory 1(1), 15 (2011)
Nadezhkina, N., Takahashi, W.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128(1), 191–201 (2006)
Qin, X., Cho, Y.J., Kang, S.M.: Convergence theorems of common elements for equilibrium problems and fixed point problems in Banach spaces. J. Comput. Appl. Math. 225(1), 20–30 (2009)
Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Rev. Fr. Inform. Rech. Oper. 3, 154–158 (1970)
Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)
Dadashi, V., Khatibzadeh, H.: On the weak and strong convergence of the proximal point algorithm in reflexive Banach spaces. Optimization 66(9), 1487–1494 (2017)
Dadashi, V., Postolache, M.: Hybrid proximal point algorithm and applications to equilibrium problems and convex programming. J. Optim. Theory Appl. 174, 518–529 (2017)
Moudafi, A., Thera, M.: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94(2), 425–448 (1997)
Qin, X., Cho, S.Y., Wang, L.: A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, Article ID 75 (2014)
Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
Cho, S.Y., Li, W., Kang, S.M.: Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013(1), 199 (2013)
Wu, C., Liu, A.: Strong convergence of a hybrid projection iterative algorithm for common solutions of operator equations and of inclusion problems. Fixed Point Theory Appl. 2012(1), 90 (2012)
Zhang, M.: Iterative algorithms for common elements in fixed point sets and zero point sets with applications. Fixed Point Theory Appl. 2012(1), 21 (2012)
Shimoji, K., Takahashi, W.: Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwan. J. Math. 5(2), 387–404 (2001)
Suzuki, T.: Strong convergence theorems for an infinite family of nonexpansive mappings in general Banach spaces. Fixed Point Theory Appl. 1, 103–123 (2005)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
Boikanyo, O.A.: The viscosity approximation forward-backward splitting method for zeros of the sum of monotone operators. Abstr. Appl. Anal. 2016, Article ID 2371857 (2016)
Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)
Aoyama, K., Kimura, Y., Takahashi, W., Toyoda, M.: On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 8, 471–489 (2007)
Bruck, R.E., Passty, G.B.: Almost convergence of the infinite product of resolvents in Banach spaces. Nonlinear Anal. 3, 279–282 (1979)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
Acknowledgements
The authors thank the anonymous referees for their remarkable comments and suggestions to improve this paper.
Funding
This work is partially supported by the Thailand Research Fund under the project RSA5880028.
Author information
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Petrot, N., Suwannaprapa, M. & Dadashi, V. Convergence theorems for split feasibility problems on a finite sum of monotone operators and a family of nonexpansive mappings. J Inequal Appl 2018, 205 (2018). https://doi.org/10.1186/s13660-018-1799-3