January 3, 2017

International Journal of Computer Mathematics

GCOM-R1

To appear in the International Journal of Computer Mathematics Vol. 00, No. 00, Month 20XX, 1–22

Two hybrid algorithms for solving split equilibrium problems

Dang Van Hieu∗

Department of Mathematics, Vietnam National University, Hanoi, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

(Received 00 Month 20XX; accepted 00 Month 20XX)

The paper considers split equilibrium problems in Hilbert spaces and proposes two hybrid algorithms for finding solution approximations. Three methods, the diagonal subgradient method, the projection method and the proximal method, are combined to design the algorithms. Using the diagonal subgradient method for equilibrium problems allows us to reduce complex computations on bifunctions and feasible sets. The first algorithm uses two projections onto the feasible set and requires prior knowledge of the operator norm, while the second algorithm is computationally simpler: only one projection onto the feasible set needs to be implemented, and no information about the operator norm is necessary to construct solution approximations. Strong convergence theorems are established under suitable assumptions imposed on the equilibrium bifunctions. The computational performance of the proposed algorithms relative to existing methods is also illustrated by several preliminary numerical experiments.

Keywords: Split equilibrium problem; Split inverse problem; Projection method; Diagonal subgradient method; Proximal method.

2010 AMS Subject Classification: 90C33; 68W10; 65K10

1. Introduction

The Split Feasibility Problem (SFP) was first introduced by Censor and Elfving [4] in Euclidean spaces for modelling a class of inverse problems which arise in signal processing, specifically in phase retrieval and other image restoration problems, see, e.g., [22, 39]. Later, it was found that the SFP can also be used to model intensity-modulated radiation therapy [5] and problems in many other fields [2, 3, 6]. An archetypal model presented in [7, Sect. 2] is the Split Inverse Problem (SIP), in which a bounded linear operator A maps a space X to another space Y and two inverse problems IP1 and IP2 are installed in X and Y, respectively. Mathematically, the SIP is stated as follows:

find a point x∗ ∈ X that solves IP1 such that the point y∗ = Ax∗ ∈ Y solves IP2.   (SIP)

In this framework, many models of inverse problems can be obtained by choosing different inverse problems for IP1 and IP2. Notable SIPs include the split fixed point problem, the split optimization problem and the split variational inequality problem, where IP1 and IP2 are, respectively, fixed point problems, optimization problems or variational inequality problems. These problems have been widely studied in recent years, both theoretically and algorithmically, in Euclidean spaces as well as in infinite-dimensional Hilbert spaces, see, e.g., [7, 13, 32–34, 40]. It is natural to study the SIP with other inverse models for IP1 and IP2 which appear frequently in mathematical fields, for instance, equilibrium problems [1, 36]. This paper considers the SIP with IP1 and IP2 being equilibrium problems in a Hilbertian framework; the resulting problem is called the Split Equilibrium Problem (SEP) [14–16, 32]. The SEP is stated as follows:

Problem SEP. Let H1, H2 be two real Hilbert spaces and C, Q be two nonempty closed convex subsets of H1, H2, respectively. Let A : H1 → H2 be a bounded linear operator. Let f : C × C → ℜ and F : Q × Q → ℜ be two bifunctions with f(x, x) = 0 for all x ∈ C and F(u, u) = 0 for all u ∈ Q. The SEP is:

Find x∗ ∈ C such that f(x∗, y) ≥ 0, ∀y ∈ C, and u∗ = Ax∗ ∈ Q solves F(u∗, u) ≥ 0, ∀u ∈ Q.   (SEP)

∗ Corresponding author. Email: [email protected]

In the case when F = 0 and Q = H2, the SEP becomes the following equilibrium problem (EP) [1, 36]:

Find x∗ ∈ C such that f(x∗, y) ≥ 0, ∀y ∈ C.   (EP)

The solution set of an EP for a bifunction f on C is denoted by EP(f, C). It is well known that the EP generalizes many mathematical models such as fixed point problems, optimization problems and variational inequality problems, see, e.g., [1, 36]. The EP has been studied widely in recent years and many algorithms have been proposed for solving it, for instance, in [23–26, 28–30, 43]. The following simple model, presented in [16], can be considered as an extension related to the SEP. Suppose that there are m companies that produce a commodity. Let x denote the vector whose entry xj stands for the quantity of the commodity produced by company j, and let Cj be the strategy set of company j. Then the strategy set of the model is C := C1 × C2 × ... × Cm. Assume that the price pj(s) is a decreasing affine function of s = Σ_{j=1}^{m} xj, i.e., pj(s) = αj − βj s, where αj > 0, βj > 0. Then the profit made by company j is given by fj(x) = pj(s)xj − cj(xj), where cj(xj) is the tax for generating xj. In fact, each company seeks to maximize its profit by choosing the corresponding production level under the presumption that the production of the other companies is a parametric input. A commonly used approach to this model is based upon the famous Nash equilibrium concept. We recall that a point x∗ ∈ C = C1 × C2 × · · · × Cm is an equilibrium point of the model if

fj(x∗) ≥ fj(x∗[xj]) ∀xj ∈ Cj, ∀j = 1, 2, . . . , m,


where the vector x∗[xj] stands for the vector obtained from x∗ by replacing x∗j with xj. Define the bifunction f by

f(x, y) := ψ(x, y) − ψ(x, x) with ψ(x, y) := −Σ_{j=1}^{m} fj(x[yj]).

The problem of finding a Nash equilibrium point of the model can be formulated as:

Find x∗ ∈ C such that f(x∗, y) ≥ 0 ∀y ∈ C.   (EP1)

Numerical solutions to this equilibrium model have been studied and presented in an early work [10] as well as in several recent works, for instance [16, 18, 19, 21, 37]. Now, we extend this equilibrium model to a split equilibrium model by additionally assuming that, to produce electricity, the generating units use some materials. Let alj denote the quantity of material l (l = 1, . . . , k) needed to produce one unit of electricity by generating unit j (j = 1, . . . , m). Thus, the total quantity of material l used for producing x is ul = Σ_{j=1}^{m} alj xj. In practice, the total quantity of material l lies in a certain strategy set Ql, i.e., ul ∈ Ql, and using material l for production may pollute the environment, for which the companies have to pay an environmental fee gl(ul). Then the total environmental fee of the model is g(u) = Σ_{l=1}^{k} gl(ul), where u = (u1, . . . , uk)^T ∈ ℜ^k. Now, we set Q = Q1 × . . . × Qk, A = (alj)_{k×m} (and so u = Ax), and F(u, v) = g(v) − g(u) for all u, v ∈ Q. Of course, we would like to minimize the total environmental fee which the companies have to pay. This means that

Find u∗ ∈ Q such that F(u∗, v) ≥ 0 ∀v ∈ Q.   (EP2)

The problem of finding a solution point x∗ of EP1 such that u∗ = Ax∗ solves EP2 is an instance of the SEP considered in this paper. It is clear that the SEP splits equilibrium solutions between two different subsets of spaces, in the sense that the image of a solution point of one problem, under a given bounded linear operator, is a solution point of another problem. In recent years, the SEP has received attention from several authors, for instance, [11, 12, 14–16, 27, 32]. Most of the proposed algorithms have used the proximal method (i.e., the resolvent of a monotone bifunction [9]) to construct solution approximations to SEPs. It is also emphasized that the proximal method is often used for monotone equilibrium problems. Very recently, the author in [16] introduced some algorithms for SEPs in Hilbert spaces which combine the extragradient-like method with the proximal method. The main advantages of the extragradient-like method are that it can be used for pseudomonotone bifunctions and its subproblems can be solved numerically more easily than the subproblems in the proximal method; see, for instance, [16–21]. However, solving the optimization subproblems and finding the shrinking projections in [16] can still be costly if the bifunctions and feasible sets have complex structures. In this paper, we first propose an algorithm, called the projected subgradient-proximal algorithm, for solving SEPs in Hilbert spaces. The algorithm combines the diagonal subgradient method, the projection method and the proximal method. Compared with the algorithms in [16], the proposed algorithm has an elegant and simple structure, and the metric projection, in general, is simpler than solving optimization programs over the same feasible set. However, this algorithm requires two projections onto the feasible set per iteration. Moreover, the step-size in the second projection is chosen in a


real interval which depends on the norm of the operator. Finding, or at least estimating, the norm of an operator is in general not an easy task in Hilbert spaces. We then propose a modification of the first algorithm in which the second projection is performed onto the feasible set while the first projection is onto a specifically constructed half-space, and thus is explicit. Besides, a way of selecting an adaptive step-size in the second projection allows us to avoid prior knowledge of the operator norm. Strong convergence theorems are proved under suitable assumptions. The results in this paper develop those presented in [16, 31, 35, 38, 41, 44]. Finally, several preliminary numerical experiments are performed to illustrate the computational performance of the proposed algorithms in comparison with the extragradient-proximal algorithms in [16]. This paper is organized as follows: In Sect. 2, we recall some definitions and preliminary results for further use. Sects. 3 and 4 deal with proposing the algorithms and proving their convergence. Finally, in Sect. 5 we perform several numerical experiments to illustrate the efficiency of our algorithms in comparison with others.

2. Preliminaries

Let C be a nonempty closed convex subset of a real Hilbert space H. The metric projection PC : H → C is defined by

PC(x) = arg min {||y − x|| : y ∈ C}.

Since C is nonempty, closed and convex, PC(x) exists and is unique. From the definition of PC, it is easy to show that PC has the following characteristic properties.

Lemma 2.1 Let PC : H → C be the metric projection from H onto C. Then

(i) PC is firmly nonexpansive, i.e.,
⟨PC(x) − PC(y), x − y⟩ ≥ ||PC(x) − PC(y)||², ∀x, y ∈ H.
(ii) For all x ∈ C, y ∈ H,
||x − PC(y)||² + ||PC(y) − y||² ≤ ||x − y||².
(iii) z = PC(x) if and only if
⟨x − z, y − z⟩ ≤ 0, ∀y ∈ C.
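As an illustration of Lemma 2.1, the following sketch checks properties (i) and (ii) numerically on the simple box C = [−1, 1]^n, where the metric projection is coordinatewise clipping (an assumed example set, chosen only because its projection is explicit).

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection P_C(x) onto the box C = [lo, hi]^n (closed and convex)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
px, py = project_box(x, -1.0, 1.0), project_box(y, -1.0, 1.0)

# Lemma 2.1(i): firm nonexpansiveness  <P(x) - P(y), x - y> >= ||P(x) - P(y)||^2
assert np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12

# Lemma 2.1(ii): for z in C,  ||z - P(y)||^2 + ||P(y) - y||^2 <= ||z - y||^2
z = np.zeros(5)  # a point of C
assert np.sum((z - py) ** 2) + np.sum((py - y) ** 2) <= np.sum((z - y) ** 2) + 1e-12
```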

Next, we present some monotonicity concepts for bifunctions, see [1, 36].

Definition 2.2 A bifunction f : C × C → ℜ is said to be

(i) strongly monotone on C if there exists a constant γ > 0 such that
f(x, y) + f(y, x) ≤ −γ||x − y||², ∀x, y ∈ C;
(ii) monotone on C if f(x, y) + f(y, x) ≤ 0, ∀x, y ∈ C;


(iii) pseudomonotone on C if f(x, y) ≥ 0 =⇒ f(y, x) ≤ 0, ∀x, y ∈ C.

From the above definitions, it is clear that a strongly monotone bifunction is monotone and a monotone bifunction is pseudomonotone. A function ϕ : C → ℜ is said to be convex on C if for all x, y ∈ C and t ∈ [0, 1],

ϕ(tx + (1 − t)y) ≤ tϕ(x) + (1 − t)ϕ(y).

The subdifferential of ϕ at x ∈ C is defined by

∂ϕ(x) = {w ∈ H : ϕ(y) − ϕ(x) ≥ ⟨w, y − x⟩, ∀y ∈ C}.

An enlargement of the subdifferential is the ǫ-subdifferential. The ǫ-subdifferential of ϕ at x ∈ C is defined by

∂ǫϕ(x) = {w ∈ H : ϕ(y) − ϕ(x) + ǫ ≥ ⟨w, y − x⟩, ∀y ∈ C}.

It is clear that the 0-subdifferential coincides with the subdifferential. Let f : C × C → ℜ be a bifunction. Throughout this paper, ∂ǫ f(x, .)(x) is called the ǫ-diagonal subdifferential of f at x ∈ C. Let F : C × C → ℜ be a bifunction; we consider the following conditions, see [9].

Condition Ā
Ā1. F is monotone on C and F(x, x) = 0 for all x ∈ C.
Ā2. For all x, y, z ∈ C,
lim sup_{t→0+} F(tz + (1 − t)x, y) ≤ F(x, y);
Ā3. For all x ∈ C, F(x, .) is convex and lower semicontinuous.

The following results concern a monotone bifunction F : C × C → ℜ.

Lemma 2.3 [9, Lemma 2.12] Let C be a nonempty, closed and convex subset of a Hilbert space H, F be a bifunction from C × C to ℜ satisfying Condition Ā and let r > 0, x ∈ H. Then, there exists z ∈ C such that

F(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C.

Lemma 2.4 [9, Lemma 2.12] Let C be a nonempty, closed and convex subset of a Hilbert space H, F be a bifunction from C × C to ℜ satisfying Condition Ā. For all r > 0 and x ∈ H, define the mapping (also called the resolvent of F)

T_{r}^F x = {z ∈ C : F(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0, ∀y ∈ C}.

Then the following hold:
(C1) T_{r}^F is single-valued;
(C2) T_{r}^F is firmly nonexpansive, i.e., for all x, y ∈ H, ||T_{r}^F x − T_{r}^F y||² ≤ ⟨T_{r}^F x − T_{r}^F y, x − y⟩;
(C3) Fix(T_{r}^F) = EP(F, C), where Fix(T_{r}^F) is the fixed point set of T_{r}^F;
(C4) EP(F, C) is closed and convex.
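For a concrete feel for the resolvent, the following sketch (an assumed quadratic example, not from the paper) computes T_{r}^F in closed form when F(u, v) = g(v) − g(u) with g(u) = (1/2)uᵀBu + dᵀu on Q = ℜ^k, and checks properties (C2) and (C3) of Lemma 2.4. Here the resolvent inequality reduces to the optimality condition Bz + d + (z − u)/r = 0, i.e., z = (rB + I)⁻¹(u − rd).

```python
import numpy as np

def resolvent(u, B, d, r):
    """Resolvent T_r^F of F(u,v) = g(v) - g(u), g(u) = 0.5 u^T B u + d^T u, Q = R^k:
    the closed form (r B + I)^{-1} (u - r d)."""
    return np.linalg.solve(r * B + np.eye(len(u)), u - r * d)

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4))
B = G.T @ G + np.eye(4)            # symmetric positive definite
d = rng.normal(size=4)

# (C3): fixed points of T_r^F are the solutions of EP(F, Q), i.e. minimizers of g
z_star = -np.linalg.solve(B, d)    # the unique minimizer of g
assert np.allclose(resolvent(z_star, B, d, r=0.7), z_star)

# (C2): firm nonexpansiveness  ||T u - T v||^2 <= <T u - T v, u - v>
u, v = rng.normal(size=4), rng.normal(size=4)
Tu, Tv = resolvent(u, B, d, 0.7), resolvent(v, B, d, 0.7)
assert np.dot(Tu - Tv, Tu - Tv) <= np.dot(Tu - Tv, u - v) + 1e-12
```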


Lemma 2.5 [15, Lemma 2.5] Let r, s > 0 and x, y ∈ H. Under the assumptions of Lemma 2.4,

||T_{r}^F(x) − T_{s}^F(y)|| ≤ ||x − y|| + (|s − r|/s) ||T_{s}^F(y) − y||.

We need the following technical lemma to prove the convergence of the proposed algorithms.

Lemma 2.6 [42] Let {νn} and {δn} be two sequences of positive real numbers such that νn+1 ≤ νn + δn for all n ≥ 1 with Σ_{n≥1} δn < +∞. Then, the sequence {νn} is convergent.

3. Projected subgradient-proximal algorithm

In this section, we propose an algorithm for solving SEPs which combines three methods: the projection method, the proximal method and the diagonal subgradient method. Let us denote by Ω the solution set of the SEP, i.e.,

Ω = {x∗ ∈ EP(f, C) : Ax∗ ∈ EP(F, Q)}.

The algorithm is designed as follows.

Algorithm 3.1 (Projected subgradient-proximal algorithm for SEPs)

Initialization: Choose x0 ∈ C and the parameter sequences {ρn}, {βn}, {ǫn}, {rn}, {µn} such that Condition B below holds.
Iterative Steps: Assume that xn ∈ C is known; calculate xn+1 as follows:
Step 1. Select wn ∈ ∂ǫn f(xn, .)(xn), and compute

γn = max{ρn, ||wn||}, αn = βn/γn, yn = PC(xn − αn wn).

Step 2. Compute xn+1 = PC(yn − µn A∗(I − T_{rn}^F)Ayn).

Remark 3.2 In the case when ǫn = 0 for all n ≥ 0, i.e., wn is an exact subgradient of f(xn, .) at xn, if wn = 0 and xn+1 = yn then xn ∈ Ω. Indeed, from 0 = wn ∈ ∂f(xn, .)(xn) and f(xn, xn) = 0, we have

f(xn, y) = f(xn, y) − f(xn, xn) ≥ ⟨wn, y − xn⟩ = 0, ∀y ∈ C.

Thus, xn ∈ EP(f, C). Since wn = 0, yn = PC(xn − αn wn) and xn ∈ C, we obtain yn = xn. It follows from xn+1 = yn and the definition of xn+1 that ||(I − T_{rn}^F)Ayn||² = 0 (see relation (1) below). Thus, from Lemma 2.4(C3), we obtain Ayn ∈ Fix(T_{rn}^F) = EP(F, Q), and so Axn = Ayn ∈ EP(F, Q).

To establish the convergence of Algorithm 3.1, we assume that the bifunction F : Q × Q → ℜ satisfies Condition Ā in Sect. 2 and the bifunction f : C × C → ℜ satisfies the following conditions. Some detailed comments on these assumptions can be found in [9, 38].

Condition A
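A minimal sketch of Algorithm 3.1 on a toy SEP instance. The instance and the parameter choices (ρn = 1, βn = 1/(n+1), ǫn = 0, rn = 1, µn = 1) are assumptions chosen only to satisfy Condition B below; with A = I and f, F strongly monotone, the unique solution is x∗ = 0.

```python
import numpy as np

# Toy SEP: H1 = H2 = R^2, C = [-1, 1]^2, A = I, f(x, y) = <x, y - x>,
# F(u, v) = <u, v - u> on Q = R^2, so EP(f, C) = EP(F, Q) = {0}.
P_C = lambda x: np.clip(x, -1.0, 1.0)
T_r = lambda u, r: u / (1.0 + r)           # resolvent of F(u,v) = <u, v-u> on Q = R^2

x = np.array([1.0, -0.8])
r, mu = 1.0, 1.0                           # mu in (0, 2/||A||^2) = (0, 2)
for n in range(200):
    w = x.copy()                           # exact diagonal subgradient of f(x, .) at x
    gamma = max(1.0, np.linalg.norm(w))    # gamma_n = max{rho_n, ||w_n||}, rho_n = 1
    alpha = (1.0 / (n + 1)) / gamma        # alpha_n = beta_n / gamma_n, beta_n = 1/(n+1)
    y = P_C(x - alpha * w)                 # Step 1
    x = P_C(y - mu * (y - T_r(y, r)))      # Step 2 with A = A* = I
assert np.linalg.norm(x) < 1e-2            # iterates approach the solution x* = 0
```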


A1. f(x, x) = 0, ∀x ∈ C;
A2. f(x, .) is convex and lower semicontinuous on C and f(., y) is weakly upper semicontinuous on C;
A3. The ǫ-diagonal subdifferential of f is bounded on each bounded subset of C;
A4. f is pseudomonotone and satisfies the following condition:
x ∈ EP(f, C), y ∈ C, f(y, x) = 0 =⇒ y ∈ EP(f, C).

Moreover, take the parameter sequences in Algorithm 3.1 satisfying the conditions:

Condition B
B1. ρn ≥ ρ > 0, βn > 0, ǫn ≥ 0, rn ≥ r > 0.
B2. Σ_{n≥1} βn/ρn = +∞, Σ_{n≥1} βn² < +∞, Σ_{n≥1} βnǫn/ρn < +∞.
B3. 0 < a ≤ µn ≤ b < 2/||A||².

Since γn ≥ ρn ≥ ρ > 0, Algorithm 3.1 is well defined. If f is pseudomonotone and satisfies conditions A1 and A2 then EP(f, C) is convex and closed. From Lemma 2.4(C4), EP(F, Q) is also closed and convex, and so the solution set Ω of the SEP is convex and closed due to the linearity of the operator A. In this paper, the solution set Ω is assumed to be nonempty. We have the following lemma.

Lemma 3.3

Let x∗ ∈ Ω. Then the following holds for each n ≥ 0:

||xn+1 − x∗||² ≤ ||xn − x∗||² + 2αn f(xn, x∗) − Kn + δn,

where Kn = µn(2 − µn||A||²)||(I − T_{rn}^F)Ayn||² + ||xn − yn||² and δn = 2βnǫn/ρn + 2βn².

Proof. It follows from the firm nonexpansiveness of T_{rn}^F (Lemma 2.4(C2)) that

2||T_{rn}^F(Ayn) − Ax∗||² = 2||T_{rn}^F(Ayn) − T_{rn}^F(Ax∗)||²
≤ 2⟨T_{rn}^F(Ayn) − T_{rn}^F(Ax∗), Ayn − Ax∗⟩
= 2⟨T_{rn}^F(Ayn) − Ax∗, Ayn − Ax∗⟩
= ||T_{rn}^F(Ayn) − Ax∗||² + ||Ayn − Ax∗||² − ||T_{rn}^F(Ayn) − Ayn||².

Thus, ||T_{rn}^F(Ayn) − Ax∗||² − ||Ayn − Ax∗||² ≤ −||T_{rn}^F(Ayn) − Ayn||², which, combined with the equality

2⟨A(yn − x∗), T_{rn}^F(Ayn) − Ayn⟩ = ||T_{rn}^F(Ayn) − Ax∗||² − ||Ayn − Ax∗||² − ||T_{rn}^F(Ayn) − Ayn||²,

gives

⟨A(yn − x∗), T_{rn}^F(Ayn) − Ayn⟩ ≤ −||T_{rn}^F(Ayn) − Ayn||².


This, together with the definition of xn+1 and the nonexpansiveness of the metric projection, implies that

||xn+1 − x∗||² = ||PC(yn − µn A∗(I − T_{rn}^F)Ayn) − PC(x∗)||²
≤ ||yn − µn A∗(I − T_{rn}^F)Ayn − x∗||²
= ||yn − x∗||² + µn²||A∗(I − T_{rn}^F)Ayn||² − 2µn⟨yn − x∗, A∗(I − T_{rn}^F)Ayn⟩
≤ ||yn − x∗||² + µn²||A||²||(I − T_{rn}^F)Ayn||² − 2µn⟨A(yn − x∗), (I − T_{rn}^F)Ayn⟩
= ||yn − x∗||² + µn²||A||²||(I − T_{rn}^F)Ayn||² + 2µn⟨A(yn − x∗), T_{rn}^F(Ayn) − Ayn⟩
≤ ||yn − x∗||² + µn²||A||²||(I − T_{rn}^F)Ayn||² − 2µn||T_{rn}^F(Ayn) − Ayn||²
= ||yn − x∗||² − µn(2 − µn||A||²)||(I − T_{rn}^F)Ayn||².   (1)

Now, we estimate the first term on the right-hand side of inequality (1). From yn = PC(xn − αn wn), Lemma 2.1(iii) and x∗, xn ∈ C, we have

⟨x∗ − yn, xn − αn wn − yn⟩ ≤ 0,   (2)
⟨xn − yn, xn − αn wn − yn⟩ ≤ 0.   (3)

From relation (2), we obtain

⟨x∗ − yn, xn − yn⟩ ≤ αn⟨wn, x∗ − yn⟩ = αn⟨wn, x∗ − xn⟩ + αn⟨wn, xn − yn⟩
≤ αn⟨wn, x∗ − xn⟩ + αn||wn|| ||xn − yn||.   (4)

It follows from relation (3) that ||xn − yn||² ≤ αn⟨xn − yn, wn⟩ ≤ αn||wn|| ||xn − yn||. Thus ||xn − yn|| ≤ αn||wn||, which, from the definitions of αn and γn, implies that

αn||wn|| ||xn − yn|| ≤ (αn||wn||)² = (βn||wn||/γn)² = βn² (||wn||/max{ρn, ||wn||})² ≤ βn².   (5)

From assumption A1, wn ∈ ∂ǫn f(xn, .)(xn) and the definition of the ǫn-diagonal subdifferential of f, we see that

f(xn, x∗) + ǫn = f(xn, x∗) − f(xn, xn) + ǫn ≥ ⟨wn, x∗ − xn⟩.   (6)

Moreover, from the definition of αn and γn we obtain

αn = βn/γn = βn/max{ρn, ||wn||} ≤ βn/ρn.   (7)

Inequalities (4)-(7) imply that

⟨x∗ − yn, xn − yn⟩ ≤ αn f(xn, x∗) + βnǫn/ρn + βn²,


which, combined with the fact that

2⟨x∗ − yn, xn − yn⟩ = ||x∗ − yn||² + ||xn − yn||² − ||xn − x∗||²,

gives

||yn − x∗||² ≤ ||xn − x∗||² + 2αn f(xn, x∗) − ||xn − yn||² + 2βnǫn/ρn + 2βn².   (8)

This together with relation (1) and the definitions of Kn, δn leads to

||xn+1 − x∗||² ≤ ||xn − x∗||² + 2αn f(xn, x∗) − ||xn − yn||² + 2βnǫn/ρn + 2βn² − µn(2 − µn||A||²)||(I − T_{rn}^F)Ayn||²
= ||xn − x∗||² + 2αn f(xn, x∗) − Kn + δn. ∎

Lemma 3.4 Let {xn} be the sequence generated by Algorithm 3.1. Then the following hold:

(i) The sequence {||xn − x∗||²} is convergent for each x∗ ∈ Ω, and thus {xn} is bounded.
(ii) lim sup_{n→∞} f(xn, x∗) = 0 for each x∗ ∈ Ω, and

lim_{n→∞} ||xn − yn||² = lim_{n→∞} ||(I − T_{rn}^F)Ayn||² = 0.

Proof. (i) Since x∗ ∈ EP(f, C) and xn ∈ C, we have f(x∗, xn) ≥ 0. Thus f(xn, x∗) ≤ 0 by the pseudomonotonicity of f. This together with Lemma 3.3, αn > 0 and Kn ≥ 0 implies that

||xn+1 − x∗||² ≤ ||xn − x∗||² + δn,   (9)

which, combined with Lemma 2.6 and the fact that Σ_{n≥1} δn < +∞, gives the desired conclusion.

(ii) It follows from Lemma 3.3 that

Kn − 2αn f(xn, x∗) ≤ ||xn − x∗||² − ||xn+1 − x∗||² + δn.

Let N ≥ 1 be a fixed integer. Applying the last inequality for n = 1, 2, . . . , N and summing up these inequalities, we obtain

0 ≤ Σ_{n=1}^{N} Kn + 2 Σ_{n=1}^{N} αn[−f(xn, x∗)] ≤ ||x1 − x∗||² − ||xN+1 − x∗||² + Σ_{n=1}^{N} δn.

This is true for all N ≥ 1. Letting N → ∞ and using (i) and Σ_{n≥1} δn < +∞, we obtain

Σ_{n=1}^{∞} Kn < +∞,   (S1)

Σ_{n=1}^{∞} αn[−f(xn, x∗)] < +∞.   (S2)


From (S1) and the definition of Kn (see Lemma 3.3), we obtain lim_{n→∞} ||xn − yn||² = 0 and

lim_{n→∞} µn(2 − µn||A||²)||(I − T_{rn}^F)Ayn||² = 0.

The last equality and the fact that µn(2 − µn||A||²) ≥ a(2 − b||A||²) > 0 imply that

lim_{n→∞} ||(I − T_{rn}^F)Ayn||² = 0.

Since {xn} is bounded, it follows from assumption A3 that {wn} is also bounded. Thus, there exists L ≥ ρ > 0 such that ||wn|| ≤ L. Hence, from the definition of αn and B1, we obtain

αn = βn/γn = βn/max{ρn, ||wn||} = (βn/ρn) · 1/max{1, ||wn||/ρn} ≥ (βn/ρn)(ρ/L).

This together with (S2) and ρ/L > 0 implies that

Σ_{n=1}^{∞} (βn/ρn)[−f(xn, x∗)] < +∞.

Thus, from B2, we deduce lim inf_{n→∞} [−f(xn, x∗)] = 0, i.e., lim sup_{n→∞} f(xn, x∗) = 0. ∎



Now, we prove the convergence of Algorithm 3.1.

Theorem 3.5 Assume that Problem (SEP) admits a solution and Conditions Ā, A, B hold. Then, the whole sequence {xn} generated by Algorithm 3.1 converges strongly to a solution x† of (SEP). Moreover, x† = lim_{n→∞} PΩ(xn).

Proof. From Lemma 3.4(i), we see that {xn} is bounded. Without loss of generality, we can assume that there exists a subsequence {xm} of {xn} converging weakly to x† ∈ C such that

lim sup_{n→∞} f(xn, x∗) = lim_{m→∞} f(xm, x∗).   (10)

It follows from the weak upper semicontinuity of f(., x∗) and Lemma 3.4(ii) that

f(x†, x∗) ≥ lim sup_{m→∞} f(xm, x∗) = lim_{m→∞} f(xm, x∗) = lim sup_{n→∞} f(xn, x∗) = 0.

Since x∗ ∈ EP(f, C) and x† ∈ C, we have f(x∗, x†) ≥ 0. Thus f(x†, x∗) ≤ 0 by the pseudomonotonicity of f. This together with the last inequality implies that f(x†, x∗) = 0 which, from A4, gives x† ∈ EP(f, C). Now, we show that Ax† ∈ EP(F, Q). Indeed, from Lemma 3.4(ii), we also have ym ⇀ x†, and then Aym ⇀ Ax†. It follows from Lemma 2.4 that EP(F, Q) = Fix(T_{s}^F) for some s > 0. Assume that Ax† ∉ Fix(T_{s}^F), i.e., Ax† ≠ T_{s}^F(Ax†). By Opial's condition in H2, the triangle inequality, Lemma 3.4(ii)


and Lemma 2.5, we have

lim inf_{m→∞} ||Aym − Ax†|| < lim inf_{m→∞} ||Aym − T_{s}^F(Ax†)||
≤ lim inf_{m→∞} [||Aym − T_{rm}^F(Aym)|| + ||T_{rm}^F(Aym) − T_{s}^F(Ax†)||]
= lim inf_{m→∞} ||T_{rm}^F(Aym) − T_{s}^F(Ax†)||
= lim inf_{m→∞} ||T_{s}^F(Ax†) − T_{rm}^F(Aym)||
≤ lim inf_{m→∞} [||Ax† − Aym|| + (|s − rm|/rm)||T_{rm}^F(Aym) − Aym||]
= lim inf_{m→∞} ||Ax† − Aym||.

This is a contradiction. Thus, Ax† ∈ Fix(T_{s}^F) = EP(F, Q), and so x† ∈ Ω. This together with Lemma 3.4(i) implies that the sequence {||xn − x†||²} is convergent. Note that x† is a weak cluster point of {xn}. Thus, we can deduce that the whole sequence {xn} converges strongly to x†. To finish the proof of Theorem 3.5, we show that x† = lim_{n→∞} PΩ(xn). Indeed, from relation (9), we see that

||xn+1 − x∗||² ≤ ||xn − x∗||² + δn, ∀x∗ ∈ Ω.   (11)

With x∗ = PΩ(xn) ∈ Ω this gives

||xn+1 − PΩ(xn)||² ≤ ||xn − PΩ(xn)||² + δn.   (12)

Let an = ||xn − PΩ(xn)||². It follows from the definition of the metric projection that ||xn+1 − PΩ(xn+1)||² ≤ ||xn+1 − z||², ∀z ∈ Ω. With z = PΩ(xn) ∈ Ω this implies that

||xn+1 − PΩ(xn+1)||² ≤ ||xn+1 − PΩ(xn)||².   (13)

Combining relations (12) and (13), we obtain ||xn+1 − PΩ(xn+1)||² ≤ ||xn − PΩ(xn)||² + δn, i.e., an+1 ≤ an + δn. Thus, it follows from Σ_{n=1}^{∞} δn < ∞ and Lemma 2.6 that the limit of the sequence {an} exists. Let un = PΩ(xn). From Lemma 2.1(ii) and


relation (11), for all m > n, we have

||un − um||² = ||PΩ(xn) − PΩ(xm)||² ≤ ||xm − PΩ(xn)||² − ||xm − PΩ(xm)||²
≤ (||xm−1 − PΩ(xn)||² + δm−1) − ||xm − PΩ(xm)||²
≤ . . .
≤ (||xn − PΩ(xn)||² + Σ_{l=n}^{m−1} δl) − ||xm − PΩ(xm)||²
= an − am + Σ_{l=n}^{m−1} δl.

Passing to the limit in the last inequality as m, n → ∞ and noting that Σ_{n=1}^{∞} δn < ∞, we obtain lim_{m,n→∞} ||un − um||² = 0. Thus, {un} is a Cauchy sequence, so the limit lim_{n→∞} un = u ∈ Ω exists. It follows from un = PΩ(xn), Lemma 2.1(iii) and x† ∈ C that ⟨x† − un, xn − un⟩ ≤ 0. Passing to the limit in this inequality as n → ∞, we obtain ||x† − u||² = ⟨x† − u, x† − u⟩ ≤ 0. Thus x† = u, i.e., x† = lim_{n→∞} PΩ(xn). This completes the proof. ∎

4. Modified projected subgradient-proximal algorithm

We can see that Algorithm 3.1 requires computing two projections onto the feasible set C and knowing the operator norm ||A||. As mentioned in Sect. 1, finding a projection onto C can be costly if C has a complex structure. Besides, calculating, or at least estimating, the operator norm ||A|| is in general not an easy task in infinite-dimensional Hilbert spaces. These issues, if they occur, can affect the efficiency of the method. In this section, we first replace the first projection onto C in Algorithm 3.1 by a projection onto a specifically constructed half-space. Secondly, motivated by the results in [31, 35, 41], we present a way of selecting the step-size in the second projection such that the computations do not need any prior information about the operator norm. Instead, we additionally consider a sequence {νn} satisfying the following condition.

B4. 0 < ν ≤ νn ≤ 4 − ν for some ν ∈ (0, 4).

For the sake of simplicity, we fix the sequence {rn}, i.e., rn = r > 0. Let h(x) = (1/2)||(I − T_{r}^F)Ax||² for all x ∈ H1, and so ∇h(x) = A∗(I − T_{r}^F)Ax. The detailed algorithm is described as follows:

Algorithm 4.1 (Modified projected subgradient-proximal algorithm for SEPs)

Initialization: Choose x0 ∈ C and the parameter sequences {ρn}, {βn}, {ǫn}, {νn} such that Conditions B1, B2 and B4 above hold. Select w0 ∈ ∂ǫ0 f(x0, .)(x0) and compute

γ0 = max{ρ0, ||w0||}, α0 = β0/γ0, y0 = PC(x0 − α0 w0),

µ0 = 0 if ∇h(y0) = 0, and µ0 = ν0 h(y0)/||∇h(y0)||² otherwise,

x1 = PC(y0 − µ0 A∗(I − T_{r}^F)Ay0).

Iterative Steps: Assume that xn ∈ C, yn−1 and µn−1 are known; calculate xn+1 as follows:
Step 1. Select wn ∈ ∂ǫn f(xn, .)(xn), and compute

γn = max{ρn, ||wn||}, αn = βn/γn,

Tn = {z ∈ H1 : ⟨yn−1 − µn−1 A∗(I − T_{r}^F)Ayn−1 − xn, z − xn⟩ ≤ 0},

yn = P_{Tn}(xn − αn wn).

Step 2. Compute xn+1 = PC(yn − µn A∗(I − T_{r}^F)Ayn), where

µn = 0 if ∇h(yn) = 0, and µn = νn h(yn)/||∇h(yn)||² otherwise.
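The adaptive step-size rule of Step 2 can be sketched as follows on the same assumed toy instance as before, now with A ≠ I so that ||A|| would otherwise have to be estimated. For brevity the sketch keeps the first projection on C instead of the half-space Tn (a simplification valid here because C is easy to project onto); it illustrates only how µn = νn h(yn)/||∇h(yn)||² avoids any knowledge of ||A||.

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 1.0]])     # ||A|| deliberately not used below
P_C = lambda x: np.clip(x, -1.0, 1.0)
r = 1.0
T_r = lambda u: u / (1.0 + r)              # resolvent of F(u,v) = <u, v-u> on Q = R^2

def h(x):                                  # h(x) = 0.5 ||(I - T_r^F) A x||^2
    return 0.5 * np.sum((A @ x - T_r(A @ x)) ** 2)

def grad_h(x):                             # grad h(x) = A^T (I - T_r^F) A x
    return A.T @ (A @ x - T_r(A @ x))

x = np.array([1.0, -0.8])
nu = 2.0                                   # nu_n in [nu, 4 - nu], here constant
for n in range(200):
    w = x.copy()                           # diagonal subgradient of f(x, y) = <x, y - x>
    alpha = (1.0 / (n + 1)) / max(1.0, np.linalg.norm(w))
    y = P_C(x - alpha * w)                 # simplified Step 1 (T_n replaced by C)
    g = grad_h(y)
    mu = 0.0 if np.allclose(g, 0) else nu * h(y) / np.dot(g, g)
    x = P_C(y - mu * g)                    # Step 2 with adaptive mu_n
assert np.linalg.norm(x) < 1e-2            # iterates approach the solution x* = 0
```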

Remark 4.2 Since Tn is either a half-space or the whole space, the projection yn in Step 1 is computed explicitly. Moreover, we have C ⊂ Tn for all n ≥ 0. Indeed, from xn = PC(yn−1 − µn−1 A∗(I − T_{r}^F)Ayn−1) and Lemma 2.1(iii), we obtain

⟨yn−1 − µn−1 A∗(I − T_{r}^F)Ayn−1 − xn, z − xn⟩ ≤ 0, ∀z ∈ C,

which, together with the definition of Tn, implies that C ⊂ Tn for all n ≥ 0.

Let δn = 2βnǫn/ρn + 2βn² and

K̄n = 0 if ∇h(yn) = 0, and K̄n = νn(4 − νn) h(yn)²/||∇h(yn)||² otherwise.

We have the following lemma.

Lemma 4.3 Let x∗ ∈ Ω. Then the following holds for each n ≥ 0:

||xn+1 − x∗||² ≤ ||xn − x∗||² − ||xn − yn||² + 2αn f(xn, x∗) − K̄n + δn.

Proof. Since T_{r}^F is firmly nonexpansive, I − T_{r}^F is also firmly nonexpansive. Thus

⟨A∗(I − T_{r}^F)Ayn, yn − x∗⟩ = ⟨(I − T_{r}^F)Ayn, Ayn − Ax∗⟩
= ⟨(I − T_{r}^F)Ayn − (I − T_{r}^F)Ax∗, Ayn − Ax∗⟩
≥ ||(I − T_{r}^F)Ayn − (I − T_{r}^F)Ax∗||²
= ||(I − T_{r}^F)Ayn||² = 2h(yn),

in which the second equality follows from the fact that Ax∗ = T_{r}^F(Ax∗) (see Lemma 2.4(C3)) and the last equality is true by the definition of h(yn). This together with the definition of xn+1 and the nonexpansiveness of the projection implies that

||xn+1 − x∗||² = ||PC(yn − µn A∗(I − T_{r}^F)Ayn) − PC(x∗)||²
≤ ||yn − µn A∗(I − T_{r}^F)Ayn − x∗||²
= ||yn − x∗||² + µn²||A∗(I − T_{r}^F)Ayn||² − 2µn⟨A∗(I − T_{r}^F)Ayn, yn − x∗⟩
≤ ||yn − x∗||² + µn²||∇h(yn)||² − 4µn h(yn)
= ||yn − x∗||² − (4µn h(yn) − µn²||∇h(yn)||²).   (14)

Now, we estimate the first term on the right-hand side of inequality (14). From Remark 4.2, we see that x∗, xn ∈ C ⊂ Tn. Thus, by yn = P_{Tn}(xn − αn wn) and Lemma 2.1(iii), we have

⟨x∗ − yn, xn − αn wn − yn⟩ ≤ 0,   (15)
⟨xn − yn, xn − αn wn − yn⟩ ≤ 0.   (16)

Now, using inequalities (15) and (16) and arguing similarly to the proof of relations (4)-(8), we obtain

||yn − x∗||² ≤ ||xn − x∗||² + 2αn f(xn, x∗) − ||xn − yn||² + 2βnǫn/ρn + 2βn².

This together with relation (14) leads to

||xn+1 − x∗||² ≤ ||xn − x∗||² + 2αn f(xn, x∗) − ||xn − yn||² + 2βnǫn/ρn + 2βn² − (4µn h(yn) − µn²||∇h(yn)||²).   (17)

Note that from the definition of µn, we have

4µn h(yn) − µn²||∇h(yn)||² = 0 if ∇h(yn) = 0, and νn(4 − νn) h(yn)²/||∇h(yn)||² otherwise.

Combining this with relation (17) and the definitions of K̄n and δn, we obtain the desired conclusion. ∎

Lemma 4.4 Let {xn} be the sequence generated by Algorithm 4.1. Then the following hold:

(i) The sequence {||xn − x∗||²} is convergent for each x∗ ∈ Ω, and thus {xn} is bounded.
(ii) lim sup_{n→∞} f(xn, x∗) = 0 for each x∗ ∈ Ω, and

lim_{n→∞} ||xn − yn||² = lim_{n→∞} h(yn) = 0.

Proof. The first statement is proved similarly to Lemma 3.4(i). Now, we prove statement (ii). It follows from Lemma 4.3 that

K̄n + ||xn − yn||² − 2αn f(xn, x∗) ≤ ||xn − x∗||² − ||xn+1 − x∗||² + δn.

As in Lemma 3.4(ii), we also obtain lim sup_{n→∞} f(xn, x∗) = lim_{n→∞} ||xn − yn||² = 0 and

Σ_{n≥0} K̄n < ∞.

Without loss of generality, we can assume that ∇h(yn) ≠ 0 for all n. Thus, from the last relation and the definition of K̄n, we obtain

Σ_{n≥0} νn(4 − νn) h(yn)²/||∇h(yn)||² < ∞.

Thus, it follows from 0 < ν ≤ νn ≤ 4 − ν that

Σ_{n≥0} h(yn)²/||∇h(yn)||² < ∞.

Since ||xn − yn|| → 0 and {xn} is bounded, {yn} is also bounded. Thus, it follows from the Lipschitz continuity of ∇h that {||∇h(yn)||²} is bounded. This together with the last relation implies that h(yn) → 0 as n → ∞. ∎

Finally, we have the following result.

Theorem 4.5 Assume that Problem (SEP) admits a solution and Conditions Ā, A, B1, B2 and B4 hold. Then, the whole sequence {xn} generated by Algorithm 4.1 converges strongly to a solution x† of (SEP). Moreover, x† = lim_{n→∞} PΩ(xn).

Proof. From the definition of h(yn) and Lemma 4.4, arguing similarly to the proof of Theorem 3.5 we obtain the desired conclusion. ∎

Remark 4.6 The convergence of Algorithms 3.1 and 4.1 is guaranteed under the following condition in A4 of Condition A:

x ∈ EP(f, C), y ∈ C, f(y, x) = 0 =⇒ y ∈ EP(f, C).   (18)


We can see that, without assumption (18), the proposed algorithms may fail to converge. The following simple example illustrates this fact. Consider H1 = H2 = ℜ², C = Q = ℜ², F = 0 (thus, TrFn = I for all rn > 0) and f(x, y) = x1 y2 − x2 y1 for all x, y ∈ ℜ². In this case, the SEP is an EP for f on C, and its unique solution is x* = (0, 0)ᵀ. It is easy to see that A1 - A3 in Condition A are satisfied. Moreover, f is monotone (hence pseudomonotone) because f(x, y) + f(y, x) = 0 for all x, y ∈ ℜ². However, f(y, x*) = 0 for all y ∈ C = ℜ², which implies that assumption (18) is not satisfied for f. Now, we check Algorithm 3.1, noting that xn+1 = yn. Consider ǫn = 0, so that, for all xn = (xn1, xn2)ᵀ ∈ ℜ²,

wn ∈ ∂ǫn f(xn, .)(xn) = ∂f(xn, .)(xn) = {(−xn2, xn1)ᵀ}.

Thus, from Step 1 of Algorithm 3.1, we see that

xn+1 = yn = xn − αn wn = (xn1 + αn xn2, xn2 − αn xn1)ᵀ.

This implies that

||xn+1||² = (xn1 + αn xn2)² + (xn2 − αn xn1)² = (1 + α²n)((xn1)² + (xn2)²) = (1 + α²n)||xn||².

Therefore, by induction, we get

||xn+1||² = ( ∏_{k=0}^{n} (1 + α²k) ) ||x0||² ≥ ||x0||²,

which implies limn→∞ ||xn+1||² ≠ 0 provided that x0 ≠ 0. Thus, {xn} does not converge to the solution x* of the problem. Similarly, we can check Algorithm 4.1 directly, noting that Tn = ℜ² for all n. Since strong and weak convergence coincide in finite-dimensional Hilbert spaces, the example above also shows that, without assumption (18), the proposed algorithms need not converge weakly in infinite-dimensional Hilbert spaces.
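The blow-up in this counterexample is easy to reproduce numerically. The sketch below iterates xn+1 = xn − αn wn with the illustrative step sizes αn = 1/(n + 1) (an assumption for this demo; any positive step sizes yield the same monotone growth of ||xn||):

```python
import math

# Counterexample of Remark 4.6: f(x, y) = x1*y2 - x2*y1 on C = R^2,
# w_n = (-x_{n,2}, x_{n,1}), so x_{n+1} = x_n - a_n * w_n rotates and
# stretches x_n: ||x_{n+1}||^2 = (1 + a_n^2) ||x_n||^2.
x = [1.0, 1.0]                        # starting point x0 != 0
norms = [math.hypot(x[0], x[1])]
for n in range(200):
    a = 1.0 / (n + 1)                 # illustrative step sizes
    x = [x[0] + a * x[1], x[1] - a * x[0]]
    norms.append(math.hypot(x[0], x[1]))

# the norm never decreases, so x_n cannot converge to x* = (0, 0)
assert all(b >= a for a, b in zip(norms, norms[1:]))
assert norms[-1] > norms[0]
```
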

5.

Computational experiments

In this section, we consider some numerical experiments to illustrate the convergence of the proposed algorithms and compare their efficiency with EGPM [16, Algorithm 1] and HEGPM [16, Algorithm 2]. Both algorithms in [16] use the proximal method and the extragradient method, which require solving two optimization programs at each iteration. Note that EGPM was proved to be weakly convergent in infinite-dimensional Hilbert spaces, while HEGPM is strongly convergent thanks to the hybrid shrinking projection method. The bifunction f in H1 = ℜm is defined by

f(x, y) = ⟨Mx + Ny + p, y − x⟩,

where p is a vector in ℜm and M, N are two matrices of order m such that N is symmetric positive semidefinite and N − M is negative semidefinite. As in [16, Sect. 5], the bifunction f satisfies the Lipschitz-type condition with two constants c1 = c2 = ||N − M||/2.
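As a quick sanity check (a sketch with illustrative matrices, not the paper's exact test data), the sign condition on N − M makes f monotone, since f(x, y) + f(y, x) = −(x − y)ᵀ(M − N)(x − y) ≤ 0:

```python
import numpy as np

# Sketch: f(x, y) = <Mx + Ny + p, y - x> is monotone when N - M is
# negative semidefinite, because f(x, y) + f(y, x) = -(x-y)^T (M-N) (x-y).
# The matrices below are illustrative, not the paper's test data.
def f(x, y, M, N, p):
    return float((M @ x + N @ y + p) @ (y - x))

rng = np.random.default_rng(2)
m = 8
S = rng.standard_normal((m, m))
N = S @ S.T                        # symmetric positive semidefinite
R = rng.standard_normal((m, m))
M = N + R @ R.T                    # so N - M = -R R^T is neg. semidefinite
p = rng.standard_normal(m)

for _ in range(100):
    x, y = rng.standard_normal(m), rng.standard_normal(m)
    assert f(x, y, M, N, p) + f(y, x, M, N, p) <= 1e-9
```
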


The bifunction F in H2 = ℜk is defined by F(u, v) = g(v) − g(u), where

g(u) = (1/2) uᵀBu + dᵀu,

with d ∈ ℜk and B being a symmetric positive definite matrix of order k. The operator A : ℜm → ℜk is defined by a matrix of size k × m. Note that in this case, the resolvent TrF of the bifunction F coincides with the proximal mapping of the function g with the constant r > 0, i.e., TrF(u) = proxrg(u), where

proxrg(u) = arg min { g(v) + (1/r)||v − u||² : v ∈ Q }.

For the numerical experiments, we chose Q = H2 and C ⊂ ℜm such that 0 ∈ C; the vectors p and d equal zero; the matrices A, B are generated randomly (and uniformly) with entries in [−5, 5], and M, N are two full matrices, also generated randomly to satisfy their conditions¹. The solution of the SEP in this case is x* = 0. It is easy to see that Conditions Ā and A are satisfied, and so, as in [16], all the mentioned algorithms can be applied. We use the sequence Dn = ||xn − x*||², n = 0, 1, 2, . . . , to check the convergence of the algorithms, where the starting point is x0 = (1, 1, . . . , 1)ᵀ ∈ ℜm and the sequence {xn} is generated by the algorithms. The convergence of {Dn} to 0 implies that {xn} converges to the solution of the problem. All the projections are rewritten equivalently as optimization problems, which are solved effectively by the function quadprog in the Matlab 7.0 Optimization Toolbox. The programs are implemented on a PC Desktop Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz, RAM 2.00 GB.
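With Q = H2 (the whole space), the proximal mapping above reduces to a single linear solve; the sketch below keeps the paper's scaling (1/r)||v − u||², so the optimality condition reads (B + (2/r)I)v = (2/r)u − d (function names are our own):

```python
import numpy as np

# prox_{rg}(u) = argmin_v { 0.5 v^T B v + d^T v + (1/r)||v - u||^2 }
# over the whole space; setting the gradient to zero gives
# (B + (2/r) I) v = (2/r) u - d.
def prox_g(u, B, d, r):
    k = len(u)
    return np.linalg.solve(B + (2.0 / r) * np.eye(k), (2.0 / r) * u - d)

rng = np.random.default_rng(0)
k = 5
C0 = rng.uniform(-5, 5, (k, k))
B = C0 @ C0.T + k * np.eye(k)      # symmetric positive definite
d = np.zeros(k)
u = np.ones(k)
v = prox_g(u, B, d, r=1.0)
# optimality: the gradient of the objective vanishes at v
assert np.allclose(B @ v + d + 2.0 * (v - u), 0.0)
```
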

5.1 The feasible set C as a polyhedral convex set

Consider the feasible set C as the polyhedral convex set defined by

C = { x = (x1, . . . , xm)ᵀ ∈ ℜm : xi ≥ −1, i = 1, . . . , m }.
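For this particular C the metric projection has a closed form, componentwise clipping at −1 (the experiments solve the projections via quadprog, but the closed form is a useful sanity check):

```python
import numpy as np

# Projection onto C = {x : x_i >= -1, i = 1..m} is componentwise:
# (P_C x)_i = max(x_i, -1).  A quadratic-program solver gives the same result.
def project_C(x):
    return np.maximum(x, -1.0)

x = np.array([0.5, -3.0, -1.0, 2.0])
assert np.array_equal(project_C(x), np.array([0.5, -1.0, -1.0, 2.0]))
```
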

The parameters have been chosen in the experiments as follows:

(i) λ = 1/(5c1), µn = µ = 1/||A||² for EGPM, HEGPM and Algorithm 3.1.
(ii) ǫn = 0, ρn = 1, βn = 1/(n + 1) for Algorithms 3.1 and 4.1.
(iii) νn = 3 for Algorithm 4.1 and rn = 1 for all the algorithms.

We have performed three experiments for the dimensions m and k of the spaces. Figs. 1 and 2 are for the pair (m, k) = (10, 5); Figs. 3 and 4 are for the pair (m, k) = (20, 20), while Figs. 5 and 6 are for the pair (m, k) = (50, 30). In these figures, the y-axes represent the value of Dn while the x-axes represent the number of iterations (# iterations) or the elapsed execution time (in seconds - CPU time).

¹ Choose randomly λ1k ∈ [−m, 0], λ2k ∈ [1, m] for all k = 1, . . . , m. Set Q̂1, Q̂2 as two diagonal matrices with eigenvalues {λ1k} and {λ2k}, respectively. Then, we make a positive semidefinite matrix N and a negative semidefinite matrix T by using full random orthogonal matrices with Q̂2 and Q̂1, respectively. Finally, we set M = N − T.
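The construction in footnote 1 can be sketched as follows (the use of QR factorizations to draw random orthogonal matrices is our choice; the footnote does not specify how they are generated):

```python
import numpy as np

# Sketch of footnote 1: build N (pos. semidefinite) and T (neg. semidefinite)
# by conjugating diagonal eigenvalue matrices with random orthogonal
# matrices, then set M = N - T (so N - M = T is negative semidefinite).
def make_M_N(m, rng):
    lam1 = rng.uniform(-m, 0.0, m)      # eigenvalues of Q1_hat
    lam2 = rng.uniform(1.0, m, m)       # eigenvalues of Q2_hat
    U, _ = np.linalg.qr(rng.standard_normal((m, m)))
    V, _ = np.linalg.qr(rng.standard_normal((m, m)))
    N = U @ np.diag(lam2) @ U.T
    T = V @ np.diag(lam1) @ V.T
    return N - T, N

rng = np.random.default_rng(1)
M, N = make_M_N(10, rng)
assert np.all(np.linalg.eigvalsh((N + N.T) / 2) >= -1e-8)      # N is PSD
D = N - M
assert np.all(np.linalg.eigvalsh((D + D.T) / 2) <= 1e-8)       # N - M is NSD
```
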


[Figure 1. Dn and # iterations with m = 10, k = 5]
[Figure 2. Dn and elapsed time with m = 10, k = 5]
[Figure 3. Dn and # iterations with m = k = 20]
[Figure 4. Dn and elapsed time with m = k = 20]
[Figure 5. Dn and # iterations with m = 50, k = 30]
[Figure 6. Dn and elapsed time with m = 50, k = 30]

From these figures, we see that the sequence {Dn} generated by both proposed algorithms converges to 0 very quickly during the early iterations and is stable thereafter. HEGPM is the worst in all cases, especially when the sizes of the problem are large. Figs. 1 and 3 show that the convergence rate of Algorithm 3.1 is better than that of Algorithm 4.1. This may be because the intermediate approximation yn in Algorithm 3.1 lies in C, and so is closer to any solution than the corresponding approximation in Algorithm 4.1, which in general lies only in Tn ⊂ H1; of course, this can affect the rate of convergence of the sequence {xn} generated by the algorithms. Comparing Fig. 2 with Fig. 1, and Fig. 4 with Fig. 3, we see that the execution time per step of Algorithm 4.1 is smaller than that of Algorithm 3.1. However, this difference is negligible because C is chosen as a polyhedral convex set.

5.2 The feasible set C as a ball

We consider the feasible set C as the ball defined by C = { x ∈ ℜm : ||x||² ≤ 10 }, and the dimensions of the spaces are m = 10, k = 5. In this case, solving the optimization subproblems and computing the shrinking projections in EGPM and HEGPM seem to be complex, while all the projections in Algorithms 3.1 and 4.1 are found explicitly. We then study the numerical behavior of the proposed algorithms for different choices of the parameter sequence {βn}. Other parameters are chosen as in the previous experiments. Precisely, the sequence {βn} is chosen as:

(I) βn = 1/(n + 1).
(II) βn = 1/(n + 1)^0.75.
(III) βn = 1/(n + 1)^0.51.
(IV) βn = log(n + 3)/(n + 1).
(V) βn = 1/((n + 1) log(n + 3)).

Note that the sequence {βn} in (V) still satisfies condition B2 and converges to 0 fastest, while the sequence {βn} in (III) converges to 0 slowest among the given sequences. Figs. 7 and 8 describe the behavior of the sequences {Dn} generated by Algorithm 3.1 against the number of iterations and the elapsed time, respectively, while Figs. 9 and 10 do the same for Algorithm 4.1.

[Figure 7. Algorithm 3.1 for Dn and # iterations]
[Figure 8. Algorithm 3.1 for Dn and elapsed time]
[Figure 9. Algorithm 4.1 for Dn and # iterations]
[Figure 10. Algorithm 4.1 for Dn and elapsed time]

From these figures, we see that the proposed algorithms work well for the sequence βn = (n + 1)^−0.51 and poorly for βn = ((n + 1) log(n + 3))^−1. In comparison with the previous experiments, we also see that when the projection onto the feasible set C is explicit, the error of the solution approximations generated by the proposed algorithms is much smaller than when the projection onto C does not have a closed-form expression. The numerical experiments here are preliminary and, of course, depend on the structures of the bifunctions and feasible sets. Nevertheless, these numerical results indicate the efficiency of the proposed algorithms over existing methods.

6.

Conclusions

The paper has considered a class of split inverse problems involving equilibrium problems in Hilbert spaces, which contains most previously known split-type problems. Two hybrid algorithms have been proposed for finding their solution approximations. Three fundamental methods, namely the projection method, the diagonal subgradient method and the proximal method, have been used to design the algorithms, which allows us to reduce several complex computations in comparison with extragradient-like methods. It may also be seen that the second algorithm is computationally simpler than the first one. The strong convergence theorems have been established under suitable assumptions. Two preliminary numerical examples have been performed to illustrate the computational performance of the algorithms over previously known extragradient-like methods.

Acknowledgements The author would like to thank the Associate Editor and three anonymous referees for their valuable comments and suggestions which helped us very much in improving the original version of this paper. The guidance of Profs. P. K. Anh and L. D. Muu is gratefully acknowledged.

References [1] E. Blum, W. Oettli, From optimization and variational inequalities to equilibrium problems, Math. Stud. 63 (1994), pp. 123-145. [2] C. Byrne, Iterative oblique projection onto convex sets and the split feasibility problems, Inverse Prob. 18 (2002), pp. 441-453. [3] C. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Prob. 20 (2004), pp. 103-120. [4] Y. Censor, T. Elving, A multiprojections algorithm using Bregman projections in a product spaces, Numer. Algorithms 8 (1994), pp. 221-239. [5] Y. Censor, A. Segal, Iterative projection methods in biomedical inverse problems, In: Censor Y, Jiang M, Louis AK (eds) Mathematical methods in biomedical imaging and intensity-modulated therapy. IMRT, Edizioni della Norale, Pisa, (2008), pp. 65-96. [6] Y. Censor, T. Bortfeld, B. Martin, A. Trofimov, A unified approach for inversion problems in intensitymodulated radiation therapy, Phys. Med. Biol. 51 (2006), pp. 2353-2365. [7] Y. Censor, A. Gibali and S. Reich, Algorithms for the split variational inequality problem, Numer. Algorithms, 59 (2012), pp. 301-323. [8] S. Chang, L. Wang, X.R. Wang and G. Wang, General split equality equilibrium problems with application to split optimization problems, J. Optim. Theory Appl. 166 (2015), pp. 377-390.


[9] P. L. Combettes, S. A. Hirstoaga, Equilibrium programming in Hilbert spaces, J. Nonlinear Convex Anal. 6 (2005), pp. 117-136. [10] J. Contreras, M. Klusch, J. B. Krawczyk, Numerical solution to Nash-Cournot equilibria in coupled constraint electricity markets, EEE Trans. Power Syst. 19 (2004), pp. 195-206. [11] J. Deepho, W. Kumm and P. Kumm, A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems, J. Math. Model. Algor., 13 (2014), pp. 405-423. [12] J. Deepho, J. Martnez-Moreno, K. Sitthithakerngkiet and P. Kumam, Convergence analysis of hybrid projection with Ces` aro mean method for the split equilibrium and general system of finite variational inequalities, J. Comput. Appl. Math (2015). http://dx.doi.org/10.1016/j.cam.2015.10.006. [13] J. Deepho, P. Thounthong, P. Kumam, S. Phiangsungnoen, A new general iterative scheme for split variational inclusion and fixed point problems of k-strick pseudo-contraction mappings with convergence analysis, J. Comput. Appl. Math. (2016). http://dx.doi.org/10.1016/j.cam.2016.09.009. [14] J. Deepho, J. Martnez-Moreno, P. Kumam, A viscosity of Ces` aro mean approximation method for split generalized equilibrium, variational inequality and fixed point problems, J. Nonlinear Sci. Appl. 9 (2016), pp. 1475-1496. [15] Z. He, The split equilibrium problems and its convergence algorithms, J. Inequal. Appl. 2012:162 (2012). DOI: 10.1186/1029-242x-2012-162. [16] D. V. Hieu, Parallel extragradient-proximal methods for split equilibrium problems, Math. Model. Anal. 41 (2016), pp. 478-501. [17] D. V. Hieu, L. D. Muu, P. K. Anh, Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings, Numer. Algorithms, 73 (2016), pp. 197-217. [18] D. V. Hieu, An extension of hybrid method without extrapolation step to equilibrium problems, J. Ind. Manag. Optim. (2016). DOI:10.3934/jimo.2017015. [19] D. V. 
Hieu, Halpern subgradient extragradient method extended to equilibrium problems, Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales - Serie A: Matematicas (2016). DOI :10.1007/s13398-016-0328-9. [20] D. V. Hieu, P. K. Anh, L. D. Muu, Modified hybrid projection methods for finding common solutions to variational inequality problems, Comput. Optim. Appl. (2016). DOI: 10.1007/s10589-016-9857-6. [21] D. V. Hieu, Cyclic subgradient extragradient methods for equilibrium problems, Arabian J. Math. 5 (2016), pp. 159-175. [22] N. E. Hurt, Phase Retrieval and Zero Crossings: Mathematical Methods in Image Reconstruction, Kluwer Academic, Dordrecht, The Netherlands (1989). [23] C. Jaiboon, P. Kumam, Strong convergence theorems for solving equilibrium problems and fixed point problems of ξ-strict pseudocontraction mappings by two hybrid projection methods, J. Comput. Appl. Math. 234 (2010), pp. 722-732. [24] C. Jaiboon, P. Kumam, U. W. Humphries, Weak convergence theorem by an extragradient method for variaotinal inequality, equilibrium and fixed point problems, Bull. Malays. Math. Sci. Soc. 32(2) (2009), pp. 173-185. [25] C. Jaiboon, P. Kumam, A hybrid extragradient viscosity approximation method for solving equilibrium problems and fixed point problems of infinitely many nonexpansive mappings, Fixed Point Theory Appl. 2009 (2009), Article ID: 374815, 32 pages. [26] P. Katchang, T. Jitpeera, P. Kumam, Strong convergence theorems for solving generalized mixed equilibrium problems and general system of variational inequalities by the hybrid method, Nonlinear Anal.: Hybrid Systems 4 (2010), pp. 838-852. [27] K. R. Kazmi and S. H. Rizvi, Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem, J. Egyptian Math. Society, 21 (2013), pp. 44-51. [28] P. 
Kumam, A Hybrid approximation method for equilibrium and fixed point problems for a monotone mapping and a nonexpansive mapping, Nonlinear Anal.: Hybrid Systems 2(4) (2008), pp. 1245-1255. [29] P. Kumam, C. Jaiboon, A new hybrid iterative method for mixed equilibrium problems and variational inequality problem for relaxed cocoercive mappings with application to optimization problems, Nonlinear Anal.: Hybrid Systems 3 (2009), pp. 510-530. [30] P. Kumam, K. Wattanawitoon, Convergence theorems of a hybrid algorithm for equilibrium problems, Nonlinear Anal.: Hybrid Systems 3 (2009), pp. 386-394. [31] G. L´ opez, V. Mart´in-M´ arquez, F. Wang, H. K. Xu, Solving the split feasibility problem without prior knowledge of matrix norms, Inverse Prob. 28:085004 (2012), 18 pages. DOI:10.1088/02665611/28/8/085004. [32] A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl. 150 (2011), pp. 275-283.


[33] A. Moudafi, E. Al-Shemas, Simultaneously iterative methods for split equality problem, Trans. Math. Program. Appl. 1(2013), pp. 1-11. [34] A. Moudafi, A relaxed alternating CQ algorithm for convex feasibility problems. Nonlinear Anal. TMA 79 (2013), pp. 117-121. [35] A. Moudafi B. S. Thakur, Solving proximal split feasibility problems without prior knowledge of operator norms, Optim. Lett. 8 (2014), pp. 2099-2110. [36] L. D. Muu, W. Oettli, Convergence of an adative penalty scheme for finding constrained equilibria, Nonlinear Anal. TMA 18 (12) (1992), pp. 1159-1166. [37] T. D. Quoc, L. D. Muu, Iterative methods for solving monotone equilibrium problems via dual gap functions, Comput. Optim. Appl. 51 (2012), pp. 709-728. [38] P. Santos, S. Scheimberg, An inexact subgradient algorithm for equilibrium problems, Comput. Appl. Math. 30 (2011), pp. 91-107. [39] H. Stark (ed.), Image Recovery: Theory and Applications, Academic Press, Orlando (1987). [40] K. Sitthithakerngkiet, J. Deepho, P. Kumam, A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion and fixed point problems, Appl. Math. Comput. 250 (2015), pp. 986-1001. [41] P. T. Vuong, J. J. Strodiot, V. H. Nguyen, A gradient projection method for solving split equality and split feasibility problems in Hilbert spaces, Optimization 64 (2015), pp. 2321-2341. [42] H. K. Xu, Viscosity approximation methods for nonexpansive mappings, Math. Anal. Appl. 298 (2004), pp. 279-291. [43] K. Wattanawitoon, P. Kumam, Strong convergence theorem by a new hybrid algorithm for fixed point problems and equilibrium problems of two relatively quasi-nonexpansive mappings, Nonlinear Anal.: Hybrid Systems 3 (1) (2009), pp. 11-20. [44] D. J. Wen, Weak and strong convergence of hybrid subgradient method for pesudomonotone equilibrium problem and multivalued nonexpansive mappings, Fixed Point Theory Appl. 2014 (2014): 232. DOI:10.1186/1687-1812-2014-232.
