Numerical Methods for Solving Variational Inequalities and Complementarity Problems

Advances in Operations Research

Numerical Methods for Solving Variational Inequalities and Complementarity Problems Guest Editors: Abdellah Bnouhachem and Min Li


Copyright © 2012 Hindawi Publishing Corporation. All rights reserved. This is a special issue published in "Advances in Operations Research." All articles are open access articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Editorial Board Mahyar A. Amouzegar, USA I. L. Averbakh, Canada Xiaoqiang Cai, Hong Kong Karl F. Doerner, Austria Kevin C. Furman, USA Mitsuo Gen, Japan Ahmed Ghoniem, USA Walter J. Gutjahr, Austria Pierre Hansen, Canada Mhand Hifi, France J. J. Judice, Portugal

Imed Kacem, France Michael N. Katehakis, USA Ger Koole, The Netherlands Gilbert Laporte, Canada Guoyin Li, Australia Ching-Jong Liao, Taiwan Yi-Kuei Lin, Taiwan Marco Lübbecke, Germany Viliam Makis, Canada Silvano Martello, Italy Lars Mönch, Germany

Khosrow Moshirvaziri, USA P. R. Parthasarathy, India Carlos Romero, Spain Shey-Huei Sheu, Taiwan Wolfgang Stadje, Germany George Steiner, Canada Hsien-Chung Wu, Taiwan Shangyao Yan, Taiwan Yiqiang Zhao, Canada

Contents

Numerical Methods for Solving Variational Inequalities and Complementarity Problems, Abdellah Bnouhachem and Min Li
Volume 2012, Article ID 925915, 2 pages

Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method, M. H. Xu and H. Shao
Volume 2012, Article ID 357954, 15 pages

Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem, Min Li
Volume 2012, Article ID 483479, 17 pages

On a System of Generalized Mixed Equilibrium Problems Involving Variational-Like Inequalities in Banach Spaces: Existence and Algorithmic Aspects, H. Mahdioui and O. Chadli
Volume 2012, Article ID 843486, 18 pages

An Asymmetric Proximal Decomposition Method for Convex Programming with Linearly Coupling Constraints, Xiaoling Fu, Xiangfeng Wang, Haiyan Wang, and Ying Zhai
Volume 2012, Article ID 281396, 20 pages

Variational-Like Inequalities and Equilibrium Problems with Generalized Monotonicity in Banach Spaces, N. K. Mahato and C. Nahak
Volume 2012, Article ID 648070, 15 pages

Hindawi Publishing Corporation Advances in Operations Research Volume 2012, Article ID 925915, 2 pages doi:10.1155/2012/925915

Editorial

Numerical Methods for Solving Variational Inequalities and Complementarity Problems

Abdellah Bnouhachem^1 and Min Li^2

^1 Ibn Zohr University, BP 1136, ENSA, Agadir, Morocco
^2 School of Economics and Management, Southeast University, Nanjing 210096, China

Correspondence should be addressed to Abdellah Bnouhachem, [email protected]

Received 25 June 2012; Accepted 25 June 2012

Copyright © 2012 A. Bnouhachem and M. Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This special issue focuses on algorithmic approaches to solving variational inequality problems. It includes five papers selected after peer review. We hope that readers will find stimulating ideas and useful results in these papers. The five papers in this issue are summarized as follows.

The paper by X. Fu et al. proposes an asymmetric proximal decomposition method to solve a wide variety of separable problems by combining the ideas of the proximal point algorithm and the augmented Lagrangian method. Numerical experiments are employed to show the advantage of the proposed method.

The paper by N. K. Mahato and C. Nahak studies the existence of solutions for variational-like inequality problems under a new concept of relaxed (ρ − θ)-η-invariant pseudomonotone maps in reflexive Banach spaces.

The paper by Min Li considers the split feasibility problem, which is a special case of the multiple-sets split feasibility problem. With some new strategies for determining the optimal step length, this paper improves some known halfspace-relaxation projection methods for solving the split feasibility problem. Some preliminary computational results are given to illustrate the efficiency and implementation of the proposed method.

Based on the auxiliary principle technique and arguments from generalized convexity, the paper by H. Mahdioui and O. Chadli studies the existence and the algorithmic aspects of a system of generalized mixed equilibrium problems (SGMEP) involving variational-like inequalities in Banach spaces. A new existence theorem for the auxiliary problem is established, which leads to an algorithm that converges strongly to a solution of the SGMEP under weaker assumptions.

The paper by M. H. Xu and H. Shao considers the matrix nearness problem. Based on the relationship between the matrix nearness problem and the linear variational inequality,


a projection and contraction method is presented for solving the matrix nearness problem. Numerical results show that the method suggested in this paper has a good performance.

Acknowledgment

The guest editors would like to take this opportunity to express their sincere thanks to all the authors for their contributions to this special issue and to the reviewers for their comments and suggestions.

Abdellah Bnouhachem
Min Li

Hindawi Publishing Corporation Advances in Operations Research Volume 2012, Article ID 357954, 15 pages doi:10.1155/2012/357954

Research Article

Solving the Matrix Nearness Problem in the Maximum Norm by Applying a Projection and Contraction Method

M. H. Xu^1 and H. Shao^2

^1 School of Mathematics and Physics, Changzhou University, Jiangsu Province, Changzhou 213164, China
^2 Department of Mathematics, School of Sciences, China University of Mining and Technology, Xuzhou 221116, China

Correspondence should be addressed to M. H. Xu, [email protected]

Received 11 April 2012; Accepted 17 June 2012

Academic Editor: Abdellah Bnouhachem

Copyright © 2012 M. H. Xu and H. Shao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let S be a closed convex set of matrices and C a given matrix. The matrix nearness problem considered in this paper is to find a matrix X in the set S at which max{|x_ij − c_ij|} reaches its minimum value. In order to solve the matrix nearness problem, the problem is first reformulated as a min-max problem, and then the relationship between the min-max problem and a monotone linear variational inequality (LVI) is established. Since the matrix in the LVI problem has a special structure, a projection and contraction method is suggested to solve this LVI problem. Moreover, some implementation details of the method are presented. Finally, preliminary numerical results are reported, which show that this simple algorithm is promising for the matrix nearness problem.

1. Introduction

Let C = (c_ij) ∈ R^{n×n} be a given symmetric matrix and

S^n_Λ = { H ∈ R^{n×n} | H^T = H, λ_min I ⪯ H ⪯ λ_max I },   (1.1)

where λ_min, λ_max are given scalars with λ_min < λ_max, I is the identity matrix, and A ⪯ B denotes that B − A is a positive semidefinite matrix. It is clear that S^n_Λ is a nonempty closed convex set. The problem considered in this paper is

min{ ‖X − C‖_∞ | X = (x_ij) ∈ S^n_Λ },   (1.2)

where

‖X − C‖_∞ = max{ |x_ij − c_ij| : i = 1, ..., n, j = 1, ..., n }.   (1.3)

Throughout the paper we assume that the solution set of problem (1.2) is nonempty. Note that when λ_min = 0 and λ_max = +∞, the set S^n_Λ reduces to the semidefinite cone

S^n = { H ∈ R^{n×n} | H^T = H, H ⪰ 0 }.   (1.4)

Using the terminology of interior-point methods, S^n is called the semidefinite cone, and thus the related problem belongs to the class of semidefinite programming [1]. Problem (1.2) can be viewed as a type of matrix nearness problem, that is, the problem of finding a matrix that satisfies some property and is nearest to a given one. A survey on matrix nearness problems can be found in [2]. Matrix nearness problems have many applications, especially in finance, statistics, and compressive sensing. For example, one application in statistics is to make adjustments to a symmetric matrix so that it is consistent with prior knowledge or assumptions and is a valid covariance matrix [3–8]. The paper [9] discusses a new class of matrix nearness problems that measure approximation error using a directed distance measure called a Bregman divergence, proposes a framework for studying these problems, discusses some specific matrix nearness problems, and provides algorithms for solving them numerically. Note that the norm used in this paper differs from the one used in the published papers [3–7]; this makes the objective function of problem (1.2) nonsmooth, so problem (1.2) cannot be solved very easily. In the next section, we will see that problem (1.2) can be converted into a linear variational inequality and thus can be solved effectively with the projection and contraction (PC) method, which is extremely simple both in theoretical analysis and in numerical implementation [10–15].

The paper is organized as follows. The relationship between the matrix nearness problem considered in this paper and a monotone linear variational inequality (LVI) is established in Section 2. In Section 3, some preliminaries on variational inequalities are summarized, and the projection and contraction method for the LVI associated with the considered problem is suggested. In Section 4, the implementation details for applying the projection and contraction method to the matrix optimization problem are studied. Preliminary numerical results and some concluding remarks are reported in Sections 5 and 6, respectively.

2. Reformulating the Problem as a Monotone LVI

For any d ∈ R^m, we have

‖d‖_∞ = max_{ξ∈B_1} ξ^T d,   (2.1)

where B_1 = { ξ ∈ R^m | ‖ξ‖_1 ≤ 1 } and ξ^T d is the Euclidean inner product of ξ and d.
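The identity (2.1) is the standard duality between the ℓ_∞ and ℓ_1 norms: the maximizer puts all its mass on a largest-magnitude component of d. A quick numerical check (a sketch with illustrative data, not from the paper):

```python
import numpy as np

d = np.array([3.0, -7.0, 2.0])

# The maximizer of xi^T d over the l1 ball puts all mass on the
# largest-magnitude component of d, with a matching sign.
i = np.argmax(np.abs(d))
xi = np.zeros_like(d)
xi[i] = np.sign(d[i])

assert abs(xi @ d - np.linalg.norm(d, np.inf)) < 1e-12  # both equal 7
```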


In order to simplify the following descriptions, let vec(A) be the linear transformation which converts the matrix A ∈ R^{l×n} into a column vector in R^{ln} obtained by stacking the columns of A on top of one another, that is,

vec(A) = (a_11, ..., a_l1, a_12, ..., a_l2, ..., a_1n, ..., a_ln)^T,   (2.2)

and let mat(vec(A)) be the original matrix A, that is,

mat(vec(A)) = A.   (2.3)
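In NumPy terms, vec(·) is column-major (Fortran-order) flattening and mat(·) is its inverse; a small sketch (array contents are illustrative):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)      # [[0, 1, 2], [3, 4, 5]]

vec_A = A.flatten(order="F")          # vec(A): stack columns
back = vec_A.reshape(2, 3, order="F") # mat(vec(A)) = A

assert np.array_equal(vec_A, np.array([0.0, 3.0, 1.0, 4.0, 2.0, 5.0]))
assert np.array_equal(back, A)
```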

Based on (2.1) and the fact that the matrices X and C are symmetric, problem (1.2) can be rewritten as the following min-max problem:

min_{X∈S^n_Λ} max_{Z∈B} vec(Z)^T ( vec(X) − vec(C) ),   (2.4)

where

B = { Z ∈ R^{n×n} | Z = Z^T, Σ_{i=1}^n Σ_{j=1}^n |z_ij| ≤ 1 }.   (2.5)

Remark 2.1. Since X and C are both symmetric matrices, we can restrict the matrices in the set B to be symmetric.

Let

Ω_1 = { x = vec(X) | X ∈ S^n_Λ },   (2.6)

Ω_2 = { z = vec(Z) | Z ∈ B }.   (2.7)

Since S^n_Λ and B are both convex sets, it is easy to prove that Ω_1 and Ω_2 are also convex sets. Let (X*, Z*) ∈ S^n_Λ × B be any solution of (2.4) and x* = vec(X*), z* = vec(Z*). Then

z^T (x* − c) ≤ (z*)^T (x* − c) ≤ (z*)^T (x − c),  ∀(x, z) ∈ Ω,   (2.8)

where c = vec(C) and Ω = Ω_1 × Ω_2. Thus, (x*, z*) is a solution of the following variational inequality: find (x*, z*) ∈ Ω such that

(x − x*)^T z* ≥ 0,
(z − z*)^T (−x* + c) ≥ 0,  ∀(x, z) ∈ Ω.   (2.9)

For convenience of the coming analysis, we rewrite the linear variational inequality (2.9) in the following compact form: find u* ∈ Ω such that

(u − u*)^T ( M u* + q ) ≥ 0,  ∀u ∈ Ω,   (2.10)

where

u* = [x*; z*],  u = [x; z],  M = [0 I; −I 0],  q = [0; c].   (2.11)

In the following, we denote the linear variational inequality (2.10)-(2.11) by LVI(Ω, M, q).

Remark 2.2. Since M is skew-symmetric, the linear variational inequality LVI(Ω, M, q) is monotone.
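The block structure of M in (2.11) and the skew-symmetry that Remark 2.2 relies on are easy to verify numerically; a sketch (the size is illustrative, not from the paper):

```python
import numpy as np

n2 = 4  # n^2 for a 2x2 matrix problem (illustrative size)
I = np.eye(n2)
M = np.block([[np.zeros((n2, n2)), I],
              [-I, np.zeros((n2, n2))]])

# Skew-symmetry M^T = -M implies u^T M u = 0 for every u, which is
# exactly monotonicity of the linear mapping u -> M u + q.
assert np.allclose(M.T, -M)
u = np.random.default_rng(0).standard_normal(2 * n2)
assert abs(u @ M @ u) < 1e-12
```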

3. Projection and Contraction Method for Monotone LVIs

In this section, we summarize some important concepts and preliminary results which are useful in the coming analysis.

3.1. Projection Mapping

Let Ω be a nonempty closed convex set of R^m. For a given v ∈ R^m, the projection of v onto Ω, denoted by P_Ω(v), is the unique solution of the following problem:

min{ ‖u − v‖ | u ∈ Ω },   (3.1)

where ‖·‖ is the Euclidean norm. The projection under the Euclidean norm plays an important role in the proposed method. A basic property of the projection mapping onto a closed convex set is

( v − P_Ω(v) )^T ( u − P_Ω(v) ) ≤ 0,  ∀v ∈ R^m, ∀u ∈ Ω.   (3.2)

In many cases of practical applications, the closed convex set Ω has a simple structure, and the projection onto Ω is easy to carry out. For example, let e be the vector whose elements are all 1, and

B_∞ = { ξ ∈ R^m | ‖ξ‖_∞ ≤ 1 }.   (3.3)

Then the projection of a vector d ∈ R^m onto B_∞ can be obtained by

P_{B_∞}(d) = max{ −e, min{d, e} },   (3.4)

where the min and max are componentwise.
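In NumPy, the componentwise formula (3.4) is a single clip; a sketch (data are illustrative):

```python
import numpy as np

def proj_box(d):
    """Projection onto B_inf = {xi : ||xi||_inf <= 1}, formula (3.4)."""
    return np.clip(d, -1.0, 1.0)

d = np.array([1.7, -0.3, -2.5])
assert np.array_equal(proj_box(d), np.array([1.0, -0.3, -1.0]))
```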


3.2. Preliminaries on Linear Variational Inequalities

We denote the solution set of LVI(Ω, M, q) (2.10) by Ω* and assume that Ω* ≠ ∅. Since the early work of Eaves [16], it is well known that the variational inequality LVI(Ω, M, q) is equivalent to the following projection equation:

u = P_Ω( u − (Mu + q) ).   (3.5)

In other words, solving LVI(Ω, M, q) is equivalent to finding a zero point of the continuous residual function

e(u) := u − P_Ω( u − (Mu + q) ).   (3.6)

Hence,

e(u) = 0 ⟺ u ∈ Ω*.   (3.7)

In the literature of variational inequalities, ‖e(u)‖ is called the error bound of the LVI. It quantitatively measures how much u fails to be in Ω*.

3.3. The Projection and Contraction Method

Let u* ∈ Ω* be a solution. For any u ∈ R^m, because P_Ω(u − (Mu + q)) ∈ Ω, it follows from (2.10) that

( P_Ω(u − (Mu + q)) − u* )^T ( Mu* + q ) ≥ 0,  ∀u ∈ R^m.   (3.8)

By setting v = u − (Mu + q) and u = u* in (3.2), we have

( P_Ω(u − (Mu + q)) − u* )^T ( u − (Mu + q) − P_Ω(u − (Mu + q)) ) ≥ 0,  ∀u ∈ R^m.   (3.9)

Adding the above two inequalities and using the notation e(u), we obtain

{ (u − u*) − e(u) }^T { e(u) − M(u − u*) } ≥ 0,  ∀u ∈ R^m.   (3.10)

For a positive semidefinite (not necessarily symmetric) matrix M, the following theorem follows from (3.10) directly.

Theorem 3.1 (Theorem 1 in [11]). For any u* ∈ Ω*, we have

(u − u*)^T d(u) ≥ ‖e(u)‖²,  ∀u ∈ R^m,   (3.11)

where

d(u) = M^T e(u) + e(u).   (3.12)


For u ∈ Ω \ Ω*, it follows from (3.11)-(3.12) that −d(u) is a descent direction of the unknown function (1/2)‖u − u*‖². Some practical PC methods for LVIs based on the direction d(u) are given in [11].

Algorithm 3.2 (projection and contraction method for LVI problem (2.10)). Given u^0 ∈ Ω. For k = 0, 1, ..., if u^k ∉ Ω*, then do

u^{k+1} = u^k − γ α(u^k) d(u^k),   (3.13)

where γ ∈ (0, 2), d(u) is defined in (3.12), and

α(u) = ‖e(u)‖² / ‖d(u)‖².   (3.14)
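A generic implementation of Algorithm 3.2 only needs M, q, and a projection oracle for Ω. The following sketch illustrates the iteration (3.13)-(3.14); the toy box-constrained instance at the end is illustrative data of ours, not from the paper:

```python
import numpy as np

def pc_solve(M, q, proj, u0, gamma=1.5, tol=1e-10, max_it=10_000):
    """Projection and contraction method (3.13)-(3.14) for LVI(Omega, M, q)."""
    u = u0.astype(float)
    for _ in range(max_it):
        e = u - proj(u - (M @ u + q))      # residual e(u), cf. (3.6)
        if np.linalg.norm(e) < tol:
            break
        d = M.T @ e + e                    # direction d(u), cf. (3.12)
        alpha = (e @ e) / (d @ d)          # step length, cf. (3.14)
        u = u - gamma * alpha * d          # update, cf. (3.13)
    return u

# Toy monotone LVI on the box [0, 1]^2 (illustrative data).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])    # positive definite, asymmetric
q = np.array([-1.0, -1.0])
proj = lambda v: np.clip(v, 0.0, 1.0)
u = pc_solve(M, q, proj, np.zeros(2))
e = u - proj(u - (M @ u + q))
assert np.linalg.norm(e) < 1e-8            # u approximately solves the LVI
```

For this instance the solution lies in the interior of the box, so u converges to the point where Mu + q = 0, namely (0.2, 0.6).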

Remark 3.3. In fact, γ is a relaxation factor, and it is recommended to take it from [1, 2); in practical computation, usually we take γ ∈ [1.2, 1.9]. The method was first mentioned in [11]. Among the PC methods for asymmetric LVI(Ω, M, q) [10–13], this method makes just one projection in each iteration. For completeness' sake, we include the contraction theorem for LVI (2.10) and its proof.

Theorem 3.4 (Theorem 2 in [11]). The method (3.13)-(3.14) produces a sequence {u^k} which satisfies

‖u^{k+1} − u*‖² ≤ ‖u^k − u*‖² − γ(2 − γ) α(u^k) ‖e(u^k)‖².   (3.15)

Proof. It follows from (3.11) and (3.14) that

‖u^{k+1} − u*‖² = ‖u^k − u* − γ α(u^k) d(u^k)‖²
 = ‖u^k − u*‖² − 2γ α(u^k) (u^k − u*)^T d(u^k) + γ² α²(u^k) ‖d(u^k)‖²
 ≤ ‖u^k − u*‖² − 2γ α(u^k) ‖e(u^k)‖² + γ² α²(u^k) ‖d(u^k)‖²   (3.16)
 = ‖u^k − u*‖² − γ(2 − γ) α(u^k) ‖e(u^k)‖².

Thus the theorem is proved.

The method used in this paper is called a projection and contraction method because it makes projections in each iteration and the generated sequence is Fejér monotone with respect to the solution set. For the skew-symmetric M in LVI (2.10), it is easy to prove that α(u) ≡ 1/2. Thus, the contraction inequality (3.15) can be simplified to

‖u^{k+1} − u*‖² ≤ ‖u^k − u*‖² − ( γ(2 − γ)/2 ) ‖e(u^k)‖².   (3.17)


Following (3.17), the convergence results for the sequence {u^k} can be found in [11] or proved similarly to that in [17]. Since the above inequality is true for all u* ∈ Ω*, we have

dist²(u^{k+1}, Ω*) ≤ dist²(u^k, Ω*) − ( γ(2 − γ)/2 ) ‖e(u^k)‖²,   (3.18)

where

dist(u, Ω*) = min{ ‖u − u*‖ | u* ∈ Ω* }.   (3.19)

The above inequality states that we get a "great" profit from the kth iteration if ‖e(u^k)‖ is not too small; conversely, if we get a very small profit from the kth iteration, then ‖e(u^k)‖ is already very small and u^k is a "sufficiently good" approximation of a u* ∈ Ω*.

4. Implementing Details of Algorithm 3.2 for Solving Problem (1.2)

We use Algorithm 3.2 to solve the linear variational inequality (2.10) arising from problem (1.2). For a given u = (x; z) ∈ Ω, the process of computing a new iterate is as follows:

Mu + q = [z; c − x],

e(u) = [e_x(u); e_z(u)] = [ x − P_{Ω_1}(x − z); z − P_{Ω_2}(z + x − c) ],

d(u) = [d_x(u); d_z(u)] = M^T e(u) + e(u) = [ e_x(u) − e_z(u); e_x(u) + e_z(u) ],

u_new = [ x_new; z_new ] = [ x − γ d_x(u)/2; z − γ d_z(u)/2 ],  γ ∈ (0, 2).   (4.1)

The key operations here are to compute P_{Ω_1}(x − z) and P_{Ω_2}(z + x − c), where mat(x), mat(z), and mat(z + x − c) are symmetric matrices. In the following, we first focus on the computation of P_{Ω_1}(v), where mat(v) is a symmetric matrix. Since mat(v) is symmetric, we have

P_{Ω_1}(v) = arg min{ ‖x − v‖² | x ∈ Ω_1 } = vec( arg min{ ‖X − V‖²_F | X ∈ S^n_Λ } ),   (4.2)

where V = mat(v). It is known that the optimal solution of the problem

min{ ‖X − V‖²_F | X ∈ S^n_Λ }   (4.3)


is given by

X̂ = Q Λ̂ Q^T,   (4.4)

where

V Q = Q Λ,  Λ = diag(λ_11, ..., λ_nn),
Λ̂ = diag(λ̂_11, ..., λ̂_nn),  λ̂_ii = min{ max{λ_ii, λ_min}, λ_max },   (4.5)

and ‖·‖_F is the Frobenius norm. Thus, we have

P_{Ω_1}(v) = vec( Q Λ̂ Q^T ).   (4.6)
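Formulas (4.4)-(4.6) amount to clipping the spectrum of mat(v) into [λ_min, λ_max]; a NumPy sketch (the function name and data are ours):

```python
import numpy as np

def proj_spectral_box(V, lam_min, lam_max):
    """P_{Omega_1}: clip the eigenvalues of symmetric V into
    [lam_min, lam_max], cf. (4.4)-(4.6)."""
    lam, Q = np.linalg.eigh(V)                 # V Q = Q Lambda
    lam_hat = np.clip(lam, lam_min, lam_max)   # Lambda-hat
    return Q @ np.diag(lam_hat) @ Q.T

V = np.array([[0.0, 3.0], [3.0, 0.0]])         # eigenvalues -3 and 3
X = proj_spectral_box(V, -1.0, 1.0)
w = np.linalg.eigvalsh(X)
assert np.all(w >= -1.0 - 1e-12) and np.all(w <= 1.0 + 1e-12)
assert np.allclose(X, X.T)
```

Here both eigenvalues of V get clipped to ±1, so the result is [[0, 1], [1, 0]].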

Now we consider the technique of computing P_{Ω_2}(v), where mat(v) is a symmetric matrix.

Lemma 4.1. If mat(v) is a symmetric matrix and x* is the solution of the problem

min{ ‖x − v‖ | Σ_{i=1}^m |x_i| ≤ 1 },   (4.7)

then we have

mat(x*) = ( mat(x*) )^T.   (4.8)

Proof. Since

arg min{ ‖x − v‖ | Σ_{i=1}^m |x_i| ≤ 1 } = vec( arg min{ ‖X − V‖_F | Σ_{i=1}^n Σ_{j=1}^n |x_ij| ≤ 1 } )   (4.9)

and V = V^T, where V = mat(v), we have that if x* is the solution of problem (4.7), then vec( (mat(x*))^T ) is also a solution of problem (4.7). As the solution of problem (4.7) is known to be unique, x* = vec( (mat(x*))^T ), and the proof is complete.

Lemma 4.2. If x* is the solution of the problem

min{ ‖x − sign(v)∘v‖ | Σ_{i=1}^m x_i ≤ 1, x ≥ 0 },   (4.10)

where sign(v) = ( sign(v_1), sign(v_2), ..., sign(v_m) )^T, sign(v_i) is the sign of the real number v_i, and

sign(v)∘v = ( sign(v_1)v_1, sign(v_2)v_2, ..., sign(v_m)v_m )^T = |v|,   (4.11)

then sign(v)∘x* is the solution of the problem

min{ ‖x − v‖ | Σ_{i=1}^m |x_i| ≤ 1 }.   (4.12)

Proof. The result follows from

‖sign(v)∘x − v‖ = ‖x − sign(v)∘v‖,  Σ_{i=1}^m |sign(v_i) x_i| ≤ Σ_{i=1}^m |x_i| ≤ 1.   (4.13)

Lemma 4.3. Let v ≥ 0 and let T be a permutation transformation sorting the components of v in descending order, that is, the components of v̄ = Tv are in descending order. Further, suppose that x̄* is the solution of the problem

min{ ‖x − v̄‖ | Σ_{i=1}^m x_i ≤ 1 },   (4.14)

then x* = T^{−1} x̄* is the solution of the problem

min{ ‖x − v‖ | Σ_{i=1}^m x_i ≤ 1 }.   (4.15)

Proof. Since T is a permutation transformation, we have

T^{−1} = T^T,   (4.16)

and the optimal values of the objective functions of problems (4.14) and (4.15) are equal. Note that

‖x* − v‖ = ‖T(x* − v)‖ = ‖x̄* − v̄‖,  Σ_{i=1}^m x*_i = Σ_{i=1}^m x̄*_i ≤ 1.   (4.17)

Thus, x* is the optimal solution of problem (4.15), and the proof is complete.

Remark 4.4. Suppose that mat(v) ∈ R^{n×n} is a symmetric matrix for a given v ∈ R^m. Let v̄ = T( sign(v)∘v ), where T is a permutation transformation sorting the components of sign(v)∘v in descending order. Lemmas 4.1–4.3 show that if x̄* is the solution of the following problem

min{ (1/2)‖x − v̄‖² | Σ_{i=1}^m x_i ≤ 1, x ≥ 0 },   (4.18)

then

P_{Ω_2}(v) = sign(v)∘( T^T x̄* ).   (4.19)

Hence, solving problem (4.18) is the key to obtaining the projection P_{Ω_2}(v). Let e = (1, 1, ..., 1)^T ∈ R^m; then problem (4.18) can be rewritten as

min{ (1/2)‖x − v̄‖² | e^T x ≤ 1, x ≥ 0 }.   (4.20)

The Lagrangian function for the constrained optimization problem (4.20) is defined as

L(x, y, w) = (1/2)‖x − v̄‖² − y( 1 − e^T x ) − w^T x,   (4.21)

where the scalar y and the vector w ∈ R^m are the Lagrange multipliers corresponding to the inequalities e^T x ≤ 1 and x ≥ 0, respectively. By the KKT conditions, we have

x − v̄ + ye − w = 0,
y ≥ 0,  1 − e^T x ≥ 0,  y( 1 − e^T x ) = 0,   (4.22)
w ≥ 0,  x ≥ 0,  w^T x = 0,

that is,

y ≥ 0,  1 − e^T x ≥ 0,  y( 1 − e^T x ) = 0,
x ≥ 0,  x + ye − v̄ ≥ 0,  x^T ( x + ye − v̄ ) = 0.   (4.23)

It is easy to check that if e^T v̄ ≤ 1, then (x̄*, y*) = (v̄, 0) is the solution of problem (4.23). Now we assume that e^T v̄ > 1. In this case, let

Δv̄ = ( v̄_1 − v̄_2, v̄_2 − v̄_3, ..., v̄_{m−1} − v̄_m, v̄_m )^T.   (4.24)

Note that Δv̄ ≥ 0 and

Σ_{i=1}^m i × Δv̄_i = Σ_{i=1}^m v̄_i > 1.   (4.25)

Thus, there exists a least integer K such that

Σ_{i=1}^K i × Δv̄_i ≥ 1.   (4.26)

Since

Σ_{i=1}^K v̄_i = Σ_{i=1}^K i × Δv̄_i + K × v̄_{K+1}  for 1 ≤ K < m,
Σ_{i=1}^K v̄_i = Σ_{i=1}^K i × Δv̄_i  for K = m,   (4.27)

we have that

Σ_{i=1}^K v̄_i ≥ 1.   (4.28)

Let

y* = (1/K)( Σ_{i=1}^K v̄_i − 1 ),   (4.29)

x̄* = ( v̄_1 − y*, v̄_2 − y*, ..., v̄_K − y*, 0, ..., 0 )^T ∈ R^m.   (4.30)
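Formulas (4.26), (4.29), and (4.30) are the familiar sort-based projection onto the simplex; a NumPy sketch under the stated assumptions (v̄ sorted in descending order, nonnegative, with e^T v̄ > 1; the function name is ours):

```python
import numpy as np

def proj_simplex_sorted(v_bar):
    """Project sorted v_bar onto {x : sum(x) <= 1, x >= 0},
    via K from (4.26)-(4.27), y* from (4.29), x* from (4.30)."""
    m = len(v_bar)
    css = np.cumsum(v_bar)
    # By (4.27), sum_{i<=K} i*Delta(v_bar)_i = css[K-1] - K*v_bar_{K+1}
    # (with v_bar_{m+1} = 0), so (4.26) becomes the test below.
    v_next = np.append(v_bar[1:], 0.0)
    K = int(np.argmax(css - np.arange(1, m + 1) * v_next >= 1.0)) + 1
    y = (css[K - 1] - 1.0) / K            # (4.29)
    return np.maximum(v_bar - y, 0.0)     # (4.30)

v_bar = np.array([0.9, 0.6, 0.1])         # sorted, sum = 1.6 > 1
x = proj_simplex_sorted(v_bar)
assert abs(x.sum() - 1.0) < 1e-12 and np.all(x >= 0)
```

For this v̄, K = 2 and y* = 0.25, giving x̄* = (0.65, 0.35, 0).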

Theorem 4.5. Let x̄* and y* be given by (4.30) and (4.29), respectively; then (x̄*, y*) is the solution of problem (4.23), and thus P_{Ω_2}(v) = sign(v)∘( T^T x̄* ).

Proof. It follows from (4.26), (4.27), and (4.29) that

y* = (1/K)( Σ_{i=1}^K v̄_i − 1 ) ≥ (1/K)( Σ_{i=1}^K v̄_i − Σ_{i=1}^K i × Δv̄_i ) = v̄_{K+1} if 1 ≤ K < m, and = 0 if K = m,   (4.31)

y* = (1/K)( Σ_{i=1}^K v̄_i − 1 ) ≤ (1/K)( Σ_{i=1}^K v̄_i − Σ_{i=1}^{K−1} i × Δv̄_i )
 = (1/K)( Σ_{i=1}^{K−1} v̄_i − Σ_{i=1}^{K−1} i × Δv̄_i + v̄_K ) = v̄_K.   (4.32)

Following (4.31)-(4.32), it is easy to check that (x̄*, y*) is a solution of problem (4.23). Note that problem (4.18) is convex; thus x̄* is the solution of problem (4.18). Further, according to Remark 4.4, we have

P_{Ω_2}(v) = sign(v)∘( T^T x̄* ).   (4.33)

The proof is complete.

Remark 4.6. Note that if X̂* is the solution of the problem

min{ ‖X − (1/β)C‖_∞ | X = (x_ij) ∈ S̄^n_Λ },   (4.34)

where β > 0 is a given scalar and

S̄^n_Λ = { H ∈ R^{n×n} | H^T = H, (λ_min/β) I ⪯ H ⪯ (λ_max/β) I },   (4.35)

then X* = β X̂* is the solution of problem (1.2). Thus, we can find the solution of problem (1.2) by solving problem (4.34). Let

Ω̃_1 = { x = vec(X) | X ∈ S̄^n_Λ }.   (4.36)

Now we are in a position to describe the implementation of Algorithm 3.2 for problem (1.2) in detail.

Algorithm 4.7 (projection and contraction method for problem (1.2)).

Step 1 (initialization). Let C = (c_ij) ∈ R^{n×n} be a given symmetric matrix, and let λ_min, λ_max, β > 0 be given scalars. Choose arbitrarily an initial point x^0 = vec(X^0), z^0 = vec(Z^0), and set u^0 = (x^0; z^0), where x^0 ∈ Ω̃_1, z^0 ∈ Ω_2, and Ω̃_1, Ω_2 are defined by (4.36) and (2.7), respectively. Let γ ∈ (0, 2), k = 0, and let ε > 0 be a prespecified tolerance.

Step 2 (computation). Compute P_{Ω̃_1}(x^k − z^k) and P_{Ω_2}(z^k + x^k − c) by using (4.6) and (4.33), respectively. Let

e_x(u^k) = x^k − P_{Ω̃_1}(x^k − z^k),  e_z(u^k) = z^k − P_{Ω_2}(z^k + x^k − c),

d_x(u^k) = e_x(u^k) − e_z(u^k),  d_z(u^k) = e_x(u^k) + e_z(u^k),

where u^k = (x^k; z^k).

Step 3 (verification). If ‖e(u^k)‖_∞ < ε, then stop and output the approximate solution X = β mat(x^k), where e(u^k) = (e_x(u^k); e_z(u^k)).

Step 4 (iteration). Set x^{k+1} = x^k − γ d_x(u^k)/2, z^{k+1} = z^k − γ d_z(u^k)/2, k := k + 1, and go to Step 2.

Table 1: Numerical results of Example 5.1.

  n    λ_1      λ_n       k    CPU (s)   ‖e(u)‖_∞         ‖vec(X − C)‖_∞ / ‖vec(C)‖_∞
  10   0.2947   2.2687    55     0.06    9.2699 × 10^−8   1.9768 × 10^−7
  20  −0.5849   2.8134    58     0.17    7.3282 × 10^−8   2.7002 × 10^−7
  30  −0.9467   2.9938    65     0.35    7.5651 × 10^−8   2.8008 × 10^−7
  40  −1.2405   3.4024    75     0.49    9.6325 × 10^−8   5.0350 × 10^−7
  50  −1.7593   3.7925    89     0.67    8.5592 × 10^−8   3.9993 × 10^−7
  60  −2.0733   4.1648   106     1.29    8.9517 × 10^−8   6.1197 × 10^−7
  70  −2.3450   4.3352   121     2.00    9.5952 × 10^−8   6.6696 × 10^−7
  80  −2.5817   4.5194   140     3.56    8.3927 × 10^−8   3.7658 × 10^−7
  90  −2.8345   4.7512   160     5.23    8.3227 × 10^−8   4.8914 × 10^−7
 100  −2.9193   5.1780   181     7.43    9.3473 × 10^−8   6.2160 × 10^−7
 150  −4.1850   5.7006   302    35.90    7.4435 × 10^−8   6.1552 × 10^−7
 200  −4.6721   6.7545   446   119.07    7.8171 × 10^−8   8.4093 × 10^−7

Table 2: Numerical results of Example 5.2.

  n    λ_1      λ_n         k    CPU (s)   ‖e(u)‖_∞         ‖vec(X − C)‖_∞ / ‖vec(C)‖_∞
  10   0.0090     9.0559    74     0.08    9.9036 × 10^−8   4.2676 × 10^−9
  20   0.0048    19.4004   107     0.30    9.4778 × 10^−8   4.2266 × 10^−7
  30   0.0184    33.6477  7617    17.98    9.9976 × 10^−8   7.0348 × 10^−2
  40   0.0004    46.4279   162     0.76    9.9938 × 10^−8   1.2216 × 10^−1
  50   0.0045    55.7072   192     1.46    9.4131 × 10^−8   1.8517 × 10^−1
  60   0.0020    73.8863  1259    15.18    9.9708 × 10^−8   2.4254 × 10^−1
  70   0.0084    86.8784   245     4.28    8.9912 × 10^−8   3.3916 × 10^−1
  80   0.0021   100.3238   614    14.13    9.0897 × 10^−8   3.8534 × 10^−1
  90   0.0002   108.6724   384    10.38    7.7490 × 10^−8   4.7801 × 10^−1
 100   0.0011   130.7923   326     9.03    9.0180 × 10^−8   5.2592 × 10^−1
 150   0.0001   195.6736  2214    98.28    7.0598 × 10^−8   6.6452 × 10^−1
 200   0.0013   261.9417  1582   165.29    7.1460 × 10^−8   7.4489 × 10^−1
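The procedure just described, two projections and a half-step update per iteration, can be sketched in Python/NumPy as follows. This is a port of ours, not the authors' Matlab code; the function names and the scaling choice for β are ours (any β > 0 is valid by Remark 4.6), and the ℓ1-ball projection uses the standard sort-based routine equivalent to (4.29)-(4.30):

```python
import numpy as np

def proj_l1_ball(v):
    """P_{Omega_2}: project v onto {z : sum_i |z_i| <= 1} via the sign/sort
    reduction of Lemmas 4.1-4.3 and the simplex projection (4.29)-(4.30)."""
    a = np.abs(v)
    if a.sum() <= 1.0:
        return v.copy()
    s = np.sort(a)[::-1]                       # descending
    css = np.cumsum(s)
    j = np.arange(1, len(v) + 1)
    K = np.nonzero(s - (css - 1.0) / j > 0)[0][-1] + 1
    y = (css[K - 1] - 1.0) / K                 # threshold, cf. (4.29)
    return np.sign(v) * np.maximum(a - y, 0.0)

def proj_spectral_box(V, lmin, lmax):
    """P_{Omega_1}: clip the eigenvalues of symmetric V, cf. (4.4)-(4.6)."""
    V = (V + V.T) / 2.0                        # guard against round-off
    lam, Q = np.linalg.eigh(V)
    return Q @ np.diag(np.clip(lam, lmin, lmax)) @ Q.T

def nearest_matrix(C, lmin, lmax, gamma=1.5, eps=1e-7, max_it=50_000):
    """PC method (Algorithm 4.7) for problem (1.2), with the rescaling of
    Remark 4.6; the choice of beta here is ours."""
    n = C.shape[0]
    beta = np.sqrt(n) * np.abs(C).max()
    c = (C / beta).flatten(order="F")
    x = proj_spectral_box(C / beta, lmin / beta, lmax / beta).flatten(order="F")
    z = np.zeros(n * n)
    for _ in range(max_it):
        ex = x - proj_spectral_box((x - z).reshape(n, n, order="F"),
                                   lmin / beta, lmax / beta).flatten(order="F")
        ez = z - proj_l1_ball(z + x - c)
        if max(np.abs(ex).max(), np.abs(ez).max()) < eps:   # Step 3
            break
        x -= gamma * (ex - ez) / 2.0                        # Step 4
        z -= gamma * (ex + ez) / 2.0
    return beta * x.reshape(n, n, order="F")

# Small illustrative instance: nearest matrix to diag(2, -2) with
# eigenvalues in [-1, 1]; the optimal max-norm distance is 1.
C = np.array([[2.0, 0.0], [0.0, -2.0]])
X = nearest_matrix(C, -1.0, 1.0)
w = np.linalg.eigvalsh(X)
assert np.all(w >= -1.0 - 1e-3) and np.all(w <= 1.0 + 1e-3)
assert abs(np.abs(X - C).max() - 1.0) < 1e-3
```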

5. Numerical Experiments

In this section, some examples are provided to illustrate the performance of Algorithm 4.7 for solving problem (1.2). In the following illustrative examples, the computer program implementing Algorithm 4.7 is coded in Matlab and runs on an IBM R51 notebook.

Example 5.1. Consider problem (1.2) with C = (C̃ + C̃^T)/2 + eye(n), λ_min = −50, and λ_max = 50, where C̃ = rand(n) − 0.5, rand and eye are both Matlab functions, and n is the size of problem (1.2). Let γ = 1.5, ε = 1 × 10^−7; then we take

β = √( n ‖vec(C)‖_∞ ),  x^0 = vec( ((λ̃_1 + λ̃_2)/2) × eye(n) ),  z^0 = vec( zeros(n) ),   (5.1)

where zeros is also a Matlab function, λ̃_1 = max{λ_1/β, λ_min/β}, λ̃_2 = min{λ_n/β, λ_max/β}, and λ_1, λ_n are the smallest and largest eigenvalues of the matrix C, respectively. Table 1 reports the numerical results of Example 5.1 solved by Algorithm 4.7, where k is the number of iterations, the unit of CPU time is the second, and X is the approximate solution of problem (1.2) obtained by Algorithm 4.7.

Example 5.2. Consider problem (1.2) with C = C̃ + C̃^T, λ_min = 0, and λ_max = 20, where C̃ = 2 × rand(n) − 1. In this test example, γ and ε are the same as in Example 5.1, and β, x^0, and z^0 are also given according to (5.1). Table 2 reports the numerical results of Example 5.2 and shows the numerical performance of Algorithm 4.7 for solving problem (1.2).

6. Conclusions

In this paper, a relationship between the matrix nearness problem and a linear variational inequality has been established. The matrix nearness problem considered in this paper can thus be solved by applying an algorithm for the related linear variational inequality. Based on this observation, a projection and contraction method is presented for solving the matrix nearness problem, and its implementation details are described. Numerical experiments show that the suggested method performs well and that it can be improved by setting the parameters in Algorithm 4.7 properly. Thus, further study of the effect of the parameters in Algorithm 4.7 may be an interesting topic.

Acknowledgments

This research is financially supported by a research grant from the Research Grant Council of China (Project no. 10971095).

References

[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] N. J. Higham, "Matrix nearness problems and applications," in Applications of Matrix Theory, M. Gover and S. Barnett, Eds., pp. 1–27, Oxford University Press, Oxford, UK, 1989.
[3] S. Boyd and L. Xiao, "Least-squares covariance matrix adjustment," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 2, pp. 532–546, 2005.
[4] N. J. Higham, "Computing a nearest symmetric positive semidefinite matrix," Linear Algebra and Its Applications, vol. 103, pp. 103–118, 1988.
[5] N. J. Higham, "Computing the nearest correlation matrix—a problem from finance," IMA Journal of Numerical Analysis, vol. 22, no. 3, pp. 329–343, 2002.
[6] G. L. Xue and Y. Y. Ye, "An efficient algorithm for minimizing a sum of Euclidean norms with applications," SIAM Journal on Optimization, vol. 7, no. 4, pp. 1017–1036, 1997.
[7] G. L. Xue and Y. Y. Ye, "An efficient algorithm for minimizing a sum of p-norms," SIAM Journal on Optimization, vol. 10, no. 2, pp. 551–579, 2000.
[8] J. Yang and Y. Zhang, "Alternating direction algorithms for ℓ1-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
[9] I. S. Dhillon and J. A. Tropp, "Matrix nearness problems with Bregman divergences," SIAM Journal on Matrix Analysis and Applications, vol. 29, no. 4, pp. 1120–1146, 2007.
[10] B. S. He, "A projection and contraction method for a class of linear complementarity problems and its application in convex quadratic programming," Applied Mathematics and Optimization, vol. 25, no. 3, pp. 247–262, 1992.
[11] B. S. He, "A new method for a class of linear variational inequalities," Mathematical Programming, vol. 66, no. 2, pp. 137–144, 1994.
[12] B. S. He, "Solving a class of linear projection equations," Numerische Mathematik, vol. 68, no. 1, pp. 71–80, 1994.
[13] B. S. He, "A modified projection and contraction method for a class of linear complementarity problems," Journal of Computational Mathematics, vol. 14, no. 1, pp. 54–63, 1996.
[14] B. He, "A class of projection and contraction methods for monotone variational inequalities," Applied Mathematics and Optimization, vol. 35, no. 1, pp. 69–76, 1997.
[15] M. H. Xu and T. Wu, "A class of linearized proximal alternating direction methods," Journal of Optimization Theory and Applications, vol. 151, no. 2, pp. 321–337, 2011.
[16] B. C. Eaves, "On the basic theorem of complementarity," Mathematical Programming, vol. 1, no. 1, pp. 68–75, 1971.
[17] M. H. Xu, J. L. Jiang, B. Li, and B. Xu, "An improved prediction-correction method for monotone variational inequalities with separable operators," Computers & Mathematics with Applications, vol. 59, no. 6, pp. 2074–2086, 2010.

Hindawi Publishing Corporation Advances in Operations Research Volume 2012, Article ID 483479, 17 pages doi:10.1155/2012/483479

Research Article

Modified Halfspace-Relaxation Projection Methods for Solving the Split Feasibility Problem

Min Li

School of Economics and Management, Southeast University, Nanjing 210096, China

Correspondence should be addressed to Min Li, [email protected]

Received 24 March 2012; Accepted 11 May 2012

Academic Editor: Abdellah Bnouhachem

Copyright © 2012 Min Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents modified halfspace-relaxation projection (HRP) methods for solving the split feasibility problem (SFP). Incorporating techniques for identifying an optimal step length with a positive lower bound, the new methods improve the efficiency of the HRP method of Qu and Xiu (2008). Some numerical results are reported to verify the computational advantage of the proposed methods.

1. Introduction

Let C and Q be nonempty closed convex sets in R^n and R^m, respectively, and let A be an m × n real matrix. The problem of finding x ∈ C with Ax ∈ Q, if such x exists, was called the split feasibility problem (SFP) by Censor and Elfving [1]. In this paper, we consider an equivalent reformulation [2] of the SFP:

minimize f(z)  subject to z = (x; y) ∈ Ω,   (1.1)

where

f(z) = (1/2)‖Bz‖² = (1/2)‖y − Ax‖²,  B = (−A, I),  Ω = C × Q.   (1.2)

For convenience, we only consider the Euclidean norm. It is obvious that f(z) is convex. If z = (x^T, y^T)^T ∈ Ω and f(z) = 0, then x solves the SFP. Throughout we assume that the solution

set of the SFP is nonempty, and thus the solution set of (1.1), denoted by Ω*, is nonempty. In addition, in this paper, we always assume that the set Ω is given by

Ω = { z ∈ R^{n+m} | c(z) ≤ 0 },   (1.3)

where c : R^{n+m} → R is a convex (not necessarily differentiable) function. This representation of Ω is general enough, because any system of inequalities {c_j(z) ≤ 0, j ∈ J}, where the c_j(z) are convex and J is an arbitrary index set, can be reformulated as the single inequality c(z) ≤ 0 with c(z) = sup{c_j(z) | j ∈ J}. For any z ∈ R^{n+m}, at least one subgradient ξ ∈ ∂c(z) can be calculated, where ∂c(z) is the subdifferential of c at z, defined as follows:

∂c(z) = { ξ ∈ R^{n+m} | c(u) ≥ c(z) + (u − z)^T ξ, ∀u ∈ R^{n+m} }.   (1.4)

Qu and Xiu [2] proposed a halfspace-relaxation projection (HRP) method to solve the convex optimization problem (1.1). Starting from any z^0 ∈ R^n × R^m, the HRP method iteratively updates z^k according to the formulae
\[
\bar z^k=P_{\Omega_k}\bigl[z^k-\alpha_k\nabla f(z^k)\bigr], \tag{1.5}
\]
\[
z^{k+1}=z^k-\gamma_k\Bigl\{z^k-\bar z^k-\alpha_k\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]\Bigr\}, \tag{1.6}
\]
where
\[
\Omega_k=\bigl\{z\in\mathbb{R}^{n+m}\mid c(z^k)+(z-z^k)^T\xi^k\le 0\bigr\}, \tag{1.7}
\]
ξ^k is an element of ∂c(z^k), α_k = γ l^{m_k}, and m_k is the smallest nonnegative integer m such that
\[
\alpha_k\,(z^k-\bar z^k)^T\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]\le(1-\rho)\|z^k-\bar z^k\|^2,\qquad \rho\in(0,1), \tag{1.8}
\]
\[
\gamma_k=\frac{\theta\rho\|z^k-\bar z^k\|^2}{\bigl\|z^k-\bar z^k-\alpha_k\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]\bigr\|^2},\qquad \theta\in(0,2). \tag{1.9}
\]
The notation P_{Ω_k}(v) denotes the projection of v onto Ω_k under the Euclidean norm, that is,
\[
P_{\Omega_k}(v)=\arg\min\{\|u-v\|\mid u\in\Omega_k\}. \tag{1.10}
\]
Here the halfspace Ω_k contains the given closed convex set Ω and is related to the current iterate z^k. From the expression of Ω_k, the projection onto Ω_k is simple to compute (for details, see Proposition 3.3). The idea of constructing the halfspace Ω_k and replacing P_Ω by P_{Ω_k} comes from the halfspace-relaxation projection technique presented by Fukushima [3]. This technique is often used to design algorithms (see, e.g., [2, 4, 5]) for solving the SFP. The drawback of the HRP method in [2] is that the step length γ_k defined in (1.9) may become very small, since lim_{k→∞} ‖z^k − \bar z^k‖ = 0.

Note that the reformulation (1.1) is equivalent to a monotone variational inequality (VI):
\[
z^*\in\Omega,\qquad (z-z^*)^T\nabla f(z^*)\ge 0,\quad \forall z\in\Omega, \tag{1.11}
\]
where
\[
\nabla f(z)=B^TBz. \tag{1.12}
\]
The forward-backward splitting method [6] and the extragradient method [7, 8] are considerably simple projection-type methods in the literature. They are applicable for solving monotone variational inequalities, especially for (1.11). For given z^k, let
\[
\bar z^k=P_\Omega\bigl[z^k-\alpha_k\nabla f(z^k)\bigr]. \tag{1.13}
\]
Under the assumption
\[
\alpha_k\bigl\|\nabla f(z^k)-\nabla f(\bar z^k)\bigr\|\le\nu\|z^k-\bar z^k\|,\qquad \nu\in(0,1), \tag{1.14}
\]
the forward-backward (FB) splitting method generates the new iterate via
\[
z^{k+1}=P_\Omega\bigl[\bar z^k+\alpha_k\bigl(\nabla f(z^k)-\nabla f(\bar z^k)\bigr)\bigr], \tag{1.15}
\]
while the extragradient (EG) method generates the new iterate by
\[
z^{k+1}=P_\Omega\bigl[z^k-\alpha_k\nabla f(\bar z^k)\bigr]. \tag{1.16}
\]
The forward-backward splitting method (1.15) can be rewritten as
\[
z^{k+1}=P_\Omega\Bigl[z^k-\gamma_k\Bigl(z^k-\bar z^k-\alpha_k\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]\Bigr)\Bigr], \tag{1.17}
\]
where the descent direction −{z^k − \bar z^k − α_k[∇f(z^k) − ∇f(\bar z^k)]} is the same as in (1.6) and the step length γ_k along this direction always equals 1. He et al. [9] proposed modified versions of the FB method and the EG method by incorporating the optimal step length γ_k along the descent directions −{z^k − \bar z^k − α_k[∇f(z^k) − ∇f(\bar z^k)]} and −α_k∇f(\bar z^k), respectively. Here γ_k is defined by
\[
\gamma_k=\theta\gamma_k^*,\quad \theta\in(0,2),\qquad
\gamma_k^*=\frac{(z^k-\bar z^k)^T\Bigl\{z^k-\bar z^k-\alpha_k\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]\Bigr\}}{\Bigl\|z^k-\bar z^k-\alpha_k\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]\Bigr\|^2}. \tag{1.18}
\]
Under the assumption (1.14), γ_k^* ≥ 1/2, so the step length is bounded below by a positive constant.

This paper develops two kinds of modified halfspace-relaxation projection methods for solving the SFP by improving the HRP method in [2]: one is an FB type HRP method, the other is an EG type HRP method. The numerical results reported in [9] show that efforts to identify the optimal step length usually lead to attractive numerical improvements. This fact motivates us to investigate the selection of an optimal step length with a positive lower bound in the new methods to accelerate convergence. The preference over the HRP method is verified by numerical experiments on the test problems arising in [2].

The rest of this paper is organized as follows. In Section 2, we summarize some preliminaries of variational inequalities. In Section 3, we present the new methods and provide some remarks. The selection of the optimal step length of the new methods is investigated in Section 4. Then, the global convergence of the new methods is proved in Section 5. Some preliminary numerical results are reported in Section 6 to show the efficiency of the new methods and their numerical superiority to the HRP method in [2]. Finally, some conclusions are drawn in Section 7.

2. Preliminaries

In the following, we state some basic concepts for the variational inequality VI(S, F):
\[
s^*\in S,\qquad (s-s^*)^TF(s^*)\ge 0,\quad \forall s\in S, \tag{2.1}
\]
where F is a mapping from R^N into R^N and S ⊆ R^N is a nonempty closed convex set. The mapping F is said to be monotone on R^N if
\[
(s-t)^T\bigl(F(s)-F(t)\bigr)\ge 0,\qquad \forall s,t\in\mathbb{R}^N. \tag{2.2}
\]
Notice that the variational inequality VI(S, F) is invariant when we multiply F by some positive scalar α. Thus VI(S, F) is equivalent to the following projection equation (see [10]):
\[
s=P_S\bigl[s-\alpha F(s)\bigr]; \tag{2.3}
\]
that is, solving VI(S, F) is equivalent to finding a zero point of the residue function
\[
e(s,\alpha):=s-P_S\bigl[s-\alpha F(s)\bigr]. \tag{2.4}
\]
Note that e(s, α) is a continuous function of s because the projection mapping is nonexpansive. The following lemma states a useful property of e(s, α).

Lemma 2.1 ([4, Lemma 2.2]). Let F be a mapping from R^N into R^N. For any s ∈ R^N and α > 0, we have
\[
\min\{1,\alpha\}\|e(s,1)\|\le\|e(s,\alpha)\|\le\max\{1,\alpha\}\|e(s,1)\|. \tag{2.5}
\]

Remark 2.2. Let \(\tilde S\supseteq S\) be a nonempty closed convex set and let \(e_{\tilde S}(s,\alpha)\) be defined as follows:
\[
e_{\tilde S}(s,\alpha)=s-P_{\tilde S}\bigl[s-\alpha F(s)\bigr]. \tag{2.6}
\]
Inequalities (2.5) still hold for \(e_{\tilde S}(s,\alpha)\).

Some fundamental inequalities are listed below without proof; see, for example, [10].

Lemma 2.3. Let \(\tilde S\) be a nonempty closed convex set. Then the following inequalities always hold:
\[
\bigl(t-P_{\tilde S}(t)\bigr)^T\bigl(P_{\tilde S}(t)-s\bigr)\ge 0,\qquad \forall t\in\mathbb{R}^N,\ \forall s\in\tilde S, \tag{2.7}
\]
\[
\|P_{\tilde S}(t)-s\|^2\le\|t-s\|^2-\|t-P_{\tilde S}(t)\|^2,\qquad \forall t\in\mathbb{R}^N,\ s\in\tilde S. \tag{2.8}
\]

The next lemma lists some inequalities which will be useful for the following analysis.

Lemma 2.4. Let \(\tilde S\supseteq S\) be a nonempty closed convex set and let s* be a solution of the monotone VI(S, F) (2.1) with, in particular, F(s*) = 0. For any s ∈ R^N and α > 0, one has
\[
\alpha(s-s^*)^TF\bigl(P_{\tilde S}[s-\alpha F(s)]\bigr)\ge\alpha\,e_{\tilde S}(s,\alpha)^TF\bigl(P_{\tilde S}[s-\alpha F(s)]\bigr), \tag{2.9}
\]
\[
(s-s^*)^T\Bigl\{e_{\tilde S}(s,\alpha)-\alpha\bigl[F(s)-F\bigl(P_{\tilde S}[s-\alpha F(s)]\bigr)\bigr]\Bigr\}
\ge e_{\tilde S}(s,\alpha)^T\Bigl\{e_{\tilde S}(s,\alpha)-\alpha\bigl[F(s)-F\bigl(P_{\tilde S}[s-\alpha F(s)]\bigr)\bigr]\Bigr\}. \tag{2.10}
\]

Proof. Under the assumption that F is monotone, we have
\[
\bigl(\alpha F\bigl(P_{\tilde S}[s-\alpha F(s)]\bigr)-\alpha F(s^*)\bigr)^T\bigl(P_{\tilde S}[s-\alpha F(s)]-s^*\bigr)\ge 0,\qquad \forall s\in\mathbb{R}^N. \tag{2.11}
\]
Using F(s*) = 0 and the notation of \(e_{\tilde S}(s,\alpha)\), from (2.11) the assertion (2.9) is proved. Setting t = s − αF(s) and s = s* in the inequality (2.7) and using the notation of \(e_{\tilde S}(s,\alpha)\), we obtain
\[
\bigl(e_{\tilde S}(s,\alpha)-\alpha F(s)\bigr)^T\bigl(P_{\tilde S}[s-\alpha F(s)]-s^*\bigr)\ge 0,\qquad \forall s\in\mathbb{R}^N. \tag{2.12}
\]
Adding (2.11) and (2.12) and using F(s*) = 0, we obtain (2.10). The proof is complete.

Note that the assumption F(s*) = 0 in Lemma 2.4 is reasonable; the following proposition and remark explain this.

Proposition 2.5 ([2, Proposition 2.2]). For the optimization problem (1.1), the following two statements are equivalent: (i) z* ∈ Ω and f(z*) = 0; (ii) z* ∈ Ω and ∇f(z*) = 0.

Remark 2.6. Under the assumption that the solution set of the SFP is nonempty, if z* = ((x*)^T, (y*)^T)^T is a solution of (1.1), then we have
\[
\nabla f(z^*)=B^TBz^*=B^T(y^*-Ax^*)=0. \tag{2.13}
\]
This point z* is also a solution point of the VI (1.11).

The next lemma provides an important boundedness property of the subdifferential; see, for example, [11].

Lemma 2.7. Suppose h : R^N → R is a convex function. Then it is subdifferentiable everywhere and its subdifferentials are uniformly bounded on any bounded subset of R^N.

3. Modified Halfspace-Relaxation Projection Methods

In this section, we propose two kinds of modified halfspace-relaxation projection methods, Algorithms 1 and 2. Algorithm 1 is an FB type HRP method and Algorithm 2 is an EG type HRP method. The two methods are related in that they use the same optimal step length along different descent directions. The detailed procedure is presented below.

The Modified Halfspace-Relaxation Projection Methods

Step 1. Let α_0 > 0, 0 < μ < ν < 1, z^0 ∈ R^{n+m}, θ ∈ (0, 2), ε > 0, and k = 0. In practical computation, we suggest taking μ = 0.3, ν = 0.9, and θ = 1.8.

Step 2. Set
\[
\bar z^k=P_{\Omega_k}\bigl[z^k-\alpha_k\nabla f(z^k)\bigr], \tag{3.1}
\]
where Ω_k is defined in (1.7). If ‖z^k − \bar z^k‖ ≤ ε, terminate the iteration with the iterate \(\bar z^k=((\bar x^k)^T,(\bar y^k)^T)^T\); then \(\bar x^k\) is the approximate solution of the SFP. Otherwise, go to Step 3.

Step 3. If
\[
r_k:=\frac{\alpha_k\bigl\|\nabla f(z^k)-\nabla f(\bar z^k)\bigr\|}{\|z^k-\bar z^k\|}\le\nu, \tag{3.2}
\]
then set
\[
e_k(z^k,\alpha_k)=e_{\Omega_k}(z^k,\alpha_k)=z^k-\bar z^k,\qquad g_k(z^k,\alpha_k)=\alpha_k\nabla f(\bar z^k), \tag{3.3}
\]
\[
d_k(z^k,\alpha_k)=e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)+g_k(z^k,\alpha_k), \tag{3.4}
\]
\[
\gamma_k^*=\frac{e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)}{\|d_k(z^k,\alpha_k)\|^2},\qquad \gamma_k=\theta\gamma_k^*, \tag{3.5}
\]
\[
z^{k+1}=P_{\Omega_k}\bigl[z^k-\gamma_kd_k(z^k,\alpha_k)\bigr]\quad\text{(Algorithm 1: FB type HRP method)} \tag{3.6}
\]
or
\[
z^{k+1}=P_{\Omega_k}\bigl[z^k-\gamma_kg_k(z^k,\alpha_k)\bigr]\quad\text{(Algorithm 2: EG type HRP method)}, \tag{3.7}
\]
and
\[
\alpha_{k+1}=\begin{cases}\dfrac{3}{2}\,\alpha_k, & \text{if } r_k\le\mu,\\[4pt] \alpha_k, & \text{otherwise};\end{cases} \tag{3.8}
\]
set k := k + 1 and go to Step 2. Otherwise (if (3.2) fails), go to Step 4.

Step 4. Reduce the value of α_k by
\[
\alpha_k:=\frac{2}{3}\,\alpha_k\cdot\min\{1,1/r_k\}, \tag{3.9}
\]
set \(\bar z^k=P_{\Omega_k}[z^k-\alpha_k\nabla f(z^k)]\), and go to Step 3.

Remark 3.1. In Step 3, if the selected α_k satisfies 0 < α_k ≤ ν/L (L is the largest eigenvalue of the matrix B^TB), then from (1.12) we have
\[
\alpha_k\bigl\|\nabla f(z^k)-\nabla f(\bar z^k)\bigr\|\le\alpha_kL\|z^k-\bar z^k\|\le\nu\|z^k-\bar z^k\|, \tag{3.10}
\]
and thus condition (3.2) is satisfied. Without loss of generality, we can assume that inf_k{α_k} = α_min > 0.

Remark 3.2. By the definition of the subgradient, it is clear that the halfspace Ω_k contains Ω. From the expression of Ω_k, the orthogonal projection onto Ω_k can be computed directly, and we have the following proposition (see [3, 12]).

Proposition 3.3. For any z ∈ R^{n+m},
\[
P_{\Omega_k}(z)=\begin{cases} z-\dfrac{c(z^k)+(z-z^k)^T\xi^k}{\|\xi^k\|^2}\,\xi^k, & \text{if } c(z^k)+(z-z^k)^T\xi^k>0,\\[6pt] z, & \text{otherwise}, \end{cases} \tag{3.11}
\]
where Ω_k is defined in (1.7).

Remark 3.4. For the FB type HRP method, taking
\[
z^{k+1}=z^k-\gamma_kd_k(z^k,\alpha_k) \tag{3.12}
\]
as the new iterate instead of formula (3.6) seems more applicable in practice. However, since by Proposition 3.3 the projection onto Ω_k is easy to compute, formula (3.6) is still preferable for generating the new iterate z^{k+1}.

Remark 3.5. The proposed methods and the HRP method in [2] can be used to solve the more general convex optimization problem
\[
\text{minimize } f(z)\quad\text{subject to } z\in\Omega, \tag{3.13}
\]
where f(z) is a general convex function with the property that ∇f(z*) = 0 for any solution point z* of (3.13), and Ω is defined in (1.3). The corresponding theoretical analysis is similar to that for these methods applied to (1.1).
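Formula (3.11) makes the projection step completely explicit. The following sketch is our own illustration (not code from the paper); the arguments `c_zk` and `xi` stand for the values c(z^k) and a subgradient ξ^k supplied by the caller, and the toy halfspace below is an assumed example.

```python
import numpy as np

def proj_halfspace(z, zk, xi, c_zk):
    """Projection onto Omega_k = {u : c(z^k) + (u - z^k)^T xi <= 0}, cf. (3.11)."""
    viol = c_zk + (z - zk) @ xi
    if viol > 0 and xi @ xi > 0:
        return z - (viol / (xi @ xi)) * xi
    return z  # z already lies in the halfspace

# Assumed toy data: Omega_k = {u in R^2 : u_1 <= 0}.
zk, xi, c_zk = np.zeros(2), np.array([1.0, 0.0]), 0.0
p = proj_halfspace(np.array([2.0, 3.0]), zk, xi, c_zk)
assert np.allclose(p, [0.0, 3.0])          # moved onto the boundary
assert c_zk + (p - zk) @ xi <= 1e-12       # p lies in Omega_k
```

The guard on `xi @ xi` covers the degenerate case ξ^k = 0, in which the halfspace inequality cannot be violated.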

4. The Optimal Step Length

This section concentrates on investigating the optimal step length with positive lower bounds in order to accelerate convergence of the new methods. To justify the choice of the optimal step length γ_k in the FB type HRP method (3.6), we start from the following general form of the FB type HRP method:
\[
z^{k+1}_{FB}(\gamma)=P_{\Omega_k}\bigl[z^{k+1}_{PC}(\gamma)\bigr], \tag{4.1}
\]
where
\[
z^{k+1}_{PC}(\gamma)=z^k-\gamma d_k(z^k,\alpha_k). \tag{4.2}
\]
Let
\[
\Theta_k^{FB}(\gamma):=\|z^k-z^*\|^2-\|z^{k+1}_{FB}(\gamma)-z^*\|^2, \tag{4.3}
\]
which measures the progress made by the FB type HRP method. Note that Θ_k^{FB}(γ) is a function of the step length γ. It is natural to consider maximizing this function by choosing an optimal parameter γ. The solution z* is not known, so we cannot maximize Θ_k^{FB}(γ) directly. The following theorem gives an estimate of Θ_k^{FB}(γ) which does not involve the unknown solution z*.

Theorem 4.1. Let z* be an arbitrary point in Ω*. If the step length in the general FB type HRP method is taken as γ > 0, then we have
\[
\Theta_k^{FB}(\gamma):=\|z^k-z^*\|^2-\|z^{k+1}_{FB}(\gamma)-z^*\|^2\ge\Upsilon_k(\gamma), \tag{4.4}
\]
where
\[
\Upsilon_k(\gamma):=2\gamma\,e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)-\gamma^2\|d_k(z^k,\alpha_k)\|^2. \tag{4.5}
\]

Proof. Since \(z^{k+1}_{FB}(\gamma)=P_{\Omega_k}[z^{k+1}_{PC}(\gamma)]\) and z* ∈ Ω ⊆ Ω_k, it follows from (2.8) that
\[
\|z^{k+1}_{FB}(\gamma)-z^*\|^2\le\|z^{k+1}_{PC}(\gamma)-z^*\|^2-\|z^{k+1}_{PC}(\gamma)-z^{k+1}_{FB}(\gamma)\|^2\le\|z^{k+1}_{PC}(\gamma)-z^*\|^2, \tag{4.6}
\]
and consequently
\[
\Theta_k^{FB}(\gamma)\ge\|z^k-z^*\|^2-\|z^{k+1}_{PC}(\gamma)-z^*\|^2. \tag{4.7}
\]
Setting α = α_k, s = z^k, s* = z*, and \(\tilde S=\Omega_k\) in the inequality (2.10) and using the notation of e_k(z^k, α_k) (see (3.3)) and d_k(z^k, α_k) (see (3.4)), we have
\[
(z^k-z^*)^Td_k(z^k,\alpha_k)\ge e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k). \tag{4.8}
\]
Using this and (4.2), we get
\[
\|z^k-z^*\|^2-\|z^{k+1}_{PC}(\gamma)-z^*\|^2
=\|z^k-z^*\|^2-\|z^k-\gamma d_k(z^k,\alpha_k)-z^*\|^2
=2\gamma(z^k-z^*)^Td_k(z^k,\alpha_k)-\gamma^2\|d_k(z^k,\alpha_k)\|^2
\ge 2\gamma\,e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)-\gamma^2\|d_k(z^k,\alpha_k)\|^2, \tag{4.9}
\]
and then from (4.7) the theorem is proved.

Similarly, we start from the general form of the EG type HRP method
\[
z^{k+1}_{EG}(\gamma)=P_{\Omega_k}\bigl[z^k-\gamma g_k(z^k,\alpha_k)\bigr] \tag{4.10}
\]
to analyze the optimal step length in the EG type HRP method (3.7). The following theorem estimates the "progress," in the sense of Euclidean distance, made by the new iterate and thus motivates the selection of the optimal step length γ_k in the EG type HRP method (3.7).

Theorem 4.2. Let z* be an arbitrary point in Ω*. If the step length in the general EG type HRP method is taken as γ > 0, then one has
\[
\Theta_k^{EG}(\gamma):=\|z^k-z^*\|^2-\|z^{k+1}_{EG}(\gamma)-z^*\|^2\ge\Upsilon_k(\gamma), \tag{4.11}
\]
where Υ_k(γ) is defined in (4.5) and \(z^{k+1}_{PC}(\gamma)\) is defined in (4.2).


Proof. Since \(z^{k+1}_{EG}(\gamma)=P_{\Omega_k}[z^k-\gamma g_k(z^k,\alpha_k)]\) and z* ∈ Ω ⊆ Ω_k, it follows from (2.8) that
\[
\|z^{k+1}_{EG}(\gamma)-z^*\|^2\le\|z^k-\gamma g_k(z^k,\alpha_k)-z^*\|^2-\|z^k-\gamma g_k(z^k,\alpha_k)-z^{k+1}_{EG}(\gamma)\|^2, \tag{4.12}
\]
and consequently we get
\[
\Theta_k^{EG}(\gamma)\ge\|z^k-z^*\|^2-\|z^k-z^*-\gamma g_k(z^k,\alpha_k)\|^2+\|z^k-z^{k+1}_{EG}(\gamma)-\gamma g_k(z^k,\alpha_k)\|^2
=\|z^k-z^{k+1}_{EG}(\gamma)\|^2+2\gamma(z^k-z^*)^Tg_k(z^k,\alpha_k)-2\gamma\bigl(z^k-z^{k+1}_{EG}(\gamma)\bigr)^Tg_k(z^k,\alpha_k). \tag{4.13}
\]
Setting α = α_k, s = z^k, s* = z*, and \(\tilde S=\Omega_k\) in the inequality (2.9) and using the notation of e_k(z^k, α_k) and g_k(z^k, α_k) (see (3.3)), we have
\[
(z^k-z^*)^Tg_k(z^k,\alpha_k)\ge e_k(z^k,\alpha_k)^Tg_k(z^k,\alpha_k). \tag{4.14}
\]
From the above inequality, we obtain
\[
\Theta_k^{EG}(\gamma)\ge\|z^k-z^{k+1}_{EG}(\gamma)\|^2+2\gamma\,e_k(z^k,\alpha_k)^Tg_k(z^k,\alpha_k)-2\gamma\bigl(z^k-z^{k+1}_{EG}(\gamma)\bigr)^Tg_k(z^k,\alpha_k). \tag{4.15}
\]
Using \(g_k(z^k,\alpha_k)=d_k(z^k,\alpha_k)-\bigl[e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)\bigr]\) (see (3.4)), it follows that
\[
\Theta_k^{EG}(\gamma)\ge\|z^k-z^{k+1}_{EG}(\gamma)\|^2
+2\gamma\,e_k(z^k,\alpha_k)^T\Bigl\{d_k(z^k,\alpha_k)-\bigl[e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)\bigr]\Bigr\}
-2\gamma\bigl(z^k-z^{k+1}_{EG}(\gamma)\bigr)^T\Bigl\{d_k(z^k,\alpha_k)-\bigl[e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)\bigr]\Bigr\}, \tag{4.16}
\]
which can be rewritten as
\[
\Theta_k^{EG}(\gamma)\ge\|z^k-z^{k+1}_{EG}(\gamma)-\gamma d_k(z^k,\alpha_k)\|^2
+2\gamma\,e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)-\gamma^2\|d_k(z^k,\alpha_k)\|^2
+2\gamma\bigl(z^k-z^{k+1}_{EG}(\gamma)-e_k(z^k,\alpha_k)\bigr)^T\bigl(e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)\bigr)
\ge\Upsilon_k(\gamma)+2\gamma\bigl(z^k-z^{k+1}_{EG}(\gamma)-e_k(z^k,\alpha_k)\bigr)^T\bigl(e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)\bigr). \tag{4.17}
\]
Now we consider the last term on the right-hand side of (4.17). Notice that
\[
z^k-z^{k+1}_{EG}(\gamma)-e_k(z^k,\alpha_k)=P_{\Omega_k}\bigl[z^k-\alpha_k\nabla f(z^k)\bigr]-z^{k+1}_{EG}(\gamma). \tag{4.18}
\]
Setting \(t:=z^k-\alpha_k\nabla f(z^k)\), \(s:=z^{k+1}_{EG}(\gamma)\), and \(\tilde S=\Omega_k\) in the basic inequality (2.7) of the projection mapping and using the notation of e_k(z^k, α_k), we get
\[
\bigl(e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)\bigr)^T\bigl(P_{\Omega_k}\bigl[z^k-\alpha_k\nabla f(z^k)\bigr]-z^{k+1}_{EG}(\gamma)\bigr)\ge 0, \tag{4.19}
\]
and therefore
\[
\bigl(z^k-z^{k+1}_{EG}(\gamma)-e_k(z^k,\alpha_k)\bigr)^T\bigl(e_k(z^k,\alpha_k)-\alpha_k\nabla f(z^k)\bigr)\ge 0. \tag{4.20}
\]
Substituting (4.20) into (4.17), it follows that
\[
\Theta_k^{EG}(\gamma)\ge\Upsilon_k(\gamma), \tag{4.21}
\]
and the theorem is proved.

Theorems 4.1 and 4.2 provide the basis for the selection of the optimal step length of the new methods. Note that Υ_k(γ) is a profit function, since it is a lower bound on the progress obtained by the new methods (both the FB type HRP method and the EG type HRP method). This motivates us to maximize the profit function Υ_k(γ) in order to accelerate convergence of the new methods. Since Υ_k(γ) is a quadratic function of γ, it reaches its maximum at

\[
\gamma_k^*:=\frac{e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)}{\|d_k(z^k,\alpha_k)\|^2}. \tag{4.22}
\]
Note that under condition (3.2), using the notation of d_k(z^k, α_k), we have
\[
e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)=\|e_k(z^k,\alpha_k)\|^2-\alpha_k\,e_k(z^k,\alpha_k)^T\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]
\ge(1-\nu)\|e_k(z^k,\alpha_k)\|^2. \tag{4.23}
\]
In addition, since
\[
e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)=\|e_k(z^k,\alpha_k)\|^2-\alpha_k\,e_k(z^k,\alpha_k)^T\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]
\ge\frac{1}{2}\|e_k(z^k,\alpha_k)\|^2-\alpha_k\,e_k(z^k,\alpha_k)^T\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]+\frac{1}{2}\bigl\|\alpha_k\bigl[\nabla f(z^k)-\nabla f(\bar z^k)\bigr]\bigr\|^2
=\frac{1}{2}\|d_k(z^k,\alpha_k)\|^2, \tag{4.24}
\]
we have
\[
\gamma_k^*=\frac{e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)}{\|d_k(z^k,\alpha_k)\|^2}\ge\frac{1}{2}. \tag{4.25}
\]
From a numerical point of view, it is necessary to attach a relaxation factor to the theoretically optimal step length γ_k^* in order to achieve faster convergence. The following theorem concerns how to choose the relaxation factor.

Theorem 4.3. Let z* be an arbitrary point in Ω*, let θ be a positive constant, and let γ_k^* be defined as in (4.22). For given z^k ∈ Ω_k, let α_k be chosen such that condition (3.2) is satisfied. Whenever the new iterate z^{k+1}(θγ_k^*) is generated by
\[
z^{k+1}(\theta\gamma_k^*)=P_{\Omega_k}\bigl[z^k-\theta\gamma_k^*d_k(z^k,\alpha_k)\bigr]
\quad\text{or}\quad
z^{k+1}(\theta\gamma_k^*)=P_{\Omega_k}\bigl[z^k-\theta\gamma_k^*g_k(z^k,\alpha_k)\bigr], \tag{4.26}
\]
we have
\[
\|z^{k+1}(\theta\gamma_k^*)-z^*\|^2\le\|z^k-z^*\|^2-\frac{\theta(2-\theta)(1-\nu)}{2}\,\|e_k(z^k,\alpha_k)\|^2. \tag{4.27}
\]

Proof. From Theorems 4.1 and 4.2 we have
\[
\|z^k-z^*\|^2-\|z^{k+1}(\theta\gamma_k^*)-z^*\|^2\ge\Upsilon_k(\theta\gamma_k^*). \tag{4.28}
\]
Using (4.5), (4.23), and (4.25), we obtain
\[
\Upsilon_k(\theta\gamma_k^*)=2\theta\gamma_k^*\,e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)-(\theta\gamma_k^*)^2\|d_k(z^k,\alpha_k)\|^2
=2\theta\gamma_k^*\,e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)-\theta^2\gamma_k^*\,e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)
=(2-\theta)\theta\gamma_k^*\,e_k(z^k,\alpha_k)^Td_k(z^k,\alpha_k)
\ge\frac{\theta(2-\theta)(1-\nu)}{2}\,\|e_k(z^k,\alpha_k)\|^2, \tag{4.29}
\]
and the assertion is proved.

Theorem 4.3 shows theoretically that any θ ∈ (0, 2) guarantees that the new iterate makes progress toward a solution. Therefore, in practical computation, we choose γ_k = θγ_k^* with θ ∈ (0, 2) as the step length in the new methods. We point out that, based on numerical experiments, θ ∈ [1, 2) is preferable since it leads to better numerical performance.
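Once e_k and d_k are available, the optimal step length (4.22) and its relaxed version are a one-line computation. The following sketch is our own illustration; the vectors used below are arbitrary test data, not quantities from the paper's experiments.

```python
import numpy as np

def step_length(e_k, d_k, theta=1.8):
    """gamma_k^* = e_k^T d_k / ||d_k||^2, cf. (4.22), and gamma_k = theta * gamma_k^*."""
    gamma_star = (e_k @ d_k) / (d_k @ d_k)
    return theta * gamma_star, gamma_star

# When d_k = e_k (the gradients at z^k and z-bar^k coincide), gamma_k^* = 1.
e = np.array([1.0, 2.0])
gamma, gamma_star = step_length(e, e.copy())
assert abs(gamma_star - 1.0) < 1e-12 and abs(gamma - 1.8) < 1e-12
```

Under condition (3.2), inequality (4.25) guarantees that `gamma_star` never falls below 1/2, which is precisely the property the original HRP step length (1.9) lacks.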


5. Convergence

It follows from (4.27) that, for both the FB type HRP method (3.6) and the EG type HRP method (3.7), there exists a constant τ > 0 such that
\[
\|z^{k+1}-z^*\|^2\le\|z^k-z^*\|^2-\tau\|e_k(z^k,\alpha_k)\|^2,\qquad \forall z^*\in\Omega^*. \tag{5.1}
\]
The convergence result for the methods proposed in this paper is based on the following theorem.

Theorem 5.1. Let {z^k} be a sequence generated by the proposed method (3.6) or (3.7). Then {z^k} converges to a point \(\hat z\) which belongs to Ω*.

Proof. First, from (5.1) we get
\[
\lim_{k\to\infty}\|e_k(z^k,\alpha_k)\|=0. \tag{5.2}
\]
Note that
\[
e_k(z^k,\alpha_k)=z^k-\bar z^k\quad(\text{see }(3.3)). \tag{5.3}
\]
We have
\[
\lim_{k\to\infty}\|z^k-\bar z^k\|=0. \tag{5.4}
\]
Again, it follows from (5.1) that the sequence {z^k} is bounded. Let \(\hat z\) be a cluster point of {z^k} and let the subsequence \(\{z^{k_j}\}\) converge to \(\hat z\). We are ready to show that \(\hat z\) is a solution point of (1.1).

First, we show that \(\hat z\in\Omega\). Since \(\bar z^{k_j}\in\Omega_{k_j}\), by the definition of \(\Omega_{k_j}\) we have
\[
c(z^{k_j})+(\bar z^{k_j}-z^{k_j})^T\xi^{k_j}\le 0,\qquad \forall j=1,2,\ldots. \tag{5.5}
\]
Passing to the limit in this inequality and taking into account (5.4) and Lemma 2.7, we obtain
\[
c(\hat z)\le 0. \tag{5.6}
\]
Hence, we conclude \(\hat z\in\Omega\).

Next, we need to show that \((z-\hat z)^T\nabla f(\hat z)\ge 0\) for all z ∈ Ω. To do so, we first prove
\[
\lim_{j\to\infty}\|e_{k_j}(z^{k_j},1)\|=0. \tag{5.7}
\]
It follows from Remark 3.1 in Section 3 that \(\inf_j\{\alpha_{k_j}\}\ge\inf_k\{\alpha_k\}=\alpha_{\min}>0\). Then from Lemma 2.1, we have
\[
\|e_{k_j}(z^{k_j},1)\|\le\frac{\|z^{k_j}-\bar z^{k_j}\|}{\min\{1,\alpha_{k_j}\}}, \tag{5.8}
\]
which, together with (5.4), implies that
\[
\lim_{j\to\infty}\|e_{k_j}(z^{k_j},1)\|\le\lim_{j\to\infty}\frac{\|z^{k_j}-\bar z^{k_j}\|}{\min\{1,\alpha_{k_j}\}}\le\lim_{j\to\infty}\frac{\|z^{k_j}-\bar z^{k_j}\|}{\min\{1,\alpha_{\min}\}}=0. \tag{5.9}
\]
Setting \(t=z^{k_j}-\nabla f(z^{k_j})\) and \(\tilde S=\Omega_{k_j}\) in the inequality (2.7), for any \(z\in\Omega\subseteq\Omega_{k_j}\) we obtain
\[
\Bigl(z^{k_j}-\nabla f(z^{k_j})-P_{\Omega_{k_j}}\bigl[z^{k_j}-\nabla f(z^{k_j})\bigr]\Bigr)^T\Bigl(P_{\Omega_{k_j}}\bigl[z^{k_j}-\nabla f(z^{k_j})\bigr]-z\Bigr)\ge 0. \tag{5.10}
\]
From the fact that \(e_{k_j}(z^{k_j},1)=z^{k_j}-P_{\Omega_{k_j}}[z^{k_j}-\nabla f(z^{k_j})]\), we have
\[
\bigl(e_{k_j}(z^{k_j},1)-\nabla f(z^{k_j})\bigr)^T\bigl(z^{k_j}-e_{k_j}(z^{k_j},1)-z\bigr)\ge 0,\qquad \forall z\in\Omega, \tag{5.11}
\]
that is,
\[
(z-z^{k_j})^T\nabla f(z^{k_j})+e_{k_j}(z^{k_j},1)^T\bigl(z^{k_j}-e_{k_j}(z^{k_j},1)-z+\nabla f(z^{k_j})\bigr)\ge 0,\qquad \forall z\in\Omega. \tag{5.12}
\]
Letting j → ∞ and taking into account (5.7), we deduce
\[
(z-\hat z)^T\nabla f(\hat z)\ge 0,\qquad \forall z\in\Omega, \tag{5.13}
\]
which implies that \(\hat z\in\Omega^*\). Then from (5.1), it follows that
\[
\|z^{k+1}-\hat z\|^2\le\|z^k-\hat z\|^2-\tau\|e_k(z^k,\alpha_k)\|^2. \tag{5.14}
\]
Together with the fact that the subsequence \(\{z^{k_j}\}\) converges to \(\hat z\), we can conclude that {z^k} converges to \(\hat z\). The proof is complete.
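To make the overall scheme concrete, the following self-contained sketch applies the FB type HRP iteration (Steps 2-4 with update (3.6)) to a small SFP instance of our own choosing: A = I in R^2, with C and Q unit balls centered at points `a` and `b`. These data, the starting point, and all tolerances are assumptions made for illustration; they are not the test problems of [2].

```python
import numpy as np

a, b = np.zeros(2), np.array([1.0, 0.0])       # assumed ball centers
A = np.eye(2)                                   # assumed SFP matrix

def c(z):                                       # Omega = C x Q as one inequality (1.3)
    return max(np.linalg.norm(z[:2] - a) - 1.0, np.linalg.norm(z[2:] - b) - 1.0)

def subgrad(z):                                 # one element of the subdifferential (1.4)
    g = np.zeros(4)
    dx, dy = z[:2] - a, z[2:] - b
    if np.linalg.norm(dx) - 1.0 >= np.linalg.norm(dy) - 1.0:
        if np.linalg.norm(dx) > 0:
            g[:2] = dx / np.linalg.norm(dx)
    elif np.linalg.norm(dy) > 0:
        g[2:] = dy / np.linalg.norm(dy)
    return g

def grad_f(z):                                  # grad f(z) = B^T B z with B = (-A  I) (1.12)
    r = z[2:] - A @ z[:2]
    return np.concatenate([-A.T @ r, r])

def proj_hs(v, zk, xi):                         # projection onto Omega_k (3.11)
    viol = c(zk) + (v - zk) @ xi
    if viol > 0 and xi @ xi > 0:
        return v - viol / (xi @ xi) * xi
    return v

z = np.array([2.0, 2.0, -1.0, 1.0])             # assumed starting point
alpha, theta, mu, nu = 1.0, 1.8, 0.3, 0.9       # suggested parameters of Step 1
for _ in range(2000):
    xi = subgrad(z)
    zbar = proj_hs(z - alpha * grad_f(z), z, xi)
    if np.linalg.norm(z - zbar) <= 1e-10:
        break
    r = alpha * np.linalg.norm(grad_f(z) - grad_f(zbar)) / np.linalg.norm(z - zbar)
    while r > nu:                               # Step 4: shrink alpha until (3.2) holds
        alpha *= (2.0 / 3.0) * min(1.0, 1.0 / r)
        zbar = proj_hs(z - alpha * grad_f(z), z, xi)
        if np.linalg.norm(z - zbar) <= 1e-10:
            break
        r = alpha * np.linalg.norm(grad_f(z) - grad_f(zbar)) / np.linalg.norm(z - zbar)
    if np.linalg.norm(z - zbar) <= 1e-10:
        break
    e = z - zbar                                # e_k, cf. (3.3)
    d = e - alpha * (grad_f(z) - grad_f(zbar))  # d_k, cf. (3.4)
    z = proj_hs(z - theta * (e @ d) / (d @ d) * d, z, xi)   # FB type update (3.6)
    if r <= mu:                                 # Step 3 enlargement rule (3.8)
        alpha *= 1.5
```

At termination, z should approximately satisfy both c(z) ≤ 0 and f(z) ≈ 0, consistent with Theorem 5.1; for this instance the limit lies near a point x with x in both balls and y = x.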

6. Numerical Results

In this section, we implement the proposed methods to solve some numerical examples arising in [2] and then report the results. To show the superiority of the new methods, we also compare them with the HRP method in [2]. The codes for implementing the


Table 1: Results for Example 6.1 using the HRP method in [2].

Starting point         | CPU (Sec.) | Number of iterations | Approximate solution
(1, 2, 3, 0, 0, 0)^T   | 0.0500     | 43                   | (0.3213, 0.2815, 0.1425)^T
(1, 1, 1, 1, 1, 1)^T   | 0.0910     | 67                   | (0.8577, 0.8577, 1.3097)^T
(1, 2, 3, 4, 5, 6)^T   | 0.1210     | 85                   | (1.1548, 0.8518, 1.8095)^T

Table 2: Results for Example 6.1 using the FB type HRP method. Starting points: (1, 2, 3, 0, 0, 0)^T, (1, 1, 1, 1, 1, 1)^T, (1, 2, 3, 4, 5, 6)^T. CPU (Sec.): 0.

(b) there exists x_0 ∈ C such that Ψ(x_0, y) > 0 for all y ∈ K \ C.

Then there exists ȳ ∈ C such that Φ(x, ȳ) ≤ 0 for all x ∈ K. Furthermore, the set of solutions is compact.

Remark 2.11. (1) Condition (iii) in Lemma 2.10 is closely related to convexity assumptions on the bifunction Ψ; see Proposition 2.12 below. (2) If K is compact, then condition (iv) in Lemma 2.10 can be dropped.

The following proposition gives some sufficient conditions which ensure condition (iii) in Lemma 2.10.

Proposition 2.12. Suppose that (i) Ψ(x, x) ≤ 0 for each x ∈ K; (ii) for each fixed y ∈ K, the set {x ∈ K : Ψ(x, y) > 0} is convex. Then condition (iii) of Lemma 2.10 is satisfied.


Proof. Suppose by contradiction that condition (iii) of Lemma 2.10 is not satisfied. Then there exist x_1, …, x_n ∈ K and λ_1, …, λ_n ≥ 0 with \(\sum_{i=1}^n\lambda_i=1\) such that \(\Psi(x_i,\sum_{j=1}^n\lambda_jx_j)>0\) for all i. Therefore, by setting \(y=\sum_{j=1}^n\lambda_jx_j\), it follows from (ii) that Ψ(y, y) > 0, which contradicts assumption (i).

As a consequence of Lemma 2.10, we obtain the following existence result for the mixed equilibrium problem, which we will need in the sequel. We include its proof for completeness.

Lemma 2.13. Let X be a Banach space and K a closed convex subset of X. Let f, g : K × K → R be two real bifunctions such that

(i) f(x, x) ≥ 0 for all x ∈ K; f is monotone and upper-hemicontinuous; for each fixed x ∈ K, the function y ↦ f(x, y) is convex and lower semicontinuous;

(ii) g(x, x) ≥ 0 for all x ∈ K; for each fixed y ∈ K, the function x ↦ g(x, y) is upper semicontinuous; for each fixed x ∈ K, the function y ↦ g(x, y) is convex and lower semicontinuous;

(iii) coercivity: there exist a nonempty compact convex subset C of X and y_0 ∈ C ∩ K such that
\[
f(x,y_0)+g(x,y_0)<0\quad\text{for each }x\in K\setminus C. \tag{2.20}
\]
Then there exists \(\bar x\in C\cap K\) such that
\[
f(\bar x,y)+g(\bar x,y)\ge 0\qquad \forall y\in K. \tag{2.21}
\]
Furthermore, the solution set \(S_{f,g}\) of the mixed equilibrium problem (2.21) is compact and convex.

Proof. The proof is a direct consequence of Lemma 2.10 by setting
\[
\Phi(x,y)=f(x,y)-g(y,x),\qquad \Psi(x,y)=-g(y,x)-f(y,x). \tag{2.22}
\]

Remark 2.14. (1) Lemma 2.13 is in fact a slight extension of Theorem 1 in [2] and Theorem 4.5 in [21], where the equilibrium condition f(x, x) = g(x, x) = 0 has been relaxed by assuming f(x, x) ≥ 0 and g(x, x) ≥ 0 for all x ∈ K.

(2) If X is a reflexive Banach space endowed with its weak topology σ(X, X*), then the coercivity condition (iii) in Lemma 2.13 can be replaced by the following condition:

(iii)' there exists y_0 ∈ K such that \(\lim_{\|x-y_0\|\to\infty}g(x,y_0)/\|x-y_0\|=-\infty\).

Indeed, let r_1 > 0, and set B(y_0, r_1) = {x ∈ X : ‖x − y_0‖ ≤ r_1}; then B(y_0, r_1) is a convex and σ(X, X*)-compact subset of X. Since X is a reflexive Banach space, f(y_0, ·) is lower semicontinuous, and B(y_0, r_1) is weakly compact, it follows that there exists α_0 ∈ R such that f(y_0, y) > α_0 for all y ∈ B(y_0, r_1). Let x ∈ K \ B(y_0, r_1), and set
\[
y=\frac{r_1}{\|x-y_0\|}\,x+\Bigl(1-\frac{r_1}{\|x-y_0\|}\Bigr)y_0. \tag{2.23}
\]
Since f(y_0, ·) is convex, y ∈ B(y_0, r_1), and f(y_0, y_0) ≥ 0, one deduces
\[
\alpha_0<f(y_0,y)\le\frac{r_1}{\|x-y_0\|}\,f(y_0,x)+\Bigl(1-\frac{r_1}{\|x-y_0\|}\Bigr)f(y_0,y_0)\qquad \forall x\in K\setminus B(y_0,r_1). \tag{2.24}
\]
It follows that
\[
f(y_0,x)\ge\frac{\|x-y_0\|}{r_1}\bigl(\alpha_0-f(y_0,y_0)\bigr)+f(y_0,y_0). \tag{2.25}
\]
Thus,
\[
f(y_0,x)\ge\frac{\alpha_0-f(y_0,y_0)}{r_1}\,\|x-y_0\|,\qquad \forall x\in K\setminus B(y_0,r_1). \tag{2.26}
\]
Since f is monotone, it follows from relation (2.26) that, for all x ∈ K \ B(y_0, r_1),
\[
f(x,y_0)+g(x,y_0)\le g(x,y_0)-\frac{\alpha_0-f(y_0,y_0)}{r_1}\,\|x-y_0\|. \tag{2.27}
\]
Since g(x, y_0)/‖x − y_0‖ → −∞ when ‖x − y_0‖ → ∞, there exists r_2 > 0 such that for x ∈ K with ‖x − y_0‖ > r_2 one has
\[
g(x,y_0)-\frac{\alpha_0-f(y_0,y_0)}{r_1}\,\|x-y_0\|<0. \tag{2.28}
\]
Take r = max{r_1, r_2} and set C = {x ∈ X : ‖x − y_0‖ ≤ r}; then from relations (2.27) and (2.28), one deduces that for each x ∈ K \ C one has
\[
f(x,y_0)+g(x,y_0)<0. \tag{2.29}
\]
Hence, condition (iii) in Lemma 2.13 is satisfied.

We end this section with the following result related to the Hausdorff metric, which we will need in the sequel and for which we refer to [22].

Lemma 2.15. Let E be a complete metric space and R : E ⇒ CB(E) a set-valued mapping. Then, for any given ε > 0, any given x, y ∈ E, and any u ∈ R(x), there exists v ∈ R(y) such that
\[
d(u,v)\le(1+\varepsilon)\,H\bigl(R(x),R(y)\bigr). \tag{2.30}
\]
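Lemma 2.15 is stated for the Hausdorff metric H on CB(E). For finite point sets, H can be computed directly, which makes the lemma easy to check numerically. The following sketch is our own illustration (finite subsets of Euclidean space; the data are assumptions for the example).

```python
import numpy as np

def hausdorff(P, Q):
    """H(P, Q) = max( sup_{p in P} d(p, Q), sup_{q in Q} d(q, P) ) for finite sets."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

P = np.array([[0.0], [3.0]])
Q = np.array([[1.0]])
H = hausdorff(P, Q)
assert H == 2.0
# For compact (here finite) sets the infimum is attained, so every u in P
# admits some v in Q with d(u, v) <= H, matching Lemma 2.15 with any eps > 0.
for u in P:
    assert min(np.linalg.norm(u - v) for v in Q) <= H
```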

3. Approximation by an Auxiliary Principle In order to get approximate solutions for the system 1.2 of generalized mixed equilibrium problem involving generalized mixed variational-like inequality problems SGMEP, we

Advances in Operations Research

9

consider the following auxiliary problem: for i ∈ I  {1, 2} and for given mappings Ti : Xi → Xi∗ , ρi > 0, x1 , x2  ∈ K1 × K2 , u1 , v1  ∈ R1 x1  × A1 x2 , and u2 , v2  ∈ R2 x1  × A2 x2 , ⎧ ⎪ z1 , z2  ∈ K1 × K2 such that for all yi ∈ Ki ⎪ ⎨find          AP ρi Fi zi , yi  Ni ui , vi  − wi , ηi yi , zi i  ψi zi , yi − ψi zi , zi  ⎪ ⎪ ⎩  T y − z , z − x  ≥ 0. i i i i i i

3.1

In this section, we give some existence results of solutions for the auxiliary problem AP. The results obtained will be needed in the sequel to generate a unified algorithm to approach solutions of the system 1.2 under some weaker assumptions in comparison with some known results in literature. Theorem 3.1. For each i ∈ I  {1, 2}, let Xi be a Banach space and Ki a nonempty closed convex subset of Xi , Fi , ψi : Ki × Ki → R two real-valued bifunctions, Ti : Xi → Xi∗ a bounded linear operator, ρi > 0 and ηi : Xi × Xi → Xi single-valued mappings such that i Fi x, x ≥ 0 for all x ∈ Ki ; Fi is monotone and upper-hemicontinuous; for each x ∈ Ki fixed, the function y → Fi x, y is convex and lower semicontinuous; ii ηi is affine in the first argument and continuous in the second argument such that     ηi x, y  ηi y, x  0,

∀x, y ∈ Ki ;

3.2

iii ψi is skew symmetric and continuous; for each y ∈ Ki fixed, the function x → ψi x, y is convex; iv Ti is δi -strongly positive; v coercivity: for each xi ∈ Ki , ui ∈ X1∗ and vi ∈ X2∗ , there exists a nonempty compact convex subset Cxi i ,ui ,vi of Xi and yi0 ∈ Cxi i ,ui ,vi ∩ Ki such that ∀zi ∈ Ki \ Cxi i ,ui ,vi              ρi Fi zi , yi0  Ni ui , vi −wi , ηi yi0 , zi ψi zi , yi0 −ψzi , zi   Ti yi0 −zi , zi −xi < 0. i

3.3

Then, the auxiliary problem (AP) has a unique solution. Proof. The proof of this lemma is a direct application of Lemma 2.13 by considering, for each i ∈ I  {1, 2}, the bifunctions fi , gi : Ki × Ki → R defined by            fi zi , yi  ρi Fi zi , yi  Ni ui , vi  − wi , ηi yi , zi i  ψi zi , yi − ψi zi , zi        gi zi , yi  Ti yi − Ti zi , zi − xi i ,

3.4

10

Advances in Operations Research

where x1 , x2  ∈ K1 × K2 , u1 , v1  ∈ R1 x1  × A1 x2 , and u2 , v2  ∈ R2 x1  × A2 x2  are given. We need only to show that the solution is unique. To this aim, suppose that problem AP has two solutions z1i and z2i , then for i ∈ I and for all z ∈ Ki , we have               ψi z1i , z −ψi z1i , z1i  Ti z−Ti z1i , z1i −xi ≥ 0, ρi Fi z1i , z  Ni ui , vi −wi , ηi z, z1i i

i

3.5               ρi Fi z2i , z  Ni ui , vi −wi , ηi z, z2i ψi z2i , z −ψi z2i , z2i  Ti z−Ti z2i , z2i −xi ≥ 0. i

i

3.6

Take z  z2i in relation 3.5 and z  z1i in relation 3.6 and adding the two inequalities, one obtain             ψi z1i , z2i ρi Fi z1i  Fi z2i , z1i  Ni ui , vi  − wi , ηi z1i , z2i  ηi z2i , z1i 









ψi z2i , z1i − ψi z1i , z1i − ψi z2i , z2i



i

    ≥ Ti z2i − z1i , z2i − z1i .

3.7

i

Since for each i ∈ I, Fi is monotone, ψi is skew symmetric, and Ti is δi -strongly positive, it follows that  2       0 ≥ Ti z2i − z1i , z2i − z1i ≥ δi z1i − z2i  . i

i

3.8

Therefore, z1i  z2i for i ∈ I  {1, 2}, which completes the proof. Theorem 3.2. For each i ∈ I  {1, 2}, let Xi be a reflexive Banach space and Ki a nonempty closed convex subset of Xi , Fi , ψi : Ki × Ki → R two real-valued bifunctions, Ti : Xi → Xi∗ a bounded linear operator, ρi > 0 and ηi : Xi × Xi → Xi single-valued mappings such that i Fi x, x ≥ 0 for all x ∈ Ki ; Fi is monotone and upper-hemicontinuous; for each x ∈ Ki fixed, the function y → Fi x, y is convex and lower semicontinuous; ii ηi is affine in the first argument and continuous in the second argument such that     ηi x, y  ηi y, x  0,

∀x, y ∈ Ki ;

3.9

iii ψi is skew symmetric and continuous; for each y ∈ Ki fixed, the function x → ψi x, y is convex; iv Ti is δi -strongly positive linear operator. Then, the auxiliary problem (AP) has a unique solution. Proof. For each i ∈ I and given x1 , x2  ∈ K1 × K2 , u1 , v1  ∈ R1 x1  × A1 x2 , and u2 , v2  ∈ R2 x1  × A2 x2 , define the following bifunctions fi , gi : Ki × Ki → R by            fi zi , yi  ρi Fi zi , yi  Ni ui , vi  − wi , ηi yi , zi i  ψi yi , zi − ψi zi , zi  ,       gi zi , yi  Ti yi − Ti zi , zi − xi i .

3.10

Advances in Operations Research

11

One can easily see that conditions i–iii above imply conditions i and ii of Lemma 2.13. In order to get the conclusion, we need only to show that the coercivity condition iii of Lemma 2.13 is satisfied. To this aim, taking into account Remark 2.14 2, we need only to show that for some vi0 ∈ Ki one has gi ui , vi0 / vi0 − ui i → −∞ when vi0 − ui i → ∞. Let vi0 ∈ Ki . Then,       gi ui , vi0  Ti vi0 − Ti ui , ui − xi

i

         Ti vi0 − ui , ui − vi0  Ti vi0 − ui , vi0 − xi i

i

         − Ti vi0 − ui , vi0 − ui  Ti vi0 − ui , vi0 − xi i

 2         ≤ −δvi0 − ui   Ti vi0 − ui  vi0 − xi  . i

i

3.11 i

i

Therefore,       gi ui , vi0      0  ≤ −δvi0 − ui   Ti vi0 − xi  .  v − ui  i i i i

3.12

It follows that gi ui , vi0 / vi0 − ui i → −∞ when vi0 − ui i → ∞ which completes the proof. Remark 3.3. Theorem 3.2 improves recent results given by Ding 18, Theorem 3.1 since the bifunction Fi is not needed to be δi -Lipschitz continuous and weakly upper semicontinuous with respect to the first argument. We mention also that all the results obtained in 18  ∅; in our approach, this are under the assumption int{yi ∈ Ki , ψi yi , yi  < ∞}  intKi / assumption is not needed. Theorem 3.2 shows that the auxiliary problem AP has a unique solution; we can define the following general iterative method to approach the solution of system 1.2 of generalized mixed equilibrium problem involving variational-like inequalities SGMEP. Algorithm 3.4. For a given x10 , x20  ∈ K1 ×K2 , u01 , v10  ∈ R1 x10 ×A1 x20 , and u02 , v20  ∈ R2 x10 × A2 x20 . By Theorem 3.1, the auxiliary problem AP has a unique solution x11 , x21  ∈ K1 × K2 , that is, for each i ∈ I, we have              ψi xi1 , yi − ψi xi1 , xi1 ρi1 Fi xi1 , yi  Ni u0i , vi0 − wi , ηi yi , xi1 i 3.13      Ti yi − Ti xi1 , xi1 − xi0 i ≥ 0 ∀yi ∈ Ki . Since for each i ∈ I, u0i ∈ Ri xi0  ∈ CBX1∗ , and vi0 ∈ Ai x20  ∈ CBX2∗ , by Lemma 2.15, there exist u1i ∈ Ri x11  and vi1 ∈ Ai x21  such that         1  ui − u0i  ≤ 1  1H1 Ri x11 , Ri x10 1 3.14         1  vi − vi0  ≤ 1  1H2 Ai x21 , Ai x20 , 2

where H1 and H2 are the Hausdorff metrics on CBX1∗  and CBX2∗ , respectively.

12

Advances in Operations Research

By using Theorem 3.1 again, the auxiliary problem AP has a unique solution x12 , x22  ∈ K1 × K2 such that             ρi2 Fi xi2 , yi  Ni u1i , vi1 − wi , ηi yi , xi2  ψi xi2 , yi − ψi xi2 , xi2        Ti yi − Ti xi2 , xi2 − xi1 ≥ 0, i

i

3.15

∀yi ∈ Ki .

By induction, we can construct an iterative algorithm to compute an approximate solution of the system (1.2) as follows: for given $(x_1^0, x_2^0) \in K_1 \times K_2$, $(u_1^0, v_1^0) \in R_1(x_1^0) \times A_1(x_2^0)$, and $(u_2^0, v_2^0) \in R_2(x_1^0) \times A_2(x_2^0)$, there exist sequences $\{x_1^n\}$, $\{x_2^n\}$, $\{u_1^n\}$, $\{u_2^n\}$, $\{v_1^n\}$, and $\{v_2^n\}$ such that, for each $i \in I$,
\[
\begin{aligned}
&\|u_i^{n+1} - u_i^n\|_1 \le \Bigl(1 + \frac{1}{n+1}\Bigr) H_1\bigl(R_i(x_1^{n+1}), R_i(x_1^n)\bigr), \quad u_i^n \in R_i(x_1^n), \\
&\|v_i^{n+1} - v_i^n\|_2 \le \Bigl(1 + \frac{1}{n+1}\Bigr) H_2\bigl(A_i(x_2^{n+1}), A_i(x_2^n)\bigr), \quad v_i^n \in A_i(x_2^n), \\
&\rho_i^{n+1}\bigl[F_i(x_i^{n+1}, y_i) + \langle N_i(u_i^n, v_i^n) - w_i,\ \eta_i(y_i, x_i^{n+1})\rangle_i + \psi_i(x_i^{n+1}, y_i) - \psi_i(x_i^{n+1}, x_i^{n+1})\bigr] \\
&\qquad + \langle T_i(y_i) - T_i(x_i^{n+1}),\ x_i^{n+1} - x_i^n\rangle_i \ge 0 \quad \forall y_i \in K_i,\ n = 0, 1, 2, \dots.
\end{aligned} \tag{3.16}
\]

The following convergence analysis is presented for the algorithm above.

Theorem 3.5. Under the hypotheses of Theorem 3.2, assume further that, for each $i \in I$:

(i) $F_i$ is $\sigma_i$-strongly monotone and upper hemicontinuous; $N_i$ is $(\beta_i, \xi_i)$-mixed Lipschitz continuous; $R_i$ is $k_i$-$H_1$-Lipschitz continuous, and $A_i$ is $\mu_i$-$H_2$-Lipschitz continuous;

(ii) $\eta_i$ is $\tau_i$-Lipschitz continuous;

(iii) the sequence $\{\rho^n\}_{n \in \mathbb{N}}$ of positive real numbers is increasing and $\lim_{n \to \infty} \rho^n = \infty$.

Furthermore, assume that the following condition holds:
\[
(C_1)\quad \Lambda = \max\{\theta_1 + \theta_2,\ \vartheta_1 + \vartheta_2\} < 1, \quad \text{where } \theta_1 = \frac{\tau_1 \beta_1 k_1}{\sigma_1},\ \theta_2 = \frac{\tau_2 \beta_2 k_2}{\sigma_2},\ \vartheta_1 = \frac{\tau_1 \xi_1 \mu_1}{\sigma_1},\ \vartheta_2 = \frac{\tau_2 \xi_2 \mu_2}{\sigma_2}. \tag{3.17}
\]
Then the sequences $\{x_1^n\}$, $\{x_2^n\}$, $\{u_1^n\}$, $\{u_2^n\}$, $\{v_1^n\}$, and $\{v_2^n\}$ generated by Algorithm 3.4 converge strongly to $x_1$, $x_2$, $u_1$, $u_2$, $v_1$, and $v_2$, respectively, where $(u_1, v_1) \in R_1(x_1) \times A_1(x_2)$, $(u_2, v_2) \in R_2(x_1) \times A_2(x_2)$, and $(x_1, x_2, u_1, v_1, u_2, v_2)$ is a solution of the system of generalized mixed equilibrium problems involving variational-like inequalities (1.2).


Proof. By the definition of Algorithm 3.4, we have, for each $i \in I$,
\[
\rho_i^n\bigl[F_i(x_i^n, y_i) + \langle N_i(u_i^{n-1}, v_i^{n-1}) - w_i,\ \eta_i(y_i, x_i^n)\rangle_i + \psi_i(x_i^n, y_i) - \psi_i(x_i^n, x_i^n)\bigr] + \langle T_i(y_i) - T_i(x_i^n),\ x_i^n - x_i^{n-1}\rangle_i \ge 0 \quad \forall y_i \in K_i, \tag{3.18}
\]
\[
\rho_i^{n+1}\bigl[F_i(x_i^{n+1}, y_i) + \langle N_i(u_i^n, v_i^n) - w_i,\ \eta_i(y_i, x_i^{n+1})\rangle_i + \psi_i(x_i^{n+1}, y_i) - \psi_i(x_i^{n+1}, x_i^{n+1})\bigr] + \langle T_i(y_i) - T_i(x_i^{n+1}),\ x_i^{n+1} - x_i^n\rangle_i \ge 0 \quad \forall y_i \in K_i. \tag{3.19}
\]

By taking $y_i = x_i^{n+1}$ in relation (3.18) and $y_i = x_i^n$ in relation (3.19), dividing them by $\rho_i^n$ and $\rho_i^{n+1}$ respectively, and taking into account $\eta_i(x_i^{n+1}, x_i^n) = -\eta_i(x_i^n, x_i^{n+1})$, one obtains by adding the two inequalities
\[
\begin{aligned}
&F_i(x_i^{n+1}, x_i^n) + F_i(x_i^n, x_i^{n+1}) + \langle N_i(u_i^{n-1}, v_i^{n-1}) - N_i(u_i^n, v_i^n),\ \eta_i(x_i^{n+1}, x_i^n)\rangle_i \\
&\quad + \psi_i(x_i^n, x_i^{n+1}) + \psi_i(x_i^{n+1}, x_i^n) - \psi_i(x_i^n, x_i^n) - \psi_i(x_i^{n+1}, x_i^{n+1}) \\
&\quad + \frac{1}{\rho_i^n}\langle T_i(x_i^{n+1} - x_i^n),\ x_i^n - x_i^{n-1}\rangle_i - \frac{1}{\rho_i^{n+1}}\langle T_i(x_i^n - x_i^{n+1}),\ x_i^n - x_i^{n+1}\rangle_i \ge 0.
\end{aligned} \tag{3.20}
\]

Since $F_i$ is $\sigma_i$-strongly monotone, $\psi_i$ is skew-symmetric, and $T_i$ is $\delta_i$-strongly positive and Lipschitz continuous, it follows from relation (3.20) that
\[
\langle N_i(u_i^{n-1}, v_i^{n-1}) - N_i(u_i^n, v_i^n),\ \eta_i(x_i^{n+1}, x_i^n)\rangle_i + \frac{\|T_i\|}{\rho_i^n}\,\|x_i^n - x_i^{n-1}\|_i\,\|x_i^n - x_i^{n+1}\|_i \ge \Bigl(\sigma_i + \frac{\delta_i}{\rho_i^{n+1}}\Bigr)\|x_i^n - x_i^{n+1}\|_i^2. \tag{3.21}
\]

Since $N_1$ is $(\beta_1, \xi_1)$-mixed Lipschitz continuous, $R_1$ is $k_1$-$H_1$-Lipschitz continuous, and $A_1$ is $\mu_1$-$H_2$-Lipschitz continuous, it follows from Algorithm 3.4 that
\[
\begin{aligned}
\bigl\|N_1(u_1^{n-1}, v_1^{n-1}) - N_1(u_1^n, v_1^n)\bigr\| &\le \beta_1\|u_1^{n-1} - u_1^n\|_1 + \xi_1\|v_1^{n-1} - v_1^n\|_2 \\
&\le \beta_1\Bigl(1 + \frac{1}{n}\Bigr) H_1\bigl(R_1(x_1^{n-1}), R_1(x_1^n)\bigr) + \xi_1\Bigl(1 + \frac{1}{n}\Bigr) H_2\bigl(A_1(x_2^{n-1}), A_1(x_2^n)\bigr) \\
&\le \beta_1 k_1\Bigl(1 + \frac{1}{n}\Bigr)\|x_1^{n-1} - x_1^n\|_1 + \xi_1 \mu_1\Bigl(1 + \frac{1}{n}\Bigr)\|x_2^{n-1} - x_2^n\|_2.
\end{aligned} \tag{3.22}
\]


Since $\eta_1$ is $\tau_1$-Lipschitz continuous, it follows from relation (3.21), considered for $i = 1$, that
\[
\Bigl(\sigma_1 + \frac{\delta_1}{\rho_1^{n+1}}\Bigr)\|x_1^n - x_1^{n+1}\|_1 \le \Bigl(\tau_1 \beta_1 k_1\Bigl(1 + \frac{1}{n}\Bigr) + \frac{\|T_1\|}{\rho_1^n}\Bigr)\|x_1^{n-1} - x_1^n\|_1 + \tau_1 \xi_1 \mu_1\Bigl(1 + \frac{1}{n}\Bigr)\|x_2^{n-1} - x_2^n\|_2. \tag{3.23}
\]
Hence,
\[
\|x_1^n - x_1^{n+1}\|_1 \le \theta_1^n\|x_1^{n-1} - x_1^n\|_1 + \vartheta_1^n\|x_2^{n-1} - x_2^n\|_2, \tag{3.24}
\]
with
\[
\theta_1^n = \frac{\rho_1^{n+1}\tau_1 \beta_1 k_1\bigl(1 + 1/n\bigr) + \rho_1^{n+1}\|T_1\|/\rho_1^n}{\rho_1^{n+1}\sigma_1 + \delta_1}, \qquad \vartheta_1^n = \frac{\rho_1^{n+1}\tau_1 \xi_1 \mu_1\bigl(1 + 1/n\bigr)}{\rho_1^{n+1}\sigma_1 + \delta_1}. \tag{3.25}
\]

Similarly, by the assumptions on $F_2$, $N_2$, $\eta_2$, $R_2$, $A_2$, and $T_2$, it follows from relation (3.21), considered for $i = 2$, that
\[
\|x_2^n - x_2^{n+1}\|_2 \le \theta_2^n\|x_1^{n-1} - x_1^n\|_1 + \vartheta_2^n\|x_2^{n-1} - x_2^n\|_2, \tag{3.26}
\]
with
\[
\theta_2^n = \frac{\rho_2^{n+1}\tau_2 \beta_2 k_2\bigl(1 + 1/n\bigr)}{\rho_2^{n+1}\sigma_2 + \delta_2}, \qquad \vartheta_2^n = \frac{\rho_2^{n+1}\tau_2 \xi_2 \mu_2\bigl(1 + 1/n\bigr) + \rho_2^{n+1}\|T_2\|/\rho_2^n}{\rho_2^{n+1}\sigma_2 + \delta_2}. \tag{3.27}
\]

Adding the inequalities (3.24) and (3.26), we obtain
\[
\|x_1^n - x_1^{n+1}\|_1 + \|x_2^n - x_2^{n+1}\|_2 \le \bigl(\theta_1^n + \theta_2^n\bigr)\|x_1^{n-1} - x_1^n\|_1 + \bigl(\vartheta_1^n + \vartheta_2^n\bigr)\|x_2^{n-1} - x_2^n\|_2 \le \Lambda^n\bigl(\|x_1^{n-1} - x_1^n\|_1 + \|x_2^{n-1} - x_2^n\|_2\bigr), \tag{3.28}
\]
where $\Lambda^n = \max\{\theta_1^n + \theta_2^n,\ \vartheta_1^n + \vartheta_2^n\}$. On $X_1 \times X_2$, let us consider the norm $\|\cdot\|_*$ defined by $\|(x, y)\|_* = \|x\|_1 + \|y\|_2$ for all $(x, y) \in X_1 \times X_2$; then $(X_1 \times X_2, \|\cdot\|_*)$ is a Banach space. It follows from relation (3.28) that
\[
\bigl\|(x_1^n, x_2^n) - (x_1^{n+1}, x_2^{n+1})\bigr\|_* \le \Lambda^n\bigl\|(x_1^{n-1}, x_2^{n-1}) - (x_1^n, x_2^n)\bigr\|_*. \tag{3.29}
\]
Taking account of the assumptions, it is easy to see that $\Lambda^n \to \Lambda$ as $n \to \infty$. From condition $(C_1)$ in relation (3.17), we know that $0 < \Lambda < 1$. Hence, there exist $\Lambda_0 \in (0, 1)$ and $n_0 > 0$ such that $\Lambda^n \le \Lambda_0 < 1$ for all $n \ge n_0$. Therefore, it follows from (3.29) that
\[
\bigl\|(x_1^n, x_2^n) - (x_1^{n+1}, x_2^{n+1})\bigr\|_* \le \Lambda_0\bigl\|(x_1^{n-1}, x_2^{n-1}) - (x_1^n, x_2^n)\bigr\|_*. \tag{3.30}
\]
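The step from (3.30) to the Cauchy property is the usual geometric-series argument. As a purely numerical illustration (a one-dimensional toy contraction of our own choosing, unrelated to the Banach spaces of the theorem), one can check that a uniform factor $\Lambda_0 < 1$ on successive differences forces convergence:

```python
# Toy check of the geometric-decay argument behind (3.30): a map whose
# successive differences shrink by a factor Lam0 < 1 yields a convergent
# (Cauchy) sequence.  All constants here are illustrative.
Lam0 = 0.6
T = lambda x: Lam0 * x + 1.0          # affine contraction, fixed point 2.5

xs = [0.0]
for _ in range(60):
    xs.append(T(xs[-1]))
diffs = [abs(b - a) for a, b in zip(xs, xs[1:])]

# each difference is exactly Lam0 times the previous one ...
assert all(abs(d1 - Lam0 * d0) < 1e-12 for d0, d1 in zip(diffs, diffs[1:]))
# ... so the distance to the limit is dominated by a geometric tail
assert abs(xs[-1] - 2.5) <= diffs[0] * Lam0**59 / (1 - Lam0) + 1e-12
```

The same tail bound is what shows $\{(x_1^n, x_2^n)\}$ is Cauchy in the proof below.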


This implies that $\{(x_1^n, x_2^n)\}$ is a Cauchy sequence in $X_1 \times X_2$. Thus, $(x_1^n, x_2^n)$ converges strongly to some $(x_1, x_2) \in K_1 \times K_2$. By Algorithm 3.4 and the Lipschitz continuity assumptions on $R_1$, $R_2$, $A_1$, and $A_2$, we have
\[
\begin{aligned}
\|u_i^{n+1} - u_i^n\|_1 &\le \Bigl(1 + \frac{1}{n+1}\Bigr) H_1\bigl(R_i(x_1^{n+1}), R_i(x_1^n)\bigr) \le k_i\Bigl(1 + \frac{1}{n+1}\Bigr)\|x_1^{n+1} - x_1^n\|_1, \\
\|v_i^{n+1} - v_i^n\|_2 &\le \Bigl(1 + \frac{1}{n+1}\Bigr) H_2\bigl(A_i(x_2^{n+1}), A_i(x_2^n)\bigr) \le \mu_i\Bigl(1 + \frac{1}{n+1}\Bigr)\|x_2^{n+1} - x_2^n\|_2.
\end{aligned} \tag{3.31}
\]
It follows, for each $i \in I$, that $\{u_i^n\}$ is a Cauchy sequence in $X_1^*$ and $\{v_i^n\}$ is a Cauchy sequence in $X_2^*$. Thus, there exists $(u_i, v_i) \in X_1^* \times X_2^*$ such that $(u_i^n, v_i^n)$ converges strongly to $(u_i, v_i)$. Noting that $u_1^n \in R_1(x_1^n)$, it follows that
\[
d\bigl(u_1, R_1(x_1)\bigr) \le \|u_1 - u_1^n\|_1 + d\bigl(u_1^n, R_1(x_1^n)\bigr) + H_1\bigl(R_1(x_1^n), R_1(x_1)\bigr) \le \|u_1 - u_1^n\|_1 + k_1\|x_1 - x_1^n\|_1 \longrightarrow 0 \quad \text{as } n \to \infty. \tag{3.32}
\]
Hence, we must have $u_1 \in R_1(x_1)$. Similarly, one can show that $u_2 \in R_2(x_1)$, $v_1 \in A_1(x_2)$, and $v_2 \in A_2(x_2)$. By Algorithm 3.4, we have, for each $i \in I$,
\[
\rho_i^{n+1}\bigl[F_i(x_i^{n+1}, y_i) + \langle N_i(u_i^n, v_i^n) - w_i,\ \eta_i(y_i, x_i^{n+1})\rangle_i + \psi_i(x_i^{n+1}, y_i) - \psi_i(x_i^{n+1}, x_i^{n+1})\bigr] + \langle T_i(y_i) - T_i(x_i^{n+1}),\ x_i^{n+1} - x_i^n\rangle_i \ge 0 \quad \forall y_i \in K_i,\ n = 0, 1, 2, \dots. \tag{3.33}
\]

Since $N_i$ is $(\beta_i, \xi_i)$-mixed Lipschitz continuous and $\eta_i$ is continuous in the second argument, one has, for each $y_i \in K_i$,
\[
\begin{aligned}
&\bigl|\langle N_i(u_i^n, v_i^n) - w_i,\ \eta_i(y_i, x_i^{n+1})\rangle_i - \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i\bigr| \\
&\quad \le \bigl|\langle N_i(u_i^n, v_i^n) - N_i(u_i, v_i),\ \eta_i(y_i, x_i^{n+1})\rangle_i\bigr| + \bigl|\langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i^{n+1}) - \eta_i(y_i, x_i)\rangle_i\bigr| \\
&\quad \le \bigl\|N_i(u_i^n, v_i^n) - N_i(u_i, v_i)\bigr\|\,\bigl\|\eta_i(y_i, x_i^{n+1})\bigr\| + \bigl|\langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i^{n+1}) - \eta_i(y_i, x_i)\rangle_i\bigr|.
\end{aligned} \tag{3.34}
\]
Hence,
\[
\bigl|\langle N_i(u_i^n, v_i^n) - w_i,\ \eta_i(y_i, x_i^{n+1})\rangle_i - \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i\bigr| \longrightarrow 0 \quad \text{as } n \to \infty. \tag{3.35}
\]


From the monotonicity of $F_i$ and the linearity of $T_i$, one has
\[
\begin{aligned}
&\langle N_i(u_i^n, v_i^n) - w_i,\ \eta_i(y_i, x_i^{n+1})\rangle_i - \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i + \psi_i(x_i^{n+1}, y_i) - \psi_i(x_i^{n+1}, x_i^{n+1}) \\
&\qquad + \frac{\|T_i\|}{\rho_i^{n+1}}\,\|y_i - x_i^{n+1}\|\,\|x_i^{n+1} - x_i^n\| \ge F_i(y_i, x_i^{n+1}) - \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i \quad \forall y_i \in K_i,\ n = 0, 1, 2, \dots.
\end{aligned} \tag{3.36}
\]
Taking into account result (3.35) and since, for each $i \in I$, $F_i(y_i, \cdot)$ is a lower semicontinuous function, $\psi_i$ is continuous, and the sequence $\{x_i^n\}$ converges strongly to $x_i$, it follows from relation (3.36), by passing to the limit as $n \to \infty$, that
\[
\psi_i(y_i, x_i) - \psi_i(x_i, x_i) \ge F_i(y_i, x_i) - \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i. \tag{3.37}
\]

Now, for each $i \in I$, $t \in (0, 1]$, and $y_i \in K_i$, set $y_i(t) = t y_i + (1 - t)x_i$. Since $K_i$ is convex, $y_i(t) \in K_i$ for $t \in (0, 1]$; it follows that
\[
\langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i(t), x_i)\rangle_i + \psi_i(y_i(t), x_i) - \psi_i(x_i, x_i) \ge F_i(y_i(t), x_i). \tag{3.38}
\]
Taking into account the convexity of $\psi_i(\cdot, x_i)$, the fact that $\eta_i$ is affine with respect to the first argument, and $\eta_i(x_i, x_i) = 0$, one has
\[
F_i(y_i(t), x_i) \le t\bigl[\psi_i(y_i, x_i) - \psi_i(x_i, x_i)\bigr] + t\,\langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i. \tag{3.39}
\]
On the other hand, since $F_i(x_i, x_i) = 0$ for each $x_i \in K_i$ and $F_i(y_i(t), \cdot)$ is convex, it follows that
\[
0 = F_i(y_i(t), y_i(t)) \le t\,F_i(y_i(t), y_i) + (1 - t)\,F_i(y_i(t), x_i). \tag{3.40}
\]
Hence, from relation (3.39), one has
\[
0 \le t\,F_i(y_i(t), y_i) + (1 - t)\,t\bigl[\psi_i(y_i, x_i) - \psi_i(x_i, x_i) + \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i\bigr]. \tag{3.41}
\]
Therefore, dividing by $t$, for all $t \in (0, 1]$ one has
\[
0 \le F_i(y_i(t), y_i) + (1 - t)\bigl[\psi_i(y_i, x_i) - \psi_i(x_i, x_i) + \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i\bigr]. \tag{3.42}
\]
Since, for each $y_i \in K_i$, $x \mapsto F_i(x, y_i)$ is upper hemicontinuous, it follows by passing to the limit as $t \to 0^+$ in the previous inequality that
\[
F_i(x_i, y_i) + \langle N_i(u_i, v_i) - w_i,\ \eta_i(y_i, x_i)\rangle_i + \psi_i(x_i, y_i) - \psi_i(x_i, x_i) \ge 0 \quad \forall y_i \in K_i, \tag{3.43}
\]
which completes the proof.


4. Commentaries

In conclusion, the approach used in this paper allows us to improve and extend some recent results in the literature related to the problem studied. More precisely:

(1) Theorems 3.2 and 3.5 improve recent results given by Ding [18, Theorems 3.1 and 4.1], since the bifunction $F_i$ need not be $\delta_i$-Lipschitz continuous nor weakly upper semicontinuous with respect to the first argument at any step of the procedure: first, when dealing with the existence of solutions for the auxiliary problem, and second, when studying the convergence of the algorithm;

(2) we mention also that all the results obtained in [18] hold under the assumption $\mathrm{int}\{y_i \in K_i : \psi_i(y_i, y_i) < \infty\} = \mathrm{int}(K_i) \neq \emptyset$; in our approach, this assumption is not needed;

(3) in the special case where $F_i \equiv 0$, the results obtained improve those of Ding and Wang [17], since the assumptions on the bifunctions $\psi_i$ have been relaxed.

References

[1] C. Baiocchi and A. Capelo, Variational and Quasivariational Inequalities: Applications to Free Boundary Problems, John Wiley & Sons, New York, NY, USA, 1984.
[2] E. Blum and W. Oettli, "From optimization and variational inequalities to equilibrium problems," The Mathematics Student, vol. 63, no. 1–4, pp. 123–145, 1994.
[3] I. Konnov, "Generalized monotone equilibrium problems and variational inequalities," in Handbook of Generalized Convexity and Generalized Monotonicity, Nonconvex Optimization and Its Applications, N. Hadjisavvas, S. Komlósi, and S. Schaible, Eds., pp. 559–618, Springer, New York, NY, USA, 2005.
[4] M. Patriksson, Nonlinear Programming and Variational Inequality Problems: A Unified Approach, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
[5] A. Bnouhachem and M. A. Noor, "A new iterative method for variational inequalities," Applied Mathematics and Computation, vol. 182, no. 2, pp. 1673–1682, 2006.
[6] P. L. Combettes and S. A. Hirstoaga, "Equilibrium programming using proximal-like algorithms," Mathematical Programming, vol. 78, no. 1, pp. 29–41, 1997.
[7] X.-F. He, J. Chen, and Z. He, "Generalized projection method for a variational inequality system with different mappings in Banach spaces," Computers & Mathematics with Applications, vol. 58, no. 7, pp. 1391–1396, 2009.
[8] B.-S. He, Z.-H. Yang, and X.-M. Yuan, "An approximate proximal-extragradient type method for monotone variational inequalities," Journal of Mathematical Analysis and Applications, vol. 300, no. 2, pp. 362–374, 2004.
[9] A. Moudafi and M. Théra, "Proximal and dynamical approaches to equilibrium problems," in Lecture Notes in Economics and Mathematical Systems, vol. 477, pp. 187–201, Springer, Berlin, Germany, 1999.
[10] M. A. Noor, "Projection-proximal methods for general variational inequalities," Journal of Mathematical Analysis and Applications, vol. 318, no. 1, pp. 53–62, 2006.
[11] R. U. Verma, "General convergence analysis for two-step projection methods and applications to variational problems," Applied Mathematics Letters, vol. 18, no. 11, pp. 1286–1292, 2005.
[12] X. P. Ding, "Existence of solutions and an algorithm for mixed variational-like inequalities in Banach spaces," Journal of Optimization Theory and Applications, vol. 127, no. 2, pp. 285–302, 2005.
[13] X. P. Ding, J.-C. Yao, and L.-C. Zeng, "Existence and algorithm of solutions for generalized strongly nonlinear mixed variational-like inequalities in Banach spaces," Computers & Mathematics with Applications, vol. 55, no. 4, pp. 669–679, 2008.
[14] G. Mastroeni, "On auxiliary principle for equilibrium problems," in Equilibrium Problems and Variational Models, P. Daniele, F. Giannessi, and A. Maugeri, Eds., pp. 289–298, Kluwer, Dordrecht, The Netherlands, 2003.
[15] L. C. Zeng, S. Schaible, and J. C. Yao, "Iterative algorithm for generalized set-valued strongly nonlinear mixed variational-like inequalities," Journal of Optimization Theory and Applications, vol. 124, no. 3, pp. 725–738, 2005.
[16] K. R. Kazmi and F. A. Khan, "Auxiliary problems and algorithm for a system of generalized variational-like inequality problems," Applied Mathematics and Computation, vol. 187, no. 2, pp. 789–796, 2007.
[17] X. P. Ding and Z. B. Wang, "The auxiliary principle and an algorithm for a system of generalized set-valued mixed variational-like inequality problems in Banach spaces," Journal of Computational and Applied Mathematics, vol. 233, no. 11, pp. 2876–2883, 2010.
[18] X.-P. Ding, "Auxiliary principle and approximation solvability for a system of new generalized mixed equilibrium problems in reflexive Banach spaces," Applied Mathematics and Mechanics, vol. 32, no. 2, pp. 231–240, 2011.
[19] B. S. Mordukhovich, B. Panicucci, M. Pappalardo, and M. Passacantando, "Hybrid proximal methods for equilibrium problems," Optimization Letters, in press.
[20] A. S. Antipin, "Computation of fixed points of extremal mappings by means of gradient-type methods," Zhurnal Vychislitel'noĭ Matematiki i Matematicheskoĭ Fiziki, vol. 37, no. 1, pp. 42–53, 1997.
[21] O. Chadli, Z. Chbani, and H. Riahi, "Equilibrium problems with generalized monotone bifunctions and applications to variational inequalities," Journal of Optimization Theory and Applications, vol. 105, no. 2, pp. 299–323, 2000.
[22] S. B. Nadler Jr., "Multi-valued contraction mappings," Pacific Journal of Mathematics, vol. 30, pp. 475–488, 1969.

Hindawi Publishing Corporation Advances in Operations Research Volume 2012, Article ID 281396, 20 pages doi:10.1155/2012/281396

Research Article

An Asymmetric Proximal Decomposition Method for Convex Programming with Linearly Coupling Constraints

Xiaoling Fu,¹ Xiangfeng Wang,² Haiyan Wang,¹ and Ying Zhai³

¹ Institute of System Engineering, Southeast University, Nanjing 210096, China
² Department of Mathematics, Nanjing University, Nanjing 210093, China
³ Department of Mathematics, Guangxi Normal University, Guilin 541004, China

Correspondence should be addressed to Xiaoling Fu, [email protected]

Received 17 November 2011; Accepted 10 January 2012

Academic Editor: Abdellah Bnouhachem

Copyright © 2012 Xiaoling Fu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The problems studied are separable variational inequalities with linearly coupling constraints. Some existing decomposition methods are very problem specific, and their computational load is quite costly. Combining the ideas of the proximal point algorithm (PPA) and the augmented Lagrangian method (ALM), we propose an asymmetric proximal decomposition method (AsPDM) to solve a wide variety of separable problems. By adding an auxiliary quadratic term to the general Lagrangian function, our method can take advantage of the separable feature. We also present an inexact version of AsPDM to reduce the computational load of each iteration. In the computation process, the inexact version only uses function values. Moreover, the inexact criterion and the step size can be implemented in parallel. The convergence of the proposed method is proved, and numerical experiments are employed to show the advantage of AsPDM.

1. Introduction

The original model considered here is the convex minimization problem with linearly coupling constraints:
\[
\begin{aligned}
\text{minimize} \quad & \sum_{j=1}^N \theta_j(x_j) \\
\text{subject to} \quad & \sum_{j=1}^N (A_j x_j - b_j) \ge 0 \quad \Bigl(\text{or } \sum_{j=1}^N (A_j x_j - b_j) = 0\Bigr), \\
& x_i \in X_i, \quad i = 1, \dots, N,
\end{aligned} \tag{1.1}
\]


where $X_i \subset R^{n_i}$, the $A_i$ are given $m \times n_i$ matrices, the $b_i$ are given $m$-vectors, and $\theta_i : R^{n_i} \to R$ is the $i$-th block convex differentiable function, for each $i = 1, \dots, N$. This special problem is called a convex separable problem. Problems possessing such a separable structure arise in discrete-time deterministic optimal control and in the scheduling of hydroelectric power generation [1]. Since the $\theta_i$ are differentiable, setting $\nabla\theta_i(x_i) = f_i(x_i)$, the well-known minimum principle in nonlinear programming yields an equivalent form of problem (1.1): find $x^* = (x_1^*, \dots, x_N^*) \in \Omega$ such that
\[
(x_i - x_i^*)^T f_i(x_i^*) \ge 0, \quad i = 1, \dots, N, \quad \forall x \in \Omega, \tag{1.2}
\]
where
\[
\Omega = \Bigl\{(x_1, \dots, x_N)\ \Bigl|\ \sum_{j=1}^N A_j x_j \ge b\ \Bigl(\text{or } \sum_{j=1}^N A_j x_j = b\Bigr),\ x_i \in X_i,\ i = 1, \dots, N\Bigr\}, \qquad b = \sum_{j=1}^N b_j. \tag{1.3}
\]

Problems of this type are called separable variational inequalities (VIs). We will utilize this equivalent formulation and provide a method for the solution of separable VIs.

One of the best-known algorithms for solving convex programs, or equivalent VIs, is the proximal point algorithm (PPA), first proposed by Martinet (see [2]) and studied thoroughly by Rockafellar [3, 4]. PPA and its dual version, the method of multipliers, draw on a large volume of prior work by various authors [5–9]. However, the classical PPA and most of its subsequent variants cannot take advantage of the separability of the original problem, and this makes them inefficient for solving separably structured problems. One major direction of PPA research is to develop decomposition methods for separable convex programming and VIs. The motivations for decomposition techniques are splitting the problem into isolated smaller or easier subproblems and parallelizing computations on specific parallel computing devices. Decomposition-type methods [10–14] for large-scale problems have been widely studied in optimization as well as in variational problems and are explicitly or implicitly derived from PPA. However, most of those methods can only solve separable problems with special equality constraints:
\[
\text{minimize} \quad \theta(x) + \psi(y), \quad \text{subject to} \quad Ax + y = 0, \quad x \in X,\ y \in Y. \tag{1.4}
\]
Two very well-known methods for solving equality-constrained convex problems and VIs are the augmented Lagrangian method (ALM) [15, 16] and the alternating direction method (ADM) [17]. The classical ALM has been deeply studied and has many advantages over general Lagrangian methods; see [18] for more detail. However, it cannot preserve separability. ADM is a different method, closely related to ALM, which essentially can preserve separability for problems with two operators ($N = 2$). Recently, the separable augmented Lagrangian method (SALM) [19, 20] overcame the nonseparability of ALM. For example, for solving problem

(1.1) with equality constraints, Hamdi and Mahey [19] allocated a resource quantity $y_i$ to each block $A_i x_i - b_i$, leading (1.1) to an enlarged problem in $\prod_{j=1}^N X_j \times R^{mN}$:
\[
\begin{aligned}
\text{minimize} \quad & \sum_{j=1}^N \theta_j(x_j) \\
\text{subject to} \quad & \sum_{j=1}^N y_j = 0, \\
& A_i x_i - b_i + y_i = 0, \quad i = 1, \dots, N, \\
& x_i \in X_i, \quad y_i \in R^m, \quad i = 1, \dots, N.
\end{aligned} \tag{1.5}
\]

It is worth mentioning that (1.5) is equivalent only to problem (1.1) with equality constraints. The augmented Lagrangian function of (1.5) is
\[
L(x, y, u, \tau) = \sum_{j=1}^N L_j(x_j, y_j, u_j, \tau), \tag{1.6}
\]
with
\[
L_j(x_j, y_j, u_j, \tau) = \theta_j(x_j) - \langle u_j,\ A_j x_j - b_j + y_j\rangle + \frac{\tau}{2}\bigl\|A_j x_j - b_j + y_j\bigr\|^2. \tag{1.7}
\]
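For readers who prefer code, the block term (1.7) can be evaluated directly. The sketch below is a minimal illustration with toy data; the quadratic $\theta_j$, the identity $A_j$, and all numbers are our own assumptions, not data from the paper's experiments.

```python
import numpy as np

# Direct evaluation of one block term (1.7) of the augmented Lagrangian;
# theta_j, A_j, b_j, u_j, tau are toy placeholders.
def L_j(theta_j, A_j, b_j, x_j, y_j, u_j, tau):
    r = A_j @ x_j - b_j + y_j               # block residual A_j x_j - b_j + y_j
    return theta_j(x_j) - u_j @ r + 0.5 * tau * (r @ r)

theta = lambda x: 0.5 * x @ x
val = L_j(theta, np.eye(2), np.zeros(2), np.ones(2), np.zeros(2), np.zeros(2), 2.0)
assert val == 3.0                           # 0.5*2 (theta) + 0.5*2*2 (penalty)
```

Note the separable structure: $L$ in (1.6) is just the sum of such independent block terms.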

SALM finds a saddle point of problem (1.5) by the following stages:

(i) $x_i^{k+1} = \arg\min_{x_i \in X_i} L_i(x_i, y_i^k, u_i^k, \tau)$;

(ii) $y^{k+1} = \arg\min_{\sum_{j=1}^N y_j = 0} L(x^{k+1}, y, u^k, \tau)$;

(iii) $u_i^{k+1} = u_i^k + \tau\bigl(A_i x_i^{k+1} - b_i + y_i^{k+1}\bigr)$.

Note that the process in SALM for $x^{k+1}$ allows one to solve $N$ subproblems in parallel. This has great practical importance from the computational point of view. In fact, SALM belongs to the family of splitting algorithms and ADM for solving the special convex problem (1.4) with $\psi(y) = 0$ and
\[
A = \begin{pmatrix} A_1 & & \\ & \ddots & \\ & & A_N \end{pmatrix}; \tag{1.8}
\]

SALM has to introduce an additional variable $y$ to exploit the inner separable structure of the problem, which makes the problem larger. Moreover, SALM is suited to equality-constrained problems and is fraught with difficulties in solving inequality-constrained problems. To the best of our knowledge, there are few dedicated methods for solving inequality-constrained problems (1.1) or VI (1.2)-(1.3), except the decomposition method proposed by Tseng [21] and the PPA-based contraction method by He et al. [22]. The decomposition method in [21] decomposes the computation of $x^{k+1}$ at a fine level without introducing an additional variable $y$. But, in each iteration of this method, the minimization subproblems for $x^{k+1}$ depend on the step size of the multiplier, which greatly restricts the computation of the subproblems. The PPA-based contraction method in [22] has a nice decomposable structure; however, it has to solve the subproblems exactly. To solve (1.1) or VI (1.2)-(1.3), motivated by the PPA-based contraction method and SALM, we propose an asymmetric proximal decomposition method (AsPDM) which fully preserves the separable structure of the problem. Besides, it does not need to introduce the resource variables $y$ as SALM does, and the subproblems do not depend on the step size of the multiplier. In the following, we briefly describe our method for (1.1): we add an auxiliary quadratic term to the general Lagrangian function:
\[
L(x, x^k, \lambda) = \sum_{j=1}^N L_j(x_j, x_j^k, \lambda), \tag{1.9}
\]

with
\[
L_j(x_j, x_j^k, \lambda) = \theta_j(x_j) - \langle\lambda,\ A_j x_j - b_j\rangle + \frac{\beta_j}{2}\bigl\|x_j - x_j^k\bigr\|^2. \tag{1.10}
\]

The general framework of AsPDM is as follows.

Phase I:
\[
\tilde{x}_i^k = \arg\min_{x_i \in X_i} L_i(x_i, x_i^k, \lambda^k); \tag{1.11}
\]
Phase II:
\[
\tilde{\lambda}^k = P_\Lambda\Bigl[\lambda^k - \mu^{-1}\Bigl(\sum_{j=1}^N A_j \tilde{x}_j^k - b\Bigr)\Bigr], \qquad w^{k+1} = w^k - \alpha_k G\bigl(w^k - \tilde{w}^k\bigr), \qquad \Lambda = R_+^m\ (\text{or } R^m), \quad w = (x, \lambda). \tag{1.12}
\]

Here, $\beta_i > 0$, $\mu > 0$, $\alpha_k > 0$, and $G$ are properly chosen, as will be detailed in the later sections. Note that the first phase consists of $N$ isolated subproblems, each involving $x_i$ ($i = 1, \dots, N$) only; namely, it can be partitioned into $N$ independent lower-dimensional subproblems. Hence, this method can take advantage of the operators' separability. Since we mainly focus on solving the equivalent separable VI, we present this method under the VI framework and analyze its convergence in the following sections.

2. The Asymmetric Proximal Decomposition Method

2.1. Structured VI

The separable VI (1.2)-(1.3) consists of $N$ partitioned sub-VIs. Introducing a Lagrange multiplier vector $\lambda \in \Lambda$ ($\Lambda = R_+^m$ or $R^m$) associated with the linearly coupling constraint $\sum_{j=1}^N A_j x_j \ge b$ (or $\sum_{j=1}^N A_j x_j = b$), we equivalently formulate the separable VI (1.2)-(1.3) as an enlarged VI: find $w^* = (x^*, \lambda^*) \in \mathcal{W}$ such that
\[
(x_i - x_i^*)^T\bigl(f_i(x_i^*) - A_i^T\lambda^*\bigr) \ge 0, \quad i = 1, \dots, N, \qquad (\lambda - \lambda^*)^T\Bigl(\sum_{j=1}^N A_j x_j^* - b\Bigr) \ge 0, \quad \forall w \in \mathcal{W}, \tag{2.1}
\]
where
\[
\mathcal{W} = \mathcal{X} \times \Lambda, \qquad \mathcal{X} = \prod_{j=1}^N X_j. \tag{2.2}
\]

VI (2.1)-(2.2) is referred to as a structured variational inequality (SVI), denoted SVI($\mathcal{W}, Q$). Here,
\[
Q(w) = \begin{pmatrix} f_1(x_1) - A_1^T\lambda \\ \vdots \\ f_N(x_N) - A_N^T\lambda \\ \sum_{j=1}^N A_j x_j - b \end{pmatrix}. \tag{2.3}
\]

2.2. Preliminaries

We summarize some basic properties and related definitions which will be used in the following discussions.

Definition 2.1. (i) The mapping $f$ is said to be monotone if and only if
\[
(w - \bar{w})^T\bigl(f(w) - f(\bar{w})\bigr) \ge 0, \quad \forall w, \bar{w}. \tag{2.4}
\]
(ii) A function $f$ is said to be Lipschitz continuous if there is a constant $L > 0$ such that
\[
\|f(w) - f(\bar{w})\| \le L\,\|w - \bar{w}\|, \quad \forall w, \bar{w}. \tag{2.5}
\]

The projection onto a closed convex set is a basic concept in this paper. Let $\mathcal{W} \subset R^n$ be any closed convex set. We use $P_{\mathcal{W}}(w)$ to denote the projection of $w$ onto $\mathcal{W}$ under the Euclidean norm; that is,
\[
P_{\mathcal{W}}(w) = \arg\min\bigl\{\|w - \bar{w}\|\ \bigl|\ \bar{w} \in \mathcal{W}\bigr\}. \tag{2.6}
\]
The following lemmas are useful for the convergence analysis in this paper.


Lemma 2.2. Let $\mathcal{W}$ be a closed convex set in $R^n$; then one has

(1)
\[
\bigl(w - P_{\mathcal{W}}(w)\bigr)^T\bigl(\bar{w} - P_{\mathcal{W}}(w)\bigr) \le 0, \quad \forall w \in R^n,\ \forall \bar{w} \in \mathcal{W}, \tag{2.7}
\]
(2)
\[
\bigl\|P_{\mathcal{W}}(w) - \bar{w}\bigr\|^2 \le \|w - \bar{w}\|^2 - \bigl\|w - P_{\mathcal{W}}(w)\bigr\|^2, \quad \forall w \in R^n,\ \bar{w} \in \mathcal{W}. \tag{2.8}
\]
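Both inequalities of Lemma 2.2 are easy to spot-check numerically. The sketch below does so for the box $[0,1]^3$, whose projection is a coordinatewise clip; the box and the sample ranges are illustrative assumptions of ours, not part of the paper.

```python
import numpy as np

# Numerical spot-check of (2.7) and (2.8) for the box W = [0, 1]^3.
rng = np.random.default_rng(0)
P = lambda w: np.clip(w, 0.0, 1.0)      # projection onto the box

for _ in range(1000):
    w = rng.uniform(-2, 3, size=3)      # arbitrary point in R^3
    wbar = rng.uniform(0, 1, size=3)    # point inside W
    pw = P(w)
    assert (w - pw) @ (wbar - pw) <= 1e-12                                   # (2.7)
    assert (pw - wbar) @ (pw - wbar) <= (w - wbar) @ (w - wbar) - (w - pw) @ (w - pw) + 1e-12  # (2.8)
```

For boxes both inequalities hold coordinatewise, which is why the clip-based projection passes exactly.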

Proof. See [23].

Lemma 2.3. Let $\mathcal{W}$ be a closed convex set in $R^n$; then $w^*$ is a solution of VI($\mathcal{W}, Q$) if and only if
\[
w^* = P_{\mathcal{W}}\bigl[w^* - \beta Q(w^*)\bigr], \quad \forall \beta > 0. \tag{2.9}
\]

Proof. See [10, page 267].

Hence, solving VI($\mathcal{W}, Q$) is equivalent to finding a zero point of the residual function
\[
e(w, \beta) = w - P_{\mathcal{W}}\bigl[w - \beta Q(w)\bigr], \quad \beta > 0. \tag{2.10}
\]
Generally, $e(w)$ denotes $e(w, 1)$; it is referred to as the error bound of VI($\mathcal{W}, Q$), since it measures the distance of $w$ from the solution set.
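As a one-dimensional illustration of (2.10) (the interval and the mapping are our own toy choices), the residual vanishes exactly at the VI solution and is nonzero at the other sampled points:

```python
# Toy illustration of the residual function (2.10) on W = [0, 2] with
# Q(w) = w - 3; the VI solution is w* = 2, since Q(2) = -1 and
# (w - 2) * Q(2) >= 0 for every w in [0, 2].
def proj(w, lo=0.0, hi=2.0):            # projection onto the interval
    return min(max(w, lo), hi)

def e(w, beta=1.0):
    Q = w - 3.0
    return w - proj(w - beta * Q)

assert e(2.0) == 0.0                                     # residual vanishes at the solution
assert all(e(w) != 0.0 for w in (0.0, 0.5, 1.0, 1.9))    # nonzero elsewhere
```

This is exactly the quantity the stopping criterion of Remark 2.6 monitors in spirit: the iteration stops when the computable residual is small.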

2.3. The Presentation of the Exact AsPDM

In each iteration, by our construction, the method solves $N$ independent sub-VIs, each involving the individual variable $x_i$ only, so that the $x_i$ can be obtained in parallel. In what follows, to illustrate the method's practical significance, we interpret the algorithmic process as a system with a central authority and $N$ local administrators; each administrator attempts to unilaterally solve a certain problem under the presumption that the instructions given by the authority are parametric inputs and the responses of the other administrators' actions are not available; namely, the $N$ local administrators act synchronously and independently once they receive the information given by the central authority. We briefly describe our method, which consists of two main phases.

Phase I: For an arbitrary $(x_1^k, \dots, x_N^k, \lambda^k)$ given by the central authority, each local administrator uses his own way to offer the solution (denoted $\tilde{x}_i^k$) of his individual problem: find $\tilde{x}_i^k \in X_i$ such that
\[
\bigl(x_i - \tilde{x}_i^k\bigr)^T\bigl(f_i(\tilde{x}_i^k) + \beta_i(\tilde{x}_i^k - x_i^k) - A_i^T\lambda^k\bigr) \ge 0, \quad \forall x_i \in X_i. \tag{2.11}
\]


Phase II: After the $N$ local administrators accomplish their tasks, the central authority collects the resulting $\tilde{x}_1^k, \dots, \tilde{x}_N^k$ and the corresponding $f_1(\tilde{x}_1^k), \dots, f_N(\tilde{x}_N^k)$, which can be viewed as the feedback information from the $N$ local administrators, and sets
\[
\tilde{\lambda}^k = P_\Lambda\Bigl[\lambda^k - \mu^{-1}\Bigl(\sum_{j=1}^N A_j \tilde{x}_j^k - b\Bigr)\Bigr]. \tag{2.12}
\]

Here, $\mu$ is suitably chosen by the central authority. The central authority thus aims to employ this feedback information effectively to provide $(x_1^{k+1}, \dots, x_N^{k+1}, \lambda^{k+1})$, which will be beneficial for the next iteration loop. In this paper, the proposed methods update the new iterate by one of the following two forms:
\[
w_1^{k+1} = w^k - \alpha_k G\bigl(w^k - \tilde{w}^k\bigr), \tag{2.13}
\]
or
\[
w_2^{k+1} = P_{\mathcal{W}}\bigl[w^k - \alpha_k Q\bigl(\tilde{w}^k\bigr)\bigr], \tag{2.14}
\]
where $\alpha_k > 0$ is a specific step size and
\[
G = \begin{pmatrix} \beta_1 I & & & A_1^T \\ & \ddots & & \vdots \\ & & \beta_N I & A_N^T \\ & & & \mu I \end{pmatrix}. \tag{2.15}
\]

We make the standard assumptions to guarantee that the problem under consideration is solvable and the proposed methods are well defined.

Assumption A. $f_i(x_i)$ is monotone and Lipschitz continuous, $i = 1, \dots, N$.

By this assumption, it is easy to see that $Q(w)$ is monotone.
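As a concrete sanity check, the sketch below runs Phase I (2.11), Phase II (2.12), and update (2.13) on a tiny equality-constrained instance ($N = 2$, scalar blocks, $\theta_i(x) = x^2/2$, constraint $x_1 + x_2 = 1$; all data are toy assumptions of ours). The step length follows the later formula (2.21) with $\xi^k = 0$ and a relaxation factor $\gamma = 1.5$; for quadratic $\theta_i$, Phase I has a closed form.

```python
import numpy as np

# Toy run of exact AsPDM: minimize 0.5*x1^2 + 0.5*x2^2  s.t.  x1 + x2 = 1
# (A_i = [1], Lambda = R^m, so P_Lambda is the identity).  The solution is
# x* = (0.5, 0.5) with multiplier lam* = 0.5.
beta, eta = 1.0, 0.5
mu = 1.0 / (2.0 * beta) + 1.0 / (2.0 * beta) + eta    # safe bound of Remark 2.8
G = np.array([[beta, 0.0, 1.0],                       # matrix G of (2.15)
              [0.0, beta, 1.0],
              [0.0, 0.0, mu]])

w = np.zeros(3)                                       # w = (x1, x2, lam)
for _ in range(300):
    x, lam = w[:2], w[2]
    x_t = (lam + beta * x) / (1.0 + beta)             # Phase I, closed form
    lam_t = lam - (x_t.sum() - 1.0) / mu              # Phase II (2.12)
    d = w - np.append(x_t, lam_t)
    Gd = G @ d
    if Gd @ Gd < 1e-24:                               # residual already tiny
        break
    w = w - 1.5 * (d @ Gd) / (Gd @ Gd) * Gd           # update (2.13), gamma = 1.5

assert np.allclose(w, [0.5, 0.5, 0.5], atol=1e-4)
```

Note how each Phase I solve touches only its own block, so for general $N$ the first loop body parallelizes trivially.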

2.4. Presentation of the Inexact AsPDM

In this section, the inexact version of the AsPDM is presented, and some remarks are briefly made. For later convenience of analysis, we denote
\[
D = \begin{pmatrix} \dfrac{\beta_1}{2} I & & & \\ & \ddots & & \\ & & \dfrac{\beta_N}{2} I & \\ & & & \eta I \end{pmatrix}. \tag{2.16}
\]


At each iteration, we solve the $N$ sub-VIs (see (2.11)) independently. No doubt, the computational load for an exact solution of (2.11) is usually expensive. Hence, it is desirable to solve (2.11) inexactly under a relatively relaxed inexact criterion. We now describe and analyze the inexact method. Each iteration consists of two main phases: one provides an inexact solution of (2.11), and the other employs the inexact solution to offer a new iterate for the next iteration. The first phase works as follows. At the beginning of the $k$-th iteration, an iterate $w^k = (x^k, \lambda^k)$ is given. If $x_i^k = P_{X_i}[x_i^k - (f_i(x_i^k) - A_i^T\lambda^k)]$, then $x_i^k$ is the exact solution of the $i$-th sub-VI, and there is nothing to do for the $i$-th sub-VI. Otherwise, we should find $\tilde{x}_i^k \in X_i$ such that
\[
\bigl(x_i - \tilde{x}_i^k\bigr)^T\bigl(f_i(\tilde{x}_i^k) + \beta_i(\tilde{x}_i^k - x_i^k) - A_i^T\lambda^k + \xi_{x_i}^k\bigr) \ge 0, \quad \forall x_i \in X_i, \tag{2.17}
\]
with
\[
\xi_{x_i}^k = f_i(x_i^k) - f_i(\tilde{x}_i^k). \tag{2.18}
\]

Here, the obtained $\beta_i$ should satisfy the following two inexact criteria:
\[
\bigl(x_i^k - \tilde{x}_i^k\bigr)^T \xi_{x_i}^k \le \frac{\nu\beta_i}{2}\bigl\|x_i^k - \tilde{x}_i^k\bigr\|^2, \quad \nu \in (0, 1), \tag{2.18*a}
\]
\[
\bigl\|\xi_{x_i}^k\bigr\| \le \frac{\beta_i}{\sqrt{2}}\bigl\|x_i^k - \tilde{x}_i^k\bigr\|. \tag{2.18*b}
\]

Once one of the above criteria fails to be satisfied, we increase $\beta_i$ by $\beta_i := 1.8\,\beta_i$ and turn back to solve the $i$-th sub-VI of (2.17) with this updated $\beta_i$. It should be noted that both inexact criteria are quite easy to check, since they do not contain any unknown variables. In addition, another favorable characterization of these criteria is that they are independent; namely, they only involve $x_i^k$ and $\tilde{x}_i^k$, irrelevant to $x_j^k$, $\tilde{x}_j^k$ ($j \neq i$).
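A minimal sketch of this first phase, assuming a toy monotone mapping and a box constraint: it computes $\tilde{x}$ via the explicit formula (2.23) of Remark 2.4 below and enlarges $\beta$ by the factor 1.8 until both criteria hold.

```python
import numpy as np

# Sketch of the inexact Phase I: compute x~ by the explicit formula (2.23)
# and enlarge beta_i until criteria (2.18*a)-(2.18*b) hold.
# f, the box X_i, and all numbers are illustrative assumptions.
def inexact_phase1(f, proj, x_k, AT_lam, beta=1.0, nu=0.5, max_tries=50):
    for _ in range(max_tries):
        x_t = proj(x_k - (f(x_k) - AT_lam) / beta)   # formula (2.23)
        xi = f(x_k) - f(x_t)                         # xi^k of (2.18)
        d = x_k - x_t
        if d @ xi <= 0.5 * nu * beta * (d @ d) and \
           np.linalg.norm(xi) <= beta / np.sqrt(2.0) * np.linalg.norm(d):
            return x_t, beta
        beta *= 1.8
    raise RuntimeError("criteria not satisfied")

f = lambda x: 2.0 * x                                # monotone, Lipschitz (L = 2)
proj = lambda x: np.clip(x, -10.0, 10.0)
x_t, beta = inexact_phase1(f, proj, np.array([1.0]), np.array([0.0]))
assert beta >= 2.0 * np.sqrt(2.0)                    # criteria force beta ~ L
```

Because $f$ here has Lipschitz constant 2, criterion (2.18*a) first holds at $\beta = 1.8^4 \approx 10.5$; the loop needs only function values of $f$, matching the paper's claim that the inexact version is derivative-free beyond $f$ itself.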

2.19

j1

where μ ∈ 0,

N

T j1  Aj Aj /2βj 



η here, η > 0 is suitably chosen to satisfy

wk − w k

2 T      G wk − w  k ≥ wk − w  k . D

2.20

Now we use this $\tilde{w}^k = (\tilde{x}^k, \tilde{\lambda}^k)$ (or $Q(\tilde{w}^k)$) to construct the new iterate. Here, we provide two simple forms for the new iterate:
\[
w_1^{k+1} = w^k - \alpha_k\bigl[G\bigl(w^k - \tilde{w}^k\bigr) - \xi^k\bigr], \qquad \xi^k = \bigl(\xi_x^k, 0\bigr), \qquad \xi_x^k = \bigl(\xi_{x_1}^k, \dots, \xi_{x_N}^k\bigr), \tag{2.21*a}
\]
or
\[
w_2^{k+1} = P_{\mathcal{W}}\bigl[w^k - \alpha_k Q\bigl(\tilde{w}^k\bigr)\bigr], \tag{2.21*b}
\]
where
\[
\alpha_k = \gamma\alpha_k^*, \qquad \alpha_k^* = \frac{\bigl(w^k - \tilde{w}^k\bigr)^T\bigl[G\bigl(w^k - \tilde{w}^k\bigr) - \xi^k\bigr]}{\bigl\|G\bigl(w^k - \tilde{w}^k\bigr) - \xi^k\bigr\|^2}, \qquad \gamma \in (0, 2). \tag{2.21}
\]

In fact, each iteration of the proposed method consists of two main phases. Using the point of view that the problem is a system with a central authority and $N$ administrators, the first phase is accomplished by the $N$ administrators based on the instruction given by the authority; that is, the $i$-th sub-VI only involves the $i$-th administrator's activities. On the other hand, the second phase is implemented by the central authority to give the new instruction for the next iteration.

2.22

It seems that equality 2.22 is an implicit form since both sides of 2.22 contain xik . In fact, we can transform equality 2.22 to an explicit form. Using the property of the projection, we have  $     % xik  PXi xik − βi xik − xik βi −1 fi xik − ATi λk & ' 1   k fi xi − ATi λk .  PXi xik − βi

2.23

Consequently, using the above formula, we can compute xik quite easily. Remark 2.5. Combining 2.22 and 2.19, we then find that $   %    k − Q w  k − wk ξk . k G w w  k  PW w

2.24

If ξk  0, it yields an exact version. In this special case, it is clear that $  %    k − Q w  k − wk . k G w w  k  PW w

2.25

We find that this formula is quite similar to the iterates produced by the classic PPA 3, which employs $   %   wk 1  PW wk 1 − Q wk 1 S wk 1 − wk

2.26

10

Advances in Operations Research

as the new iterate; here, S is a positive symmetry definite matrix. For deeper insight, our method does not appear fit into any of the known PPA frameworks. It is virtually not equivalent to PPA even if G is positive definite. The reason why our method can not be viewed as PPA lies in the fact that G is asymmetry, moreover, may be not positive definite. This lack of symmetry makes it fail to introduce an inner product as S. Consequently, if one sets w  k as the new iterate, one may fail to obtain the convergence. Due to the asymmetric feature of G, we call our method asymmetric proximal decomposition method. Remark 2.6. Recalling that λk is obtained by 2.19, it is easy to get that ⎧ ⎛ ⎞⎫ N ⎬ T ⎨  λk − λk μ−1 ⎝ Aj xj − b⎠ ≥ 0, λ − λk ⎭ ⎩ j1



∀λ ∈ Rm

.

2.27

Combining 2.17 and 2.27, we have 

w − w k

T       Q w k G w  k − wk ξk ≥ 0,

∀w ∈ W.

2.28

k k Since w  k  x1k , . . . , xN , λ  ∈ W is generated by 2.17–2.20 from a given we have that wk − w  k  0 implies ξk  0 and Gw  k − wk  ξk  0. According to 2.28, we have

k , λk , x1k , . . . , xN



w − w k

T   Q w  k ≥ 0 ∀w ∈ W.

2.29

In other words, w  k is a solution of Problem 2.1-2.2 if xik  xik i  1, . . . , N and λk  λk .  k ≤ ε as stopping criterion in the proposed method. Hence, we use wk − w Remark 2.7. The update form 2.21∗ a is based on the fact that Gw  k − wk  ξk is a descent ∗ 2 direction of the unknown distance function 1/2 w − w at point wk . This property will be proved in Section 3.1. α∗k in 2.21 is the “optimal” step length, which will be detailed in Section 3.2. We can also use 2.21∗ b to update the new iterate. For fast convergence, the practical step length should be multiplied by a relaxed factor γ ∈ 1, 2.  k  0 if and only if wk − w  k D  0. In the case wk − w  k D /  0, Remark 2.8. Note that wk − w N T by choosing a suitable μ ∈ 0, j1  Aj Aj /2βj  η, 2.20 will be satisfied. We state this fact in the following lemma. Lemma 2.9. Let G and D be defined in 2.15 and 2.16, respectively. If μ  N

for all w  x1 , . . . , xN , λ ∈ R

j1

nj m

N

T j1  Aj Aj /2βj  η,

, one has wT Gw ≥ w 2D .

2.30

Advances in Operations Research

11

Proof. According to Definition 2.16, we have  ⎛  ⎞   N N Aj AT  N      j 2 wT Gw  βj xj  ⎝

η⎠ λ 2 xjT ATj λ 2βj j1 j1 j1  2 ⎛    ⎞    T 2 2 T T  N β   N ( N  A − λ  λ A A λ A    j j j  ⎟ j j  ⎜ xj 2 η λ 2 1    βj xj (  ⎝ ⎠ 2 2 j1  2βj j1 j1 βj    ≥ w 2D ,

2.31

and the assertion is obtained. Set w  wk − w  k in the above lemma, we get 

wk − w k

2 T      G wk − w  k ≥ wk − w  k . D

2.32

 T If one chooses μ  N j1  Aj Aj /2βj  η, then the Condition 2.20 is always satisfied; hence, N T j1  Aj Aj /2βj  η can be regarded as a safe upper bound for this condition. Note that we use an inequality in the proof of Lemma 2.9; it seems that there exists some relaxations.  T As a result, rather than fix μ  N j1  Aj Aj /2βj  η, let μ be a smaller value, and check if  T Condition 2.20 is satisfied. If not, increase μ by μ  min{ N j1  Aj Aj /2βj  η, 4μ} and try N again. This process enables us to reach a suitable μ ∈ 0,  j1 Aj ATj 1/2 to meet 2.20. Note that, in our proposed method, problems VI 2.17 produce xik in a parallel wise. In addition, instead of taking the solution of the subproblems, the new iterate in the proposed methods is updated by a simple manipulation, for example, 2.18∗ a-2.18∗ b.
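As a rough numerical illustration of this strategy, the sketch below computes the safe bound $\sum_j\|A_jA_j^T\|/(2\beta_j)+\eta$ (the spectral norm is assumed for $\|\cdot\|$) and raises a trial $\mu$ geometrically by $\mu:=\min\{\text{bound},4\mu\}$. The acceptance predicate is a hypothetical stand-in for Condition (2.20), which in the actual method depends on the current iterate; all concrete data here are illustrative.

```python
import numpy as np

def safe_mu_bound(As, betas, eta):
    # Safe upper bound sum_j ||A_j A_j^T|| / (2*beta_j) + eta from Lemma 2.9;
    # ||.|| is taken here as the spectral norm (an assumption).
    return sum(np.linalg.norm(A @ A.T, 2) / (2.0 * b)
               for A, b in zip(As, betas)) + eta

def adapt_mu(mu, accepts, bound):
    # Increase mu by mu := min(bound, 4*mu) until `accepts(mu)` holds;
    # `accepts` is a hypothetical stand-in for Condition (2.20), which is
    # guaranteed to hold once mu reaches the safe bound.
    while not accepts(mu) and mu < bound:
        mu = min(bound, 4.0 * mu)
    return mu

rng = np.random.default_rng(0)
As = [rng.uniform(-5.0, 5.0, size=(4, 3)) for _ in range(2)]
betas = [1.0, 2.0]
bound = safe_mu_bound(As, betas, eta=0.5)
mu = adapt_mu(0.1, lambda m: m >= 0.5 * bound, bound)
print(bound > 0.0, 0.5 * bound <= mu <= bound)
```

Since each failed check quadruples $\mu$, at most $O(\log(\text{bound}/\mu_0))$ trials are needed before the safe bound is reached.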

3. Convergence of AsPDM

In the proposed method, the first phase, accomplished by the local administrators, offers a descent direction of the unknown distance function, and the second phase, accomplished by the central authority, determines the "optimal" step length along this direction. This section provides further theoretical analysis.

3.1. The Descent Direction in the Proposed AsPDM

For any $w^*\in W^*$, $(w^k-w^*)$ is the gradient of the unknown distance function $\frac12\|w-w^*\|^2$ at the point $w^k\notin W^*$. A direction $d$ is called a descent direction of $\frac12\|w-w^*\|^2$ at the point $w^k$ if and only if the inner product $\langle w^k-w^*,d\rangle<0$. Let $\tilde w^k=(\tilde x_1^k,\dots,\tilde x_N^k,\tilde\lambda^k)$ be generated by


(2.17)–(2.20) from a given $w^k=(x_1^k,\dots,x_N^k,\lambda^k)$. The goal of this subsection is to elucidate that, for any $w^*\in W^*$,
$$\bigl(w^k-w^*\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}\ge(1-\nu)\bigl\|w^k-\tilde w^k\bigr\|_D^2. \tag{3.1}$$

It guarantees that $G(\tilde w^k-w^k)+\xi^k$ is a descent direction of $\frac12\|w-w^*\|^2$ at any point $w^k\notin W^*$. The above inequality plays an important role in the convergence analysis.

Lemma 3.1. Assume that $\tilde w^k=(\tilde x_1^k,\dots,\tilde x_N^k,\tilde\lambda^k)$ is generated by (2.17)–(2.20) from a given $w^k=(x_1^k,\dots,x_N^k,\lambda^k)$; then for any $w^*=(x_1^*,\dots,x_N^*,\lambda^*)\in W^*$ one has
$$\bigl(w^k-w^*\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}\ge(1-\nu)\bigl\|w^k-\tilde w^k\bigr\|_D^2. \tag{3.2}$$

Proof. Since $w^*\in W$, substituting $w=w^*$ in (2.28), we obtain
$$\bigl(w^*-\tilde w^k\bigr)^T\bigl\{Q(\tilde w^k)+G(\tilde w^k-w^k)+\xi^k\bigr\}\ge 0. \tag{3.3}$$

Using the monotonicity of $Q(w)$ and applying (2.1) with $w=\tilde w^k$, it is easy to get
$$\bigl(\tilde w^k-w^*\bigr)^TQ(\tilde w^k)\ge\bigl(\tilde w^k-w^*\bigr)^TQ(w^*)\ge 0. \tag{3.4}$$

Combining (3.3) and (3.4), we then find
$$\bigl(w^k-w^*\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}\ge\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}. \tag{3.5}$$

Since Criterion (2.18a) holds, we have
$$\bigl(w^k-\tilde w^k\bigr)^T\xi^k\le\nu\sum_{j=1}^N\frac{\beta_j}{2}\bigl\|x_j^k-\tilde x_j^k\bigr\|^2\le\nu\bigl\|w^k-\tilde w^k\bigr\|_D^2\le\nu\bigl(w^k-\tilde w^k\bigr)^TG\bigl(w^k-\tilde w^k\bigr). \tag{3.6}$$

The last inequality follows directly from Lemma 2.9. Consequently,
$$\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}\ge(1-\nu)\bigl(w^k-\tilde w^k\bigr)^TG\bigl(w^k-\tilde w^k\bigr). \tag{3.7}$$


Using the preceding inequality and (2.32) in (3.5) yields
$$\bigl(w^k-w^*\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}\ge(1-\nu)\bigl\|w^k-\tilde w^k\bigr\|_D^2, \tag{3.8}$$
completing the proof.

Now, we state the main property of $Q(\tilde w^k)$ in the lemma below.

Lemma 3.2. Let $\tilde w^k=(\tilde x_1^k,\dots,\tilde x_N^k,\tilde\lambda^k)$ be generated by (2.17)–(2.20) from a given $w^k=(x_1^k,\dots,x_N^k,\lambda^k)$. Then for any $w\in W$ one has



$$\bigl(w-\tilde w^k\bigr)^TQ(\tilde w^k)\ge\bigl(w-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}. \tag{3.9}$$

 k − Qw  k  Gw  k − wk  ξk  ∈ W, substituting w  Proof. Recalling that w  k  PW  w k k k k k   Gw  − w  ξ  in inequality 2.1, we have, immediately, w  − Qw 

w − w k

T    % $    w k − w  k − wk ξk ≥ 0. k − Q w k G w

3.10

By some manipulations, our assertion holds immediately.

3.2. The Step Size and the New Iterate

Since $G(\tilde w^k-w^k)+\xi^k$ is a descent direction of $\frac12\|w-w^*\|^2$ at the point $w^k$, the new iterate is determined along this direction by choosing a suitable step length. In order to explain why we take the "optimal" step $\alpha_k^*$ as defined in (2.21), we let
$$w_1^{k+1}(\alpha)=w^k-\alpha\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\},\qquad w_2^{k+1}(\alpha)=P_W\bigl[w^k-\alpha Q(\tilde w^k)\bigr] \tag{3.11}$$
be the step-size-dependent new iterates, and let
$$\Theta_{k,i}(\alpha)=\bigl\|w^k-w^*\bigr\|^2-\bigl\|w_i^{k+1}(\alpha)-w^*\bigr\|^2,\quad i=1,2, \tag{3.12}$$
be the profit function of the $k$th iteration. Because $\Theta_{k,i}(\alpha)$ includes the unknown vector $w^*$, it cannot be maximized directly. The following lemma offers a lower bound of $\Theta_{k,i}(\alpha)$ which is a quadratic function of $\alpha$.

Lemma 3.3. Let $\tilde w^k=(\tilde x_1^k,\dots,\tilde x_N^k,\tilde\lambda^k)$ be generated by (2.17)–(2.20) from a given $w^k=(x_1^k,\dots,x_N^k,\lambda^k)$. Then one has
$$\Theta_{k,i}(\alpha)\ge q_k(\alpha),\quad\forall\alpha>0,\ i=1,2, \tag{3.13}$$


where
$$q_k(\alpha)=2\alpha\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}-\alpha^2\bigl\|G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\|^2. \tag{3.14}$$

Proof. It follows from definition (3.12) and inequality (3.5) that
$$\begin{aligned}
\Theta_{k,1}(\alpha)&=\bigl\|w^k-w^*\bigr\|^2-\bigl\|w^k-w^*-\alpha\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}\bigr\|^2\\
&=2\alpha\bigl(w^k-w^*\bigr)^T\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}-\alpha^2\bigl\|G(w^k-\tilde w^k)-\xi^k\bigr\|^2\\
&\overset{(3.5)}{\ge}q_k(\alpha).
\end{aligned}\tag{3.15}$$

Let us deal with $\Theta_{k,2}(\alpha)$, which is more complicated:
$$\begin{aligned}
\Theta_{k,2}(\alpha)&\overset{(2.8)}{\ge}\bigl\|w^k-w^*\bigr\|^2-\bigl\|w^k-w^*-\alpha Q(\tilde w^k)\bigr\|^2+\bigl\|w^k-w^{k+1}-\alpha Q(\tilde w^k)\bigr\|^2\\
&=\bigl\|w^k-w^{k+1}\bigr\|^2+2\alpha\bigl(w^{k+1}-w^*\bigr)^TQ(\tilde w^k)\\
&\ge\bigl\|w^k-w^{k+1}\bigr\|^2+2\alpha\bigl(w^{k+1}-\tilde w^k\bigr)^TQ(\tilde w^k)\\
&\overset{(3.9)}{\ge}\bigl\|w^k-w^{k+1}\bigr\|^2+2\alpha\bigl(w^{k+1}-\tilde w^k\bigr)^T\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}\\
&=\bigl\|w^k-w^{k+1}-\alpha\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}\bigr\|^2+2\alpha\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}-\alpha^2\bigl\|G(w^k-\tilde w^k)-\xi^k\bigr\|^2\\
&\ge 2\alpha\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}-\alpha^2\bigl\|G(w^k-\tilde w^k)-\xi^k\bigr\|^2.
\end{aligned}\tag{3.16}$$
Since $q_k(\alpha)$ is a quadratic function of $\alpha$, it attains its maximum at
$$\alpha_k^*=\frac{\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}}{\bigl\|G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\|^2}; \tag{3.17}$$
this is just the step defined in (2.21). In practical computation, taking a relaxation factor $\gamma$ is wise for fast convergence. Note that for any $\alpha_k=\gamma\alpha_k^*$, it follows from (3.13), (3.14), and (2.21) that
$$\Theta_{k,i}\bigl(\gamma\alpha_k^*\bigr)\ge q_k\bigl(\gamma\alpha_k^*\bigr)=2\gamma\alpha_k^*\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}-\gamma^2\alpha_k^*\Bigl[\alpha_k^*\bigl\|G(w^k-\tilde w^k)-\xi^k\bigr\|^2\Bigr]=\bigl(2\gamma-\gamma^2\bigr)\alpha_k^*\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G(w^k-\tilde w^k)-\xi^k\bigr\}. \tag{3.18}$$
In order to guarantee that the right-hand side of (3.18) is positive, we take $\gamma\in(0,2)$.
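The closed form in (3.18) can be sanity-checked numerically. In the sketch below, `d` stands for the direction $G(w^k-\tilde w^k)-\xi^k$ and `c` for $(w^k-\tilde w^k)^Td$; both vectors are illustrative random data, not output of the actual AsPDM subproblems.

```python
import numpy as np

rng = np.random.default_rng(1)
w_diff = rng.standard_normal(6)               # stands for w^k - \tilde w^k
d = w_diff + 0.1 * rng.standard_normal(6)     # stands for G(w^k - \tilde w^k) - xi^k

c = float(w_diff @ d)
alpha_star = c / float(d @ d)                 # the "optimal" step (3.17)

def q(alpha):
    # quadratic lower bound q_k(alpha) of the profit function, (3.14)
    return 2.0 * alpha * c - alpha**2 * float(d @ d)

gamma = 1.8                                   # relaxation factor in (0, 2)
lhs = q(gamma * alpha_star)
rhs = gamma * (2.0 - gamma) * alpha_star * c  # closed form from (3.18)
print(np.isclose(lhs, rhs), q(alpha_star) >= lhs)
```

The check confirms both that $q_k(\gamma\alpha_k^*)=\gamma(2-\gamma)\alpha_k^*c$ and that $\alpha_k^*$ maximizes $q_k$.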


In fact, $\alpha_k^*$ is bounded below by a positive amount, which is the subject of the following lemma.

Lemma 3.4. Assume that $\tilde w^k=(\tilde x_1^k,\dots,\tilde x_N^k,\tilde\lambda^k)$ is generated by (2.17)–(2.20) from a given $w^k=(x_1^k,\dots,x_N^k,\lambda^k)$; then one has
$$\alpha_k^*\ge\frac{1-\nu}{\bigl(\|GD^{-1/2}\|+1\bigr)^2}. \tag{3.19}$$

Proof. Using the fact that the matrix $D$ is symmetric positive definite, we have
$$\bigl\|G\bigl(w^k-\tilde w^k\bigr)\bigr\|=\bigl\|GD^{-1/2}D^{1/2}\bigl(w^k-\tilde w^k\bigr)\bigr\|\le\bigl\|GD^{-1/2}\bigr\|\,\bigl\|D^{1/2}\bigl(w^k-\tilde w^k\bigr)\bigr\|. \tag{3.20}$$
Moreover, note that Criterion (2.18b) implies
$$\bigl\|\xi^k\bigr\|^2\le\sum_{j=1}^N\frac{\beta_j}{2}\bigl\|x_j^k-\tilde x_j^k\bigr\|^2+\eta\bigl\|\lambda^k-\tilde\lambda^k\bigr\|^2=\bigl\|w^k-\tilde w^k\bigr\|_D^2. \tag{3.21}$$
Hence, using $\|w^k-\tilde w^k\|_D=\|D^{1/2}(w^k-\tilde w^k)\|$ in the above inequality, we get
$$\bigl\|\xi^k\bigr\|\le\bigl\|D^{1/2}\bigl(w^k-\tilde w^k\bigr)\bigr\|. \tag{3.22}$$
Combining (3.20) and (3.22), we have
$$\bigl\|G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\|\le\bigl(\bigl\|GD^{-1/2}\bigr\|+1\bigr)\bigl\|D^{1/2}\bigl(w^k-\tilde w^k\bigr)\bigr\|. \tag{3.23}$$

Consequently, applying the above inequality, (3.7), and (2.32) to $\alpha_k^*$ yields
$$\alpha_k^*=\frac{\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}}{\bigl\|G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\|^2}\ge\frac{(1-\nu)\bigl\|w^k-\tilde w^k\bigr\|_D^2}{\bigl(\|GD^{-1/2}\|+1\bigr)^2\bigl\|D^{1/2}\bigl(w^k-\tilde w^k\bigr)\bigr\|^2}\ge\frac{1-\nu}{\bigl(\|GD^{-1/2}\|+1\bigr)^2}, \tag{3.24}$$
and thus the assertion is proved.

Now, we are in the position to prove the main convergence theorem of this paper.


Theorem 3.5. For any $w^*=(x^*,\lambda^*)\in W^*$, the sequence $\{w^k=(x^k,\lambda^k)\}$ generated by the proposed method satisfies
$$\bigl\|w^{k+1}-w^*\bigr\|^2\le\bigl\|w^k-w^*\bigr\|^2-\frac{\gamma(2-\gamma)(1-\nu)^2}{\bigl(\|GD^{-1/2}\|+1\bigr)^2}\bigl\|w^k-\tilde w^k\bigr\|_D^2. \tag{3.25}$$
Thus we have
$$\lim_{k\to\infty}\bigl\|w^k-\tilde w^k\bigr\|^2=0, \tag{3.26}$$
and the iteration of the proposed method terminates in finitely many loops.

Proof. First, it follows from (3.12) and (3.18) that
$$\bigl\|w^{k+1}-w^*\bigr\|^2\le\bigl\|w^k-w^*\bigr\|^2-\gamma(2-\gamma)\alpha_k^*\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}. \tag{3.27}$$
Using (2.32), (3.7), and (3.19), we have
$$\alpha_k^*\bigl(w^k-\tilde w^k\bigr)^T\bigl\{G\bigl(w^k-\tilde w^k\bigr)-\xi^k\bigr\}\ge\frac{(1-\nu)^2}{\bigl(\|GD^{-1/2}\|+1\bigr)^2}\bigl\|w^k-\tilde w^k\bigr\|_D^2. \tag{3.28}$$
Substituting (3.28) into (3.27), Assertion (3.25) is proved. Therefore, we have
$$\frac{\gamma(2-\gamma)(1-\nu)^2}{\bigl(\|GD^{-1/2}\|+1\bigr)^2}\sum_{k=0}^{\infty}\bigl\|w^k-\tilde w^k\bigr\|_D^2\le\bigl\|w^0-w^*\bigr\|^2, \tag{3.29}$$
and Assertion (3.26) follows immediately. Since we use
$$\max\Bigl\{\bigl\|x_1^k-\tilde x_1^k\bigr\|_\infty,\dots,\bigl\|x_N^k-\tilde x_N^k\bigr\|_\infty,\bigl\|\lambda^k-\tilde\lambda^k\bigr\|_\infty\Bigr\}<\epsilon \tag{3.30}$$
as the stopping criterion, it follows from (3.26) that the iteration terminates in finitely many loops for any given $\epsilon>0$.
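A minimal sketch of the stopping test (3.30) follows, assuming the blocks of $w^k$ and $\tilde w^k$ are stored as NumPy arrays; the function name, block shapes, and sample values are illustrative.

```python
import numpy as np

def should_stop(x_blocks, x_tilde_blocks, lam, lam_tilde, eps):
    # Max-norm gap between the current point and the predictor, as in (3.30).
    gaps = [np.max(np.abs(x - xt)) for x, xt in zip(x_blocks, x_tilde_blocks)]
    gaps.append(np.max(np.abs(lam - lam_tilde)))
    return max(gaps) < eps

x = [np.array([1.0, 2.0]), np.array([3.0])]
xt = [np.array([1.0005, 2.0]), np.array([3.0002])]
lam, lam_t = np.array([0.5]), np.array([0.5004])
print(should_stop(x, xt, lam, lam_t, 1e-3), should_stop(x, xt, lam, lam_t, 1e-4))
```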

4. Numerical Examples

This section describes experiments testifying to the good performance of the proposed method. The algorithms were written in Matlab version 7.0 and run on a notebook with an Intel Core 2 Duo 2.01 GHz CPU and 0.98 GB of RAM.


To evaluate the behavior of the proposed method, we construct examples of convex separable quadratic programming (CSQP) with linearly coupling constraints. The convex separable quadratic program was generated as follows:
$$\min\Bigl\{\sum_{i=1}^N\frac12x_i^TH_ix_i-c_i^Tx_i\ \Big|\ x\in\Omega\Bigr\},\qquad\Omega=\Bigl\{x\ge0\ \Big|\ \sum_{i=1}^NA_ix_i\le b\Bigr\}, \tag{4.1}$$

where $H_i\in\mathbb{R}^{n_i\times n_i}$ is a symmetric positive definite matrix, $A_i\in\mathbb{R}^{m\times n_i}$, $b\in\mathbb{R}^m$, and $c_i\in\mathbb{R}^{n_i}$. We construct the matrices $A_i$ and $H_i$ in the test examples as follows. The elements of $A_i$ are randomly generated in $[-5,5]$, and the matrices $H_i$ are defined by setting
$$H_i:=V_i\Sigma_iV_i^T, \tag{4.2}$$
where
$$V_i=I_{n_i}-\frac{2v_iv_i^T}{v_i^Tv_i},\qquad\Sigma_i=\operatorname{diag}(\sigma_{k,i}),\qquad\sigma_{k,i}=\cos\Bigl(\frac{k\pi}{n_i+1}\Bigr)+1. \tag{4.3}$$

In this way, $H_i$ is positive definite and has prescribed eigenvalues in $(0,2)$. If $x^*$ is the solution of Problem (4.1), then according to the KKT principle there is a $y^*\in\mathbb{R}^m$, $y^*\ge0$, such that
$$H_ix_i^*+A_i^Ty^*-c_i\ge0,\qquad (x_i^*)^T\bigl(H_ix_i^*+A_i^Ty^*-c_i\bigr)=0,\qquad x_i^*\ge0,$$
$$\sum_{i=1}^NA_ix_i^*\le b,\qquad (y^*)^T\Bigl(\sum_{i=1}^NA_ix_i^*-b\Bigr)=0,\qquad y^*\ge0. \tag{4.4}$$
Let $\xi_i\in\mathbb{R}^{n_i}$ and $z\in\mathbb{R}^m$ be random vectors whose elements lie in $[-1,1]$. We set
$$x_i^*=\max(\xi_i,0)*\tau_1,\qquad\xi_i^*=\max(-\xi_i,0)*\tau_2,\qquad y^*=\max(z,0)*\tau_3,\qquad z^*=\max(-z,0)*\tau_4, \tag{4.5}$$
where $\tau_i$, $i=1,2,3,4$, are positive parameters. By setting
$$c_i=H_ix_i^*+A_i^Ty^*-\xi_i^*,\qquad b=\sum_{i=1}^NA_ix_i^*+z^*, \tag{4.6}$$
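The construction (4.2)–(4.6) can be sketched as follows for a single block ($N=1$); the random seed, sizes, and the eigenvalue recipe $\sigma_k=\cos(k\pi/(n+1))+1$ follow the description above as we read it, so treat the exact formulas as assumptions. The KKT residuals (4.4) vanish by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3
A = rng.uniform(-5.0, 5.0, size=(m, n))

v = rng.standard_normal(n)
V = np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)   # Householder reflector
sigma = np.cos(np.arange(1, n + 1) * np.pi / (n + 1)) + 1.0
H = V @ np.diag(sigma) @ V.T                     # SPD with eigenvalues in (0, 2)

xi, z = rng.uniform(-1.0, 1.0, n), rng.uniform(-1.0, 1.0, m)
tau1, tau2, tau3, tau4 = 0.5, 10.0, 0.5, 10.0
x_star = np.maximum(xi, 0) * tau1
xi_star = np.maximum(-xi, 0) * tau2
y_star = np.maximum(z, 0) * tau3
z_star = np.maximum(-z, 0) * tau4

c = H @ x_star + A.T @ y_star - xi_star          # (4.6), one block
b = A @ x_star + z_star

grad = H @ x_star + A.T @ y_star - c             # equals xi_star >= 0
print(np.all(np.linalg.eigvalsh(H) > 0),
      np.all(grad >= -1e-9),
      abs(float(x_star @ grad)) < 1e-9,
      abs(float(y_star @ (A @ x_star - b))) < 1e-9)
```

Because $\max(\xi,0)$ and $\max(-\xi,0)$ have disjoint supports, complementarity in (4.4) holds exactly, so the generated problem has the known solution $x^*$ with multipliers $y^*$.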

we construct a test problem of the form (4.1) with the known solution point $x^*$ and the optimal Lagrangian multipliers $y^*$. We tested such problems with $\tau_1=0.5$, $\tau_2=10$, $\tau_3=0.5$, $\tau_4=10$. Here, two example sets were considered. The problems in the first set have three separable operators ($N=3$), and those in the second have two ($N=2$). In the first experiment, we employ AsPDM with the update $w_2^{k+1}$ to solve CSQP with three separable operators. The reason why we choose $w_2^{k+1}$ here is that it usually performs better than $w_1^{k+1}$. The stopping criterion was chosen as $\|e(w)\|_\infty<10^{-3}$; the parameters were set as $\nu=0.2$, $\eta=0.5$, and $\gamma=1.8$. Table 1 reports the number of iterations (denoted as Its.) and the total number of function evaluations (denoted as Numf) for different problem sizes. Here, $\mathrm{Numf}=\sum_{i=1}^3\mathrm{Numf}_i$.

Table 1: AsPDM for CSQP with 3 separable operators.

m     n1    n2    n3    Its.   Numf
100   50    50    50    674    4066
100   100   100   100   1581   9508
150   150   150   150   1775   10672
200   200   200   200   2108   12670

As observed from Table 1, the solutions are obtained in a moderate number of iterations; thus the proposed method is effectively applicable. In addition, the number of evaluations of $f_i$ per iteration is approximately equal to 2. AsPDM is well suited to solving separable problems.

Next, we compared the computational efficiency of AsPDM against the method in [7] (denoted as PCM), regarded as a highly efficient PPA-based method well suited to solving VIs. Iterations were terminated when the criterion $\|e(w)\|_\infty<10^{-5}$ was met. Table 2 reports the iterations and the total number of function evaluations for both methods.

Table 2: Its. and function evaluations for different problem sizes.

m     n1    n2    AsPDM Its.   AsPDM Numf   PCM Its.   PCM Numf
100   100   100   1089         4371         1343       5393
200   200   200   1265         5075         1649       6621
300   300   300   1655         6635         1765       7087
400   400   400   1573         6307         1834       7365
500   500   500   2299         9211         2218       8901
600   600   600   2267         9083         2289       9187
100   80    80    568          2287         616        2485
100   50    150   1192         4787         1231       4945
200   150   100   567          2283         572        2311
200   100   200   744          2991         846        3407

We observe that both methods succeed in finding a solution. Concerning computational efficiency, AsPDM is competitive with, and in most cases clearly faster than, PCM; moreover, its function evaluations are also fewer, except in the case of $m=500$, $n_1=500$, $n_2=500$. In some cases, AsPDM reduces the computational cost by about 20% compared with PCM. For $m=100$, $n_1=100$, $n_2=100$, we plot the error versus iteration number for both AsPDM and PCM in Figure 1. We found that both methods converge quickly during the first hundred iterations but slow down as the exact solution is approached. AsPDM is faster than PCM. In addition to being fast, AsPDM can solve the problem separately; that is its most significant advantage over other methods. Hence, AsPDM is well suited to solving real-life separable problems.

5. Conclusions

We have proposed AsPDM for solving separable problems. It decomposes the original problem into independent low-dimensional subproblems and solves those subproblems in parallel. Only function values are required in the process, and the total computational


Figure 1: Error bound versus iteration number for AsPDM and PCM with m = 100, n1 = 100, n2 = 100.

cost is very small. AsPDM is easy to implement and does not appear to require application-specific tuning. The numerical results also evidence the efficiency of our method. Thus, the new method is applicable and recommended in practice.

Acknowledgment

The author was supported by NSFC Grant 70901018.

References

[1] R. T. Rockafellar and R. J.-B. Wets, "Generalized linear-quadratic problems of deterministic and stochastic optimal control in discrete time," SIAM Journal on Control and Optimization, vol. 28, no. 4, pp. 810–822, 1990.
[2] B. Martinet, "Régularisation d'inéquations variationnelles par approximations successives," Revue Française d'Informatique et de Recherche Opérationnelle, vol. 4, pp. 154–158, 1970.
[3] R. T. Rockafellar, "Monotone operators and the proximal point algorithm," SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.
[4] R. T. Rockafellar, "Augmented Lagrangians and applications of the proximal point algorithm in convex programming," Mathematics of Operations Research, vol. 1, no. 2, pp. 97–116, 1976.
[5] A. Auslender, M. Teboulle, and S. Ben-Tiba, "A logarithmic-quadratic proximal method for variational inequalities," Computational Optimization and Applications, vol. 12, no. 1–3, pp. 31–40, 1999.
[6] Y. Censor and S. A. Zenios, "Proximal minimization algorithm with D-functions," Journal of Optimization Theory and Applications, vol. 73, no. 3, pp. 451–464, 1992.
[7] B. He, X. Yuan, and J. J. Z. Zhang, "Comparison of two kinds of prediction-correction methods for monotone variational inequalities," Computational Optimization and Applications, vol. 27, no. 3, pp. 247–267, 2004.
[8] A. Nemirovsky, "Prox-method with rate of convergence O(1/k) for smooth variational inequalities and saddle point problems," draft, October 2003.
[9] M. Teboulle, "Convergence of proximal-like algorithms," SIAM Journal on Optimization, vol. 7, no. 4, pp. 1069–1083, 1997.


[10] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Prentice Hall, Englewood Cliffs, NJ, USA, 1989.
[11] G. Chen and M. Teboulle, "A proximal-based decomposition method for convex minimization problems," Mathematical Programming, vol. 64, no. 1, Ser. A, pp. 81–101, 1994.
[12] B. He, L.-Z. Liao, and S. Wang, "Self-adaptive operator splitting methods for monotone variational inequalities," Numerische Mathematik, vol. 94, no. 4, pp. 715–737, 2003.
[13] P. Mahey, S. Oualibouch, and D. T. Pham, "Proximal decomposition on the graph of a maximal monotone operator," SIAM Journal on Optimization, vol. 5, no. 2, pp. 454–466, 1995.
[14] P. Tseng, "Applications of a splitting algorithm to decomposition in convex programming and variational inequalities," SIAM Journal on Control and Optimization, vol. 29, no. 1, pp. 119–138, 1991.
[15] R. Glowinski and P. Le Tallec, Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, vol. 9 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1989.
[16] B.-S. He, H. Yang, and C.-S. Zhang, "A modified augmented Lagrangian method for a class of monotone variational inequalities," European Journal of Operational Research, vol. 159, no. 1, pp. 35–51, 2004.
[17] M. Fukushima, "Application of the alternating direction method of multipliers to separable convex programming problems," Computational Optimization and Applications, vol. 1, no. 1, pp. 93–111, 1992.
[18] J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.
[19] A. Hamdi and P. Mahey, "Separable diagonalized multiplier method for decomposing nonlinear programs," Computational & Applied Mathematics, vol. 19, no. 1, pp. 1–29, 2000.
[20] A. Hamdi, P. Mahey, and J. P. Dussault, "A new decomposition method in nonconvex programming via a separable augmented Lagrangian," in Recent Advances in Optimization, vol. 452 of Lecture Notes in Economics and Mathematical Systems, pp. 90–104, Springer, Berlin, Germany, 1997.
[21] P. Tseng, "Alternating projection-proximal methods for convex programming and variational inequalities," SIAM Journal on Optimization, vol. 7, no. 4, pp. 951–965, 1997.
[22] B. S. He, X. L. Fu, and Z. K. Jiang, "Proximal-point algorithm using a linear proximal term," Journal of Optimization Theory and Applications, vol. 141, no. 2, pp. 299–319, 2009.
[23] E. H. Zarantonello, "Projections on convex sets in Hilbert space and spectral theory," in Contributions to Nonlinear Functional Analysis, E. H. Zarantonello, Ed., Academic Press, New York, NY, USA, 1971.

Hindawi Publishing Corporation Advances in Operations Research Volume 2012, Article ID 648070, 15 pages doi:10.1155/2012/648070

Research Article
Variational-Like Inequalities and Equilibrium Problems with Generalized Monotonicity in Banach Spaces
N. K. Mahato and C. Nahak
Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
Correspondence should be addressed to C. Nahak, [email protected]
Received 22 September 2011; Accepted 16 November 2011
Academic Editor: Abdellah Bnouhachem
Copyright © 2012 N. K. Mahato and C. Nahak. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We introduce the notion of relaxed (ρ-θ)-η-invariant pseudomonotone mappings, which is weaker than invariant pseudomonotonicity. Using the KKM technique, we establish the existence of solutions for variational-like inequality problems with relaxed (ρ-θ)-η-invariant pseudomonotone mappings in reflexive Banach spaces. We also introduce the concept of (ρ-θ)-pseudomonotonicity for bifunctions, and we consider some examples to show that (ρ-θ)-pseudomonotonicity generalizes both monotonicity and strong pseudomonotonicity. The existence of solutions for equilibrium problems with (ρ-θ)-pseudomonotone mappings in reflexive Banach spaces is demonstrated by using the KKM technique.

1. Introduction

Let $K$ be a nonempty subset of a real reflexive Banach space $X$, and let $X^*$ be the dual space of $X$. Consider the operator $T:K\to X^*$ and the bifunction $\eta:K\times K\to X$. Then the variational-like inequality problem (in short, VLIP) is to find $x\in K$ such that
$$\langle Tx,\eta(y,x)\rangle\ge0,\quad\forall y\in K, \tag{1.1}$$

where $\langle\cdot,\cdot\rangle$ denotes the pairing between $X$ and $X^*$. If we take $\eta(x,y)=x-y$, then (1.1) becomes: find $x\in K$ such that
$$\langle Tx,y-x\rangle\ge0,\quad\forall y\in K, \tag{1.2}$$


which is the classical variational inequality problem (VIP). These problems have been studied in both finite- and infinite-dimensional spaces by many authors [1–3]. The VIP has numerous applications in optimization, nonlinear analysis, and the engineering sciences. In the study of VLIPs and VIPs, monotonicity is the most common assumption on the operator $T$. Recently many authors have established the existence of solutions for VIPs and VLIPs under generalized monotonicity assumptions, such as quasimonotonicity, relaxed monotonicity, dense pseudomonotonicity, relaxed η-α-monotonicity, and relaxed η-α-pseudomonotonicity (see [1, 3–6] and the references therein). In 2008, Behera et al. [7] defined various concepts of generalized (ρ-θ)-η-invariant monotonicity which are proper generalizations of the generalized invariant monotonicity introduced by Yang et al. [8]. Chen [9] defined semimonotonicity and studied semimonotone scalar variational inequality problems in Banach spaces. Fang and Huang [3] obtained the existence of solutions for VLIPs using relaxed η-α-monotone mappings in reflexive Banach spaces. In [1], Bai et al. extended the results of [3] to relaxed η-α-pseudomonotone mappings and proved the existence of solutions of variational-like inequality problems in reflexive Banach spaces. Bai et al. [10] studied variational inequality problems in the settings of densely relaxed μ-pseudomonotone operators and relaxed μ-quasimonotone operators, respectively. Inspired and motivated by [1, 3, 10], we introduce the concept of relaxed (ρ-θ)-η-invariant pseudomonotone mappings. Using the KKM technique, we establish the existence of solutions for variational-like inequality problems with relaxed (ρ-θ)-η-invariant pseudomonotone mappings. We also introduce the notion of (ρ-θ)-pseudomonotonicity for bifunctions and study some examples showing that (ρ-θ)-pseudomonotonicity is a proper generalization of monotonicity and strong pseudomonotonicity. The existence of solutions of equilibrium problems with (ρ-θ)-pseudomonotone mappings in reflexive Banach spaces is demonstrated by using the KKM technique.

2. Preliminaries

Let $X$ be a real reflexive Banach space, let $K$ be a nonempty subset of $X$, and let $X^*$ be the space of all continuous linear functionals on $X$. Consider the functions $T:K\to X^*$, $\eta:K\times K\to X$, $\theta:K\times K\to\mathbb{R}$, and $\rho\in\mathbb{R}$.

Definition 2.1. The operator $T:K\to X^*$ is said to be relaxed (ρ-θ)-η-invariant pseudomonotone with respect to $\eta$ and $\theta$ if, for any pair of distinct points $x,y\in K$, one has
$$\langle Tx,\eta(y,x)\rangle\ge0\ \Longrightarrow\ \langle Ty,\eta(x,y)\rangle+\rho|\theta(x,y)|^2\le0. \tag{2.1}$$

Remark 2.2. (i) If we take $\rho=0$, then (2.1) becomes $\langle Tx,\eta(y,x)\rangle\ge0\Rightarrow\langle Ty,\eta(x,y)\rangle\le0$ for all $x,y\in K$; here $T$ is said to be invariant pseudomonotone; see [8]. (ii) If we take $\rho=0$ and $\eta(x,y)=x-y$, then (2.1) reduces to $\langle Tx,y-x\rangle\ge0\Rightarrow\langle Ty,x-y\rangle\le0$ for all $x,y\in K$, and $T$ is said to be a pseudomonotone map. (iii) If $\theta(x,y)=\|x-y\|$, $\eta(x,y)=x-y$, and $\rho<0$, say $\rho=-\mu^2$ with $\mu\in\mathbb{R}$, then (2.1) gives $\langle Tx,y-x\rangle\ge0\Rightarrow\langle Ty,y-x\rangle\ge-\mu^2\|x-y\|^2$, and $T$ is called a relaxed μ-pseudomonotone mapping [10]. It is obvious that every invariant pseudomonotone mapping is relaxed (ρ-θ)-η-invariant pseudomonotone. However, the converse is not true in general, which is illustrated by the following counterexample.


Example 2.3. Let $K=[0,\pi/2]$ and let $T:[0,\pi/2]\to\mathbb{R}$ be defined by
$$Tx=\sin x-1. \tag{2.2}$$
Let the functions $\eta$ and $\theta$ be defined by
$$\eta(x,y)=\cos x-\cos y,\qquad\theta(y,x)=\begin{cases}\sqrt{\cos x-\cos y},&x\le y;\\1,&y<x.\end{cases} \tag{2.3}$$
Now, $\langle Tx,\eta(y,x)\rangle=(\sin x-1)(\cos y-\cos x)\ge0$ when $x\le y$ or $x=\pi/2$. Take $\rho=-3$; then, for $x\le y$,
$$\langle Ty,\eta(x,y)\rangle-3|\theta(y,x)|^2=(\sin y-1)(\cos x-\cos y)-3(\cos x-\cos y)=(\cos x-\cos y)(\sin y-4)\le0. \tag{2.4}$$
Therefore $T$ is a relaxed (ρ-θ)-η-invariant pseudomonotone mapping with respect to $\eta$ and $\theta$. But $T$ is not invariant pseudomonotone with respect to the same $\eta$. In fact, take $x=\pi/2$ and $y=0$. Then we have
$$\langle Tx,\eta(y,x)\rangle=0. \tag{2.5}$$
However,
$$\langle Ty,\eta(x,y)\rangle=(-1)\Bigl(\cos\frac{\pi}{2}-\cos0\Bigr)=1>0. \tag{2.6}$$

Definition 2.4. The operator $T:K\to X^*$ is said to be relaxed (ρ-θ)-η-invariant quasimonotone with respect to $\eta$ and $\theta$ if, for any pair of distinct points $x,y\in K$, one has
$$\langle Tx,\eta(y,x)\rangle>0\ \Longrightarrow\ \langle Ty,\eta(x,y)\rangle+\rho|\theta(x,y)|^2\le0. \tag{2.7}$$
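The computations in Example 2.3 can be sanity-checked numerically. The sketch below verifies (2.5)-(2.6) at $x=\pi/2$, $y=0$, and samples the relaxed inequality with $\rho=-3$ on an illustrative grid of $K=[0,\pi/2]$ (here $T$ is scalar, so the pairing is an ordinary product).

```python
import math
import itertools

T = lambda x: math.sin(x) - 1.0
eta = lambda x, y: math.cos(x) - math.cos(y)
theta2 = lambda y, x: (math.cos(x) - math.cos(y)) if x <= y else 1.0  # |theta(y,x)|^2

# Invariant pseudomonotonicity fails at x = pi/2, y = 0:
x, y = math.pi / 2, 0.0
lhs = T(x) * eta(y, x)          # <Tx, eta(y,x)> = 0
rhs = T(y) * eta(x, y)          # <Ty, eta(x,y)> = 1 > 0
print(abs(lhs) < 1e-12, rhs > 0)

# The relaxed inequality with rho = -3 holds on a sample grid:
ok = True
pts = [k * math.pi / 40 for k in range(21)]
for a, b in itertools.product(pts, pts):
    if a != b and T(a) * eta(b, a) >= 0:
        ok = ok and (T(b) * eta(a, b) - 3.0 * theta2(b, a) <= 1e-12)
print(ok)
```

Of course, a finite grid is only an illustration, not a proof; the proof is the computation (2.4) above.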

Next, we will show that relaxed (ρ-θ)-η-invariant quasimonotonicity and relaxed (ρ-θ)-η-invariant pseudomonotonicity coincide under certain conditions. For this we need the following definition of η-hemicontinuity.


Definition 2.5 (see [3]). Let $T:K\to X^*$ and $\eta:K\times K\to X$. $T$ is said to be η-hemicontinuous if, for any fixed $x,y\in K$, the mapping $f:[0,1]\to\mathbb{R}$ defined by $f(t)=\langle T(x+t(y-x)),\eta(y,x)\rangle$ is continuous at $0^+$.

Lemma 2.6. Let $T$ be η-hemicontinuous and relaxed (ρ-θ)-η-invariant quasimonotone on $K$. Assume that the mapping $x\mapsto\langle Ty,\eta(x,y)\rangle$ is concave and that $\theta:K\times K\to\mathbb{R}$ is hemicontinuous in the second argument. Then for every $x,y\in K$ with $\langle Ty,\eta(x,y)\rangle\ge0$ one has either $\langle Tx,\eta(y,x)\rangle+\rho|\theta(y,x)|^2\le0$ or $\langle Ty,\eta(z,y)\rangle\le0$ for all $z\in K$.

Proof. Suppose there exists some $z\in K$ such that $\langle Ty,\eta(z,y)\rangle>0$. Then we have to prove that $\langle Tx,\eta(y,x)\rangle+\rho|\theta(y,x)|^2\le0$. Let $x_t=tz+(1-t)x$, $0<t\le1$. Then
$$\langle Ty,\eta(x_t,y)\rangle\ge t\langle Ty,\eta(z,y)\rangle+(1-t)\langle Ty,\eta(x,y)\rangle>0. \tag{2.8}$$

Since $T$ is relaxed (ρ-θ)-η-invariant quasimonotone on $K$, we have
$$\langle Tx_t,\eta(y,x_t)\rangle+\rho|\theta(y,x_t)|^2\le0. \tag{2.9}$$

Since $T$ is η-hemicontinuous and $\theta$ is hemicontinuous in the second argument, letting $t\to0^+$ we have
$$\langle Tx,\eta(y,x)\rangle+\rho|\theta(y,x)|^2\le0. \tag{2.10}$$

Denote
$$K^\perp=\bigl\{z\in X^*:\langle z,\eta(x,y)\rangle=0,\ \forall x,y\in K\bigr\},\qquad T(K)=\{Tx:x\in K\}. \tag{2.11}$$

Definition 2.7. A point $x_0\in K$ is said to be an η-positive point of $T:K\to X^*$ on $K$ if, for all $x\in K$, either $Tx\in K^\perp$ or there exists $y\in K$ such that $\langle Tx,\eta(y,x_0)\rangle>0$. Let $K_T$ denote the set of all η-positive points of $T$ on $K$.

Now we give a characterization of relaxed (ρ-θ)-η-invariant pseudomonotonicity.

Lemma 2.8. Let $T$ be η-hemicontinuous and relaxed (ρ-θ)-η-invariant quasimonotone on $K$ with $T(K)\cap K^\perp=\emptyset$. Assume that the mapping $x\mapsto\langle Ty,\eta(x,y)\rangle$ is concave and $\theta$ is hemicontinuous. Then $T$ is relaxed (ρ-θ)-η-invariant pseudomonotone on $K_T$.

Proof. Let $x,y\in K_T$ with $\langle Tx,\eta(y,x)\rangle\ge0$. Therefore, by the previous lemma, we have either
$$\langle Ty,\eta(x,y)\rangle+\rho|\theta(x,y)|^2\le0\quad\text{or}\quad\langle Tx,\eta(z,x)\rangle\le0,\ \forall z\in K. \tag{2.12}$$


Now we will show that the second inequality in (2.12) is impossible. In fact, since $x\in K_T$ and $Tx\notin K^\perp$, there exists $z\in K$ such that $\langle Tx,\eta(z,x)\rangle>0$, which shows that the second inequality in (2.12) is impossible. Therefore,
$$\langle Ty,\eta(x,y)\rangle+\rho|\theta(x,y)|^2\le0, \tag{2.13}$$

hence $T$ is relaxed (ρ-θ)-η-invariant pseudomonotone on $K_T$.

Remark 2.9. Lemmas 2.6 and 2.8 generalize the results of Bai et al. [10] (Lemma 2.1 and Proposition 2.1) from the case of relaxed μ-quasimonotone operators to relaxed (ρ-θ)-η-invariant quasimonotone operators.

Definition 2.10. Let $f:K\to2^X$ be a set-valued mapping. Then $f$ is said to be a KKM mapping if for any finite subset $\{y_1,y_2,\dots,y_n\}$ of $K$ one has $\operatorname{co}\{y_1,y_2,\dots,y_n\}\subset\bigcup_{i=1}^nf(y_i)$, where $\operatorname{co}\{y_1,y_2,\dots,y_n\}$ denotes the convex hull of $y_1,y_2,\dots,y_n$.

Lemma 2.11 (see [11]). Let $M$ be a nonempty subset of a Hausdorff topological vector space $X$ and let $f:M\to2^X$ be a KKM mapping. If $f(y)$ is closed in $X$ for all $y\in M$ and compact for some $y\in M$, then $\bigcap_{y\in M}f(y)\neq\emptyset$.

Theorem 2.12 (see [12]). A bounded, closed, convex subset of a reflexive Banach space is weakly compact.

3. VLIP with Relaxed (ρ-θ)-η-Invariant Pseudomonotonicity

In this section, we establish the existence of solutions for the VLIP, using relaxed (ρ-θ)-η-invariant pseudomonotone mappings in reflexive Banach spaces.

Theorem 3.1. Let $T:K\to X^*$ be an η-hemicontinuous and relaxed (ρ-θ)-η-invariant pseudomonotone mapping, and let the following hold:
(i) $\eta(x,y)+\eta(y,x)=0$ and $\theta(x,y)+\theta(y,x)=0$, for all $x,y\in K$;
(ii) $\theta(x,y)$ is convex in the second argument and concave in the first argument;
(iii) for fixed $z,y\in K$, the mapping $x\mapsto\langle Tz,\eta(x,y)\rangle$ is convex.
Then the following problems (a) and (b) are equivalent:
(a) find $x\in K$ such that $\langle Tx,\eta(y,x)\rangle\ge0$, for all $y\in K$;
(b) find $x\in K$ such that $\langle Ty,\eta(x,y)\rangle+\rho|\theta(x,y)|^2\le0$, for all $y\in K$.

Proof. Assume that $x$ is a solution of (a). Then (b) follows from the definition of relaxed (ρ-θ)-η-invariant pseudomonotonicity of $T$. Conversely, suppose that there exists an $x\in K$ satisfying (b), that is,
$$\langle Ty,\eta(x,y)\rangle+\rho|\theta(x,y)|^2\le0,\quad\forall y\in K. \tag{3.1}$$
Choose any point $y\in K$ and consider $x_t=ty+(1-t)x$, $t\in(0,1]$; then $x_t\in K$.


Case I. When $\rho=0$, (3.1) gives $\langle Tx_t,\eta(x,x_t)\rangle\le0$, that is, $\langle Tx_t,-\eta(x_t,x)\rangle\le0$, and hence
$$\langle Tx_t,\eta(x_t,x)\rangle\ge0. \tag{3.2}$$
Now,
$$\langle Tx_t,\eta(x_t,x)\rangle\le t\langle Tx_t,\eta(y,x)\rangle+(1-t)\langle Tx_t,\eta(x,x)\rangle=t\langle Tx_t,\eta(y,x)\rangle. \tag{3.3}$$
From (3.2) and (3.3) we have
$$\langle Tx_t,\eta(y,x)\rangle\ge0. \tag{3.4}$$
Since $T$ is η-hemicontinuous in the first argument, taking $t\to0^+$ we get
$$\langle Tx,\eta(y,x)\rangle\ge0,\quad\forall y\in K. \tag{3.5}$$

Case II. When $\rho<0$, let $\rho=-k^2$. From (3.1) we have
$$\langle Tx_t,\eta(x,x_t)\rangle\le k^2|\theta(x,x_t)|^2\ \Longrightarrow\ \langle Tx_t,\eta(x_t,x)\rangle\ge-k^2|\theta(x,x_t)|^2. \tag{3.6}$$
From (ii), (iii), (3.3), and (3.6) we get
$$t\langle Tx_t,\eta(y,x)\rangle\ge-k^2t^2|\theta(x,y)|^2,\quad\text{which implies}\quad\langle Tx_t,\eta(y,x)\rangle\ge-k^2t|\theta(x,y)|^2. \tag{3.7}$$
Since $T$ is η-hemicontinuous, taking $t\to0^+$ we have
$$\langle Tx,\eta(y,x)\rangle\ge0,\quad\forall y\in K. \tag{3.8}$$

Case III. When $\rho>0$, let $\rho=k^2$. From (3.1) we have
$$\langle Tx_t,\eta(x,x_t)\rangle\le-k^2|\theta(x,x_t)|^2\ \Longrightarrow\ \langle Tx_t,\eta(x_t,x)\rangle\ge k^2|\theta(x,x_t)|^2. \tag{3.9}$$


From (i), (ii), (iii), (3.3), and (3.9) we get
$$\langle Tx_t,\eta(y,x)\rangle\ge k^2t|\theta(y,x)|^2. \tag{3.10}$$
Since $T$ is η-hemicontinuous, taking $t\to0^+$ we have
$$\langle Tx,\eta(y,x)\rangle\ge0,\quad\forall y\in K. \tag{3.11}$$

Theorem 3.2. Let $K$ be a nonempty bounded closed convex subset of a real reflexive Banach space $X$, and let $T:K\to X^*$ be an η-hemicontinuous and relaxed (ρ-θ)-η-invariant pseudomonotone mapping. Let the following hold:
(i) $\eta(x,y)+\eta(y,x)=0$ and $\theta(x,y)+\theta(y,x)=0$, for all $x,y\in K$;
(ii) $\theta(x,y)$ is convex in the second argument, concave in the first argument, and lower semicontinuous in the first argument;
(iii) for fixed $z,y\in K$, the mapping $x\mapsto\langle Tz,\eta(x,y)\rangle$ is convex and lower semicontinuous.
Then problem (1.1) has a solution.

Proof. Consider the set-valued mappings $F:K\to2^X$ and $G:K\to2^X$ such that
$$F(y)=\bigl\{x\in K:\langle Tx,\eta(y,x)\rangle\ge0\bigr\},\qquad G(y)=\bigl\{x\in K:\langle Ty,\eta(x,y)\rangle+\rho|\theta(x,y)|^2\le0\bigr\},\quad\forall y\in K. \tag{3.12}$$
It is easy to see that $x\in K$ solves the VLIP if and only if $x\in\bigcap_{y\in K}F(y)$. Thus it suffices to prove $\bigcap_{y\in K}F(y)\neq\emptyset$. To prove this, first we will show that $F$ is a KKM mapping. If possible, let $F$ not be a KKM mapping. Then there exists $\{x_1,x_2,\dots,x_m\}\subset K$ such that $\operatorname{co}\{x_1,x_2,\dots,x_m\}\not\subseteq\bigcup_{i=1}^mF(x_i)$; that means there exists an $x_0\in\operatorname{co}\{x_1,x_2,\dots,x_m\}$, $x_0=\sum_{i=1}^mt_ix_i$ with $t_i\ge0$, $i=1,2,\dots,m$, $\sum_{i=1}^mt_i=1$, but $x_0\notin\bigcup_{i=1}^mF(x_i)$. Hence $\langle Tx_0,\eta(x_i,x_0)\rangle<0$ for $i=1,2,\dots,m$. From (i) and (iii) it follows that
$$0=\langle Tx_0,\eta(x_0,x_0)\rangle\le\sum_{i=1}^mt_i\langle Tx_0,\eta(x_i,x_0)\rangle<0, \tag{3.13}$$
which is a contradiction. Hence $F$ is a KKM mapping. From the relaxed (ρ-θ)-η-invariant pseudomonotonicity of $T$ it follows that $F(y)\subset G(y)$ for all $y\in K$. Therefore $G$ is also a KKM mapping. Since $K$ is closed, bounded, and convex, it is weakly compact. From the assumptions, we know that $G(y)$ is weakly closed for all $y\in K$, because $x\mapsto\langle Tz,\eta(x,y)\rangle$ and $x\mapsto\rho|\theta(x,y)|^2$ are lower semicontinuous. Therefore, $G(y)$ is weakly compact in $K$ for each $y\in K$. Therefore, from Lemma 2.11 and Theorem 3.1 it follows that $\bigcap_{y\in K}F(y)=\bigcap_{y\in K}G(y)\neq\emptyset$. So there exists $x\in K$ such that $\langle Tx,\eta(y,x)\rangle\ge0$ for all $y\in K$; that is, (1.1) has a solution.


Theorem 3.3. Let K be a nonempty unbounded closed convex subset of a real reflexive Banach space X. T : K → X ∗ is η-hemicontinuous and relaxed (ρ-θ)-η-invariant pseudomonotone mapping. Let the following hold: i ηx, y  ηy, x  0, and θx, y  θy, x  0, for all x, y ∈ K; ii θx, y is convex in second argument and concave in first argument, and lower semicontinuous in the first argument; iii for a fixed z, y ∈ K, the mapping x → T z, ηx, y is convex and lower semicontinuous; iv T is weakly η-coercive, that is, there exits x0 ∈ K such that T x, ηx, x0  > 0, whenever x → ∞ and x ∈ K. Then 1.1 is solvable. Proof. For r > 0, assume Kr  {y ∈ K : y ≤ r}. Consider the problem: find xr ∈ K ∩ Kr such that    T xr , η y, xr ≥ 0,

∀y ∈ K ∩ Kr .

3.14

By Theorem 3.2, problem (3.14) has at least one solution x_r ∈ K ∩ K_r. Choose ‖x₀‖ < r with x₀ as in condition (iv). Then x₀ ∈ K ∩ K_r and

$$\langle T(x_r), \eta(x_0, x_r)\rangle \ge 0. \tag{3.15}$$

From (i) we get

$$\langle T(x_r), \eta(x_0, x_r)\rangle = -\langle T(x_r), \eta(x_r, x_0)\rangle. \tag{3.16}$$

If ‖x_r‖ = r for all r, we may choose r large enough so that assumption (iv) together with (3.16) implies ⟨T(x_r), η(x₀, x_r)⟩ < 0, which contradicts (3.15). Therefore there exists r such that ‖x_r‖ < r. For any y ∈ K, we can choose 0 < t < 1 small enough that x_r + t(y − x_r) ∈ K ∩ K_r. From (3.14) it follows that

$$0 \le \langle T(x_r), \eta(x_r + t(y - x_r), x_r)\rangle \le t\langle T(x_r), \eta(y, x_r)\rangle + (1-t)\langle T(x_r), \eta(x_r, x_r)\rangle = t\langle T(x_r), \eta(y, x_r)\rangle. \tag{3.17}$$

Hence ⟨T(x_r), η(y, x_r)⟩ ≥ 0 for all y ∈ K.
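The truncation device in this proof can be watched in action numerically. The sketch below is an illustrative construction of our own, not taken from the paper: K = [0, ∞), η(y, x) = y − x, and the coercive map T(x) = x − 2, with the truncated inequality ⟨T(x_r), η(y, x_r)⟩ ≥ 0 checked by grid search on K ∩ K_r = [0, r].

```python
import numpy as np

# Illustrative data (not from the paper): T(x) = x - 2 on K = [0, inf),
# eta(y, x) = y - x, so the VI asks <T(x), y - x> >= 0 for all y in K.
def T(x):
    return x - 2.0

def solves_truncated_vi(x, grid, tol=1e-9):
    """Check <T(x), y - x> >= 0 for every y on a grid of K_r = [0, r]."""
    return bool(np.all(T(x) * (grid - x) >= -tol))

def truncated_solution(r, n=2001):
    """Grid search for a solution of the VI restricted to [0, r]."""
    grid = np.linspace(0.0, r, n)
    for x in grid:
        if solves_truncated_vi(x, grid):
            return x
    return None

x_small = truncated_solution(1.0)   # truncation active: solution on the boundary, x_r = r
x_large = truncated_solution(10.0)  # r large enough: interior solution, x_r = 2
```

For r = 1 the truncated solution sits on the boundary (x_r = r), while for r = 10 it is interior, x_r ≈ 2 with T(x_r) ≈ 0, and hence solves the inequality on all of K, mirroring the argument above.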

4. Equilibrium Problem with (ρ-θ)-Pseudomonotone Mappings

Let K be a nonempty subset of a real reflexive Banach space X, and consider a bifunction f : K × K → ℝ. The equilibrium problem (in short, EP) is to find x̄ ∈ K such that

$$f(\overline{x}, y) \ge 0, \quad \forall y \in K. \tag{4.1}$$
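As a quick sanity check, whether a given point solves (4.1) can be tested directly on a discretized K. The sketch below uses two illustrative bifunctions of our own choosing, not from the paper: f(x, y) = φ(y) − φ(x), whose EP solutions are exactly the minimizers of φ, and f(x, y) = (x − T(x))(y − x), whose EP solutions are exactly the fixed points of T; both reductions are recalled later in this section.

```python
import numpy as np

K = np.linspace(-2.0, 2.0, 401)  # discretization of the convex set K = [-2, 2]

def solves_ep(f, x, tol=1e-9):
    """x solves (4.1) iff f(x, y) >= 0 for all y in K (checked on the grid)."""
    return all(f(x, y) >= -tol for y in K)

# Optimization case: phi(x) = (x - 1)^2 and f(x, y) = phi(y) - phi(x).
phi = lambda x: (x - 1.0) ** 2
f_opt = lambda x, y: phi(y) - phi(x)
minimizer = K[np.argmin(phi(K))]          # grid minimizer of phi, x = 1

# Fixed point case: T(x) = x / 2 + 1 / 2 has unique fixed point x = 1,
# and f(x, y) = (x - T(x)) * (y - x).
T = lambda x: 0.5 * x + 0.5
f_fix = lambda x, y: (x - T(x)) * (y - x)
```

On the grid, the minimizer of φ (and only it) passes the EP test for f_opt, and the fixed point of T (and only it) passes the EP test for f_fix.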


The equilibrium problem was first introduced and studied by Blum and Oettli [2] in 1994. The EP has many applications in nonlinear analysis, optimization, and game theory, and it contains many problems as particular cases, for example mathematical programming problems, complementarity problems, Nash equilibrium problems in noncooperative games, variational inequality problems, fixed point problems, and minimax inequality problems. To explain our interest in the EP, we next describe a number of its particular cases, which have been discussed in [2].

(i) Optimization problem: let φ : K → ℝ and consider the minimization problem

(M) find x̄ ∈ K such that φ(x̄) ≤ φ(y), for all y ∈ K.

If we set f(x, y) = φ(y) − φ(x) for all x, y ∈ K, then problems (EP) and (M) are equivalent.

(ii) Variational inequality problem: define f(x, y) = ⟨T(x), y − x⟩, where T : K → X* is a given mapping and X* denotes the space of all continuous linear functionals on X. Then the EP collapses into the classical variational inequality problem

(VIP) find x̄ ∈ K such that ⟨T(x̄), y − x̄⟩ ≥ 0, for all y ∈ K.

(iii) Fixed point problem: let X be a Hilbert space and K a nonempty closed convex subset of X. Let T : K → K be a given mapping. The fixed point problem is to

(FPP) find x̄ ∈ K such that x̄ = T(x̄).

Set f(x, y) = ⟨x − T(x), y − x⟩. Then x̄ solves the EP if and only if x̄ is a solution of (FPP).

The purpose of this section is to establish the existence of solutions of equilibrium problems with (ρ-θ)-pseudomonotone mappings in reflexive Banach spaces. We first introduce the notions of (ρ-θ)-monotone and (ρ-θ)-pseudomonotone mappings, and we provide examples showing that (ρ-θ)-monotone mappings generalize weakly monotone maps, and that (ρ-θ)-pseudomonotone mappings generalize pseudomonotone and weakly pseudomonotone maps. Let K be a nonempty subset of a real reflexive Banach space X. Consider the functions f : K × K → ℝ and θ : K × K → ℝ, and let ρ ∈ ℝ.

Definition 4.1.
The function f : K × K → ℝ is said to be (ρ-θ)-monotone with respect to θ : K × K → ℝ if, for all x, y ∈ K, one has

$$f(x, y) + f(y, x) \le \rho\,|\theta(x, y)|^2. \tag{4.2}$$

When (i) ρ > 0 and θ(x, y) = ‖x − y‖, f is weakly monotone; (ii) ρ = 0, f is monotone; (iii) ρ < 0 and θ(x, y) = ‖x − y‖, f is strongly monotone. We now give an example showing that (ρ-θ)-monotonicity is a generalization of both monotonicity and weak monotonicity.


Example 4.2. Let K = [1, 10], and let the functions f and θ be defined by

$$f(x, y) = x^2 + y^2 + 1, \qquad \theta(x, y) = \sqrt{2(x^2 + y^2) + 4}. \tag{4.3}$$

Then

$$f(x, y) + f(y, x) = 2\left(x^2 + y^2 + 1\right) \le \rho\left(2x^2 + 2y^2 + 4\right), \quad \text{for any } \rho \ge 1. \tag{4.4}$$

Therefore f is (ρ-θ)-monotone with respect to θ. On the other hand, there exists no constant ρ > 0 such that f(x, y) + f(y, x) ≤ ρ‖x − y‖² for all x, y ∈ K: if x and y are chosen so that their difference is very small, the right-hand side of this inequality tends to zero while the left-hand side is always greater than 2. Hence f is not weakly monotone. Moreover, since f is positive valued, f is not monotone.

Definition 4.3. The function f : K × K → ℝ is said to be (ρ-θ)-pseudomonotone with respect to θ if, for any pair of distinct points x, y ∈ K, one has

$$f(x, y) \ge 0 \quad \text{implies} \quad f(y, x) \le \rho\,|\theta(x, y)|^2. \tag{4.5}$$
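Both claims of Example 4.2 can be spot-checked numerically: with ρ = 1 the (ρ-θ)-monotonicity inequality (4.2) holds across a grid over K = [1, 10], while the weak-monotonicity bound ρ‖x − y‖² fails for nearby points no matter how large ρ is taken. The grid check is only a sanity test, not a proof.

```python
import numpy as np

# f and theta from Example 4.2 on K = [1, 10].
f = lambda x, y: x**2 + y**2 + 1.0
theta_sq = lambda x, y: 2.0 * (x**2 + y**2) + 4.0   # |theta(x, y)|^2

xs = np.linspace(1.0, 10.0, 200)
X, Y = np.meshgrid(xs, xs)

# (rho-theta)-monotonicity (4.2) with rho = 1 holds everywhere on the grid:
# f(x, y) + f(y, x) = 2x^2 + 2y^2 + 2 <= 2x^2 + 2y^2 + 4.
rho_theta_ok = bool(np.all(f(X, Y) + f(Y, X) <= theta_sq(X, Y)))

# Weak monotonicity fails: for x close to y the bound rho * (x - y)^2 is
# tiny while f(x, y) + f(y, x) > 2, however large rho is.
x, y, rho = 2.0, 2.0 + 1e-6, 1e6
weak_monotone_fails = f(x, y) + f(y, x) > rho * (x - y) ** 2
```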

Every (ρ-θ)-monotone mapping is (ρ-θ)-pseudomonotone with respect to the same ρ and θ. The converse, however, is not true in general, as the following counterexample shows.

Example 4.4. Let the functions f : [0, π/2] × [0, π/2] → ℝ and θ : [0, π/2] × [0, π/2] → ℝ be defined by

$$f(x, y) = (\cos y + 6)^2 - \cos^2 x, \qquad \theta(x, y) = \sqrt{50 + \cos x - \cos y}, \tag{4.6}$$

respectively. Take ρ = 1. We have to show that f(x, y) ≥ 0 implies f(y, x) ≤ ρ|θ(x, y)|². Clearly,

$$f(x, y) = (\cos y + 6)^2 - \cos^2 x \ge 0, \quad \forall x, y \in \left[0, \frac{\pi}{2}\right]. \tag{4.7}$$

Now,

$$f(y, x) - \rho\,|\theta(x, y)|^2 = (\cos x + 6)^2 - \cos^2 y - (50 + \cos x - \cos y) = (\cos x + 6)^2 + \cos y - \left(50 + \cos^2 y + \cos x\right) \le 0. \tag{4.8}$$

Hence f is a (ρ-θ)-pseudomonotone mapping with respect to θ. But f is not a (ρ-θ)-monotone mapping with respect to the same θ. In fact,

$$f(x, y) + f(y, x) = (\cos y + 6)^2 + (\cos x + 6)^2 - \cos^2 x - \cos^2 y = 12(\cos x + \cos y) + 72, \qquad |\theta(x, y)|^2 = 50 + \cos x - \cos y, \tag{4.9}$$

and the left-hand side is always at least 72, while |θ(x, y)|² is at most 51, so (4.2) fails for ρ = 1.

Note that in the above example f is neither monotone nor pseudomonotone.

Definition 4.5. The function f : K × K → ℝ is said to be (ρ-θ)-quasimonotone with respect to θ if, for any pair of distinct points x, y ∈ K, one has

$$f(x, y) > 0 \quad \text{implies} \quad f(y, x) \le \rho\,|\theta(x, y)|^2. \tag{4.10}$$
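The three claims surrounding Example 4.4, namely that the premise f(x, y) ≥ 0 always holds, that the pseudomonotonicity conclusion (4.5) with ρ = 1 holds, and that the monotonicity inequality (4.2) fails, can be spot-checked on a grid over [0, π/2]²:

```python
import numpy as np

# f and theta from Example 4.4 on [0, pi/2] x [0, pi/2], with rho = 1.
f = lambda x, y: (np.cos(y) + 6.0) ** 2 - np.cos(x) ** 2
theta_sq = lambda x, y: 50.0 + np.cos(x) - np.cos(y)   # |theta(x, y)|^2

ts = np.linspace(0.0, np.pi / 2, 200)
X, Y = np.meshgrid(ts, ts)

f_nonneg = bool(np.all(f(X, Y) >= 0.0))                 # premise of (4.5)
# note: f(Y, X) evaluates f(y, x) at the grid point (x, y)
pseudo_ok = bool(np.all(f(Y, X) <= theta_sq(X, Y)))     # conclusion of (4.5)
mono_fails = bool(np.any(f(X, Y) + f(Y, X) > theta_sq(X, Y)))  # (4.2) violated
```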

Next, we will show that (ρ-θ)-quasimonotonicity and (ρ-θ)-pseudomonotonicity are equivalent under certain conditions.

Lemma 4.6. Let f be hemicontinuous in the first argument and (ρ-θ)-quasimonotone on K. Assume that f is concave in the second argument and that θ is hemicontinuous in the second argument. Then for every x, y ∈ K with f(y, x) ≥ 0, one has either f(x, y) ≤ ρ|θ(y, x)|² or f(y, z) ≤ 0 for all z ∈ K.

Proof. Suppose there exists some z ∈ K such that f(y, z) > 0. We have to prove that f(x, y) ≤ ρ|θ(y, x)|². Let x_t = tz + (1 − t)x, 0 < t ≤ 1. By the concavity of f in the second argument,

$$f(y, x_t) \ge t f(y, z) + (1-t) f(y, x) > 0. \tag{4.11}$$

Since f is (ρ-θ)-quasimonotone on K, this implies that

$$f(x_t, y) \le \rho\,|\theta(y, x_t)|^2. \tag{4.12}$$

Now, letting t → 0, we obtain

$$f(x, y) \le \rho\,|\theta(y, x)|^2. \tag{4.13}$$

This completes the proof.

Theorem 4.7. Let K be a nonempty convex subset of a real reflexive Banach space X. Suppose f : K × K → ℝ is (ρ-θ)-pseudomonotone with respect to θ and hemicontinuous in the first argument, and that the following conditions hold:

(i) f(x, x) = 0, for all x ∈ K;

(ii) for fixed z ∈ K, the mapping x ↦ f(z, x) is convex;

(iii) θ(x, y) + θ(y, x) = 0, for all x, y ∈ K;

(iv) θ is convex in the first argument and concave in the second argument.

Then x̄ ∈ K is a solution of (4.1) if and only if

$$f(y, \overline{x}) \le \rho\,|\theta(\overline{x}, y)|^2, \quad \forall y \in K. \tag{4.14}$$

Proof. Assume that x̄ is a solution of (4.1), that is, f(x̄, y) ≥ 0 for all y ∈ K. Then the (ρ-θ)-pseudomonotonicity of f gives

$$f(y, \overline{x}) \le \rho\,|\theta(\overline{x}, y)|^2, \quad \forall y \in K. \tag{4.15}$$

Conversely, suppose there exists x̄ ∈ K satisfying (4.14), that is,

$$f(y, \overline{x}) \le \rho\,|\theta(\overline{x}, y)|^2, \quad \forall y \in K. \tag{4.16}$$

Choose any point y ∈ K and consider x_t = ty + (1 − t)x̄, t ∈ (0, 1]; then x_t ∈ K.

Case I. ρ = 0. From (4.16) we have

$$f(x_t, \overline{x}) \le 0, \quad \forall y \in K. \tag{4.17}$$

Conditions (i) and (ii) imply that

$$0 = f(x_t, x_t) \le t f(x_t, y) + (1-t) f(x_t, \overline{x}) \;\Longrightarrow\; t\left[f(x_t, \overline{x}) - f(x_t, y)\right] \le f(x_t, \overline{x}). \tag{4.18}$$

From (4.17) and (4.18) we have

$$f(x_t, \overline{x}) - f(x_t, y) \le 0, \quad \forall y \in K. \tag{4.19}$$

Since f is hemicontinuous in the first argument, letting t → 0 yields

$$f(\overline{x}, y) \ge 0, \quad \forall y \in K. \tag{4.20}$$

Hence x̄ is a solution of (4.1).

Case II. ρ < 0; write ρ = −k². From (4.16) we have

$$f(x_t, \overline{x}) \le -k^2\,|\theta(\overline{x}, x_t)|^2. \tag{4.21}$$


Now, using (4.18), (4.21), and (iv), it follows that

$$f(x_t, \overline{x}) - f(x_t, y) \le -k^2 t\,|\theta(\overline{x}, y)|^2. \tag{4.22}$$

Since f is hemicontinuous in the first argument, letting t → 0 gives

$$f(\overline{x}, y) \ge 0, \quad \forall y \in K. \tag{4.23}$$

Case III. ρ > 0; write ρ = k². From (4.16), (4.18), and (iv) we have

$$f(x_t, \overline{x}) \le k^2\,|\theta(\overline{x}, x_t)|^2 \;\Longrightarrow\; f(x_t, \overline{x}) - f(x_t, y) \le k^2 t\,|\theta(y, \overline{x})|^2. \tag{4.24}$$

Since f is hemicontinuous in the first argument, letting t → 0 again gives

$$f(\overline{x}, y) \ge 0, \quad \forall y \in K. \tag{4.25}$$
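Theorem 4.7 is a Minty-type equivalence between (4.1) and (4.14). As an illustrative sanity check, with data of our own choosing rather than from the paper, take f(x, y) = y² − x² on K = [−2, 2] with ρ = 0 and θ ≡ 0; then f(x, y) + f(y, x) = 0, so f is monotone and in particular (ρ-θ)-pseudomonotone, and on a grid the solution sets of the two formulations coincide (both equal {0}):

```python
import numpy as np

K = np.linspace(-2.0, 2.0, 401)
f = lambda x, y: y**2 - x**2   # f(x,y) + f(y,x) = 0: monotone, so rho = 0 works

def solves_ep(x, tol=1e-9):
    """Problem (4.1): f(x, y) >= 0 for all y in K (checked on the grid)."""
    return all(f(x, y) >= -tol for y in K)

def solves_minty(x, tol=1e-9):
    """Condition (4.14) with rho = 0: f(y, x) <= 0 for all y in K."""
    return all(f(y, x) <= tol for y in K)

ep_set = [x for x in K if solves_ep(x)]
minty_set = [x for x in K if solves_minty(x)]
```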

Theorem 4.8. Let K be a nonempty closed bounded convex subset of a real reflexive Banach space X. Suppose f : K × K → ℝ is (ρ-θ)-pseudomonotone with respect to θ and hemicontinuous in the first argument, and that the following conditions hold:

(i) f(x, x) = 0, for all x ∈ K;

(ii) for fixed z ∈ K, the mapping x ↦ f(z, x) is convex and lower semicontinuous;

(iii) θ(x, y) + θ(y, x) = 0, for all x, y ∈ K;

(iv) θ is convex in the first argument, concave in the second argument, and lower semicontinuous in the first argument.

Then problem (4.1) has a solution.

Proof. Consider the two set-valued mappings F : K → 2^X and G : K → 2^X defined by

$$F(y) = \{x \in K : f(x, y) \ge 0\}, \qquad G(y) = \{x \in K : f(y, x) \le \rho\,|\theta(x, y)|^2\}, \quad \forall y \in K. \tag{4.26}$$

It is easy to see that x̄ ∈ K solves the equilibrium problem (4.1) if and only if x̄ ∈ ⋂_{y∈K} F(y). We first show that F is a KKM mapping. Suppose not. Then there exists {x₁, x₂, …, x_m} ⊂ K such that co{x₁, …, x_m} ⊄ ⋃_{i=1}^m F(x_i); that is, there exists x₀ ∈ co{x₁, …, x_m}, x₀ = ∑_{i=1}^m t_i x_i with t_i ≥ 0, i = 1, 2, …, m, ∑_{i=1}^m t_i = 1, but x₀ ∉ ⋃_{i=1}^m F(x_i). Hence f(x₀, x_i) < 0 for i = 1, 2, …, m.


From (i) and (ii) it follows that

$$0 = f(x_0, x_0) \le \sum_{i=1}^{m} t_i f(x_0, x_i) < 0, \tag{4.27}$$

which is a contradiction. Hence F is a KKM mapping. From the (ρ-θ)-pseudomonotonicity of f it follows that F(y) ⊂ G(y) for all y ∈ K, so G is also a KKM mapping. Since K is closed, bounded, and convex, it is weakly compact. From the assumptions, G(y) is weakly closed for every y ∈ K, because x ↦ f(z, x) and x ↦ ρ|θ(x, z)|² are lower semicontinuous; therefore G(y) is weakly compact in K for each y ∈ K. From Lemma 2.11 and Theorem 4.7 it follows that ⋂_{y∈K} F(y) = ⋂_{y∈K} G(y) ≠ ∅, so there exists x̄ ∈ K such that f(x̄, y) ≥ 0 for all y ∈ K; that is, (4.1) has a solution.

Theorem 4.9. Let K be a nonempty unbounded closed convex subset of a real reflexive Banach space X. Suppose f : K × K → ℝ is (ρ-θ)-pseudomonotone with respect to θ and hemicontinuous in the first argument, and that the following conditions hold:

(i) f(x, x) = 0, for all x ∈ K;

(ii) for fixed z ∈ K, the mapping x ↦ f(z, x) is convex and lower semicontinuous;

(iii) θ(x, y) + θ(y, x) = 0, for all x, y ∈ K;

(iv) θ is convex in the first argument, concave in the second argument, and lower semicontinuous in the first argument;

(v) f is weakly coercive, that is, there exists x₀ ∈ K such that

$$f(x, x_0) < 0 \quad \text{whenever } \|x\| \to \infty \text{ and } x \in K. \tag{4.28}$$

Then (4.1) has a solution.

Then 4.1 has a solution. Proof. For r > 0, assume Kr  {y ∈ K : y ≤ r}. Consider the problem: find xr ∈ K ∩ Kr such that   f xr , y ≥ 0,

∀y ∈ K ∩ Kr .

4.29

By Theorem 4.8 we know that the problem 4.29 has at least one solution xr ∈ K ∩ Kr . Choose x0 < r with x0 as in condition v. Then x0 ∈ K ∩ Kr and fxr , x0  ≥ 0.

4.30

If xr  r for all r, we may choose r large enough so that by the assumption v imply that fxr , x0  < 0, which contradicts 4.30.


Therefore there exists r such that ‖x_r‖ < r. For any y ∈ K, we can choose 0 < t < 1 small enough that x_r + t(y − x_r) ∈ K ∩ K_r. From (4.29) it follows that

$$0 \le f(x_r, x_r + t(y - x_r)) \le t f(x_r, y) + (1-t) f(x_r, x_r) = t f(x_r, y). \tag{4.31}$$

Hence f(x_r, y) ≥ 0 for all y ∈ K.
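The same truncation device can be watched in action numerically. The sketch below is an illustrative example of our own, not from the paper: f(x, y) = φ(y) − φ(x) with the coercive choice φ(x) = (x − 3)² on the unbounded set K = [0, ∞), so that condition (v) holds; each truncated problem on [0, r] is solved by grid search, and for large r the truncated solution is interior and solves the full problem.

```python
import numpy as np

# Illustrative coercive data (not from the paper): phi(x) = (x - 3)^2 and
# f(x, y) = phi(y) - phi(x) on K = [0, inf); then f(x, x0) -> -inf as
# x -> inf, which is the weak coercivity condition (v).
phi = lambda x: (x - 3.0) ** 2
f = lambda x, y: phi(y) - phi(x)

def truncated_solution(r, n=2001):
    """Grid search for x_r in [0, r] with f(x_r, y) >= 0 for all y in [0, r]."""
    grid = np.linspace(0.0, r, n)
    for x in grid:
        if np.all(f(x, grid) >= -1e-9):
            return x
    return None

x_small = truncated_solution(1.0)   # truncation active: x_r = r on the boundary
x_large = truncated_solution(10.0)  # ||x_r|| < r: interior solution, x_r = 3
```

For r = 10 the solution x_r ≈ 3 satisfies ‖x_r‖ < r, and f(x_r, y) = φ(y) − φ(x_r) ≥ 0 for every y ≥ 0, exactly as the proof concludes.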

5. Conclusions

The present work has studied, from a theoretical standpoint, the existence of solutions of variational-like inequality problems under a new concept, relaxed (ρ-θ)-η-invariant pseudomonotone maps, in reflexive Banach spaces. We have also obtained existence results for equilibrium problems with (ρ-θ)-pseudomonotone mappings. Further research on generalized monotonicity is needed to advance the study of equilibrium problems and variational inequality problems.

Acknowledgments

The authors are very grateful to the editor and referees for their suggestions, which helped to improve the presentation of this paper. The work of the authors was partially supported by CSIR, New Delhi, Grant 25 0163/08/EMR-II.

References

[1] M.-R. Bai, S.-Z. Zhou, and G.-Y. Ni, "Variational-like inequalities with relaxed η-α pseudomonotone mappings in Banach spaces," Applied Mathematics Letters, vol. 19, no. 6, pp. 547–554, 2006.
[2] E. Blum and W. Oettli, "From optimization and variational inequalities to equilibrium problems," The Mathematics Student, vol. 63, no. 1–4, pp. 123–145, 1994.
[3] Y. P. Fang and N. J. Huang, "Variational-like inequalities with generalized monotone mappings in Banach spaces," Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 327–338, 2003.
[4] B.-S. Lee and G.-M. Lee, "Variational inequalities for (η, θ)-pseudomonotone operators in nonreflexive Banach spaces," Applied Mathematics Letters, vol. 12, no. 5, pp. 13–17, 1999.
[5] D. T. Luc, "Existence results for densely pseudomonotone variational inequalities," Journal of Mathematical Analysis and Applications, vol. 254, no. 1, pp. 291–308, 2001.
[6] N. Hadjisavvas and S. Schaible, "Quasimonotone variational inequalities in Banach spaces," Journal of Optimization Theory and Applications, vol. 90, no. 1, pp. 95–111, 1996.
[7] N. Behera, C. Nahak, and S. Nanda, "Generalized (ρ-θ)-η-invexity and generalized (ρ-θ)-η-invariant-monotonicity," Nonlinear Analysis: Theory, Methods & Applications, vol. 68, no. 8, pp. 2495–2506, 2008.
[8] X. M. Yang, X. Q. Yang, and K. L. Teo, "Generalized invexity and generalized invariant monotonicity," Journal of Optimization Theory and Applications, vol. 117, no. 3, pp. 607–625, 2003.
[9] Y.-Q. Chen, "On the semi-monotone operator theory and applications," Journal of Mathematical Analysis and Applications, vol. 231, no. 1, pp. 177–192, 1999.
[10] M.-R. Bai, S.-Z. Zhou, and G.-Y. Ni, "On the generalized monotonicity of variational inequalities," Computers & Mathematics with Applications, vol. 53, no. 6, pp. 910–917, 2007.
[11] K. Fan, "Some properties of convex sets related to fixed point theorems," Mathematische Annalen, vol. 266, no. 4, pp. 519–537, 1984.
[12] H. Brezis, Analyse Fonctionnelle: Théorie et Applications, Collection Mathématiques Appliquées pour la Maîtrise, Masson, Paris, France, 1983.