JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION Volume 4, Number 2, May 2008

Website: http://AIMsciences.org pp. 287–298

OPTIMALITY CONDITIONS, DUALITY AND SADDLE POINTS FOR NONDIFFERENTIABLE MULTIOBJECTIVE FRACTIONAL PROGRAMS

Xian-Jun Long and Nan-Jing Huang¹
Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, China

Zhi-Bin Liu
Department of Applied Mathematics, Southwest Petroleum University, Chengdu, Sichuan 610500, China

(Communicated by Soon-Yi Wu)

Abstract. In this paper, a class of nondifferentiable multiobjective fractional programs is studied, in which every component of the objective function contains a term involving the support function of a compact convex set. Kuhn-Tucker necessary and sufficient optimality conditions, duality and saddle point results for weakly efficient solutions of the nondifferentiable multiobjective fractional programming problems are given. The results presented in this paper improve and extend some of the corresponding results in the literature.

1. Introduction. Multiobjective (fractional) programming problems have been studied by many authors in recent years (see, for example, [1]-[3], [6], [8]-[11], [13], [15], [16]). In particular, Bector et al. [2] obtained Fritz John and Karush-Kuhn-Tucker necessary and sufficient optimality conditions for a class of nondifferentiable convex multiobjective fractional programming problems and established some duality theorems and saddle-point results for such problems. Based on the ideas of Bector et al. [2], Liu [8, 9] derived some necessary and sufficient optimality conditions and duality theorems for a class of nonsmooth multiobjective fractional programming problems involving pseudoinvex functions or (F, ρ)-convex functions. Recently, Liang et al. [10] introduced the concept of (F, α, ρ, d)-convexity and obtained some optimality conditions and duality results for fractional programming problems. Liang et al. [11] further obtained some optimality conditions and duality results for multiobjective fractional programming problems with (F, α, ρ, d)-convex functions.

2000 Mathematics Subject Classification. Primary: 90C26, 90C29; Secondary: 90C46.
Key words and phrases. Nondifferentiable multiobjective fractional programming, Kuhn-Tucker optimality condition, duality, saddle point, weakly efficient solution, (F, α, ρ, d)-convex function.
This work was supported by the National Natural Science Foundation of China (10671135), the Specialized Research Fund for the Doctoral Program of Higher Education (20060610005) and the Open Fund (PLN0703) of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).
1 Corresponding author.


On the other hand, Mond and Schechter [12] studied nondifferentiable symmetric duality, in which the objective functions contain a support function. Following the approach of Mond and Schechter [12], Yang et al. [17] studied generalized dual problems for a class of nondifferentiable multiobjective programs. Very recently, Kim et al. [6] obtained some necessary and sufficient optimality conditions and duality results for nondifferentiable multiobjective fractional programming problems in which the objective function contains a support function.

Inspired and motivated by [2, 6, 11], in this paper we study a class of nondifferentiable multiobjective fractional programs in which each component of the objective function contains a term involving the support function of a compact convex set. We give some Kuhn-Tucker necessary and sufficient optimality conditions, duality and saddle-point results for weakly efficient solutions of nondifferentiable multiobjective fractional programming problems under the (F, α, ρ, d)-convexity assumptions. The results presented in this paper improve and extend some of the corresponding results in [2], [6] and [11].

2. Preliminaries. In this paper, we consider the following multiobjective fractional programming problem:

(MFP)   min ( (f1(x) + s(x|C1))/g1(x), (f2(x) + s(x|C2))/g2(x), ..., (fp(x) + s(x|Cp))/gp(x) )
        s.t.  h(x) ≤ 0,  x ∈ D,

where D is an open subset of Rn, f = (f1, f2, ..., fp) : D → Rp, g = (g1, g2, ..., gp) : D → Rp and h = (h1, h2, ..., hm) : D → Rm are continuously differentiable functions over D. Suppose that f(x) ≥ 0 and g(x) > 0 for all x ∈ D; Ci, for each i ∈ {1, 2, ..., p}, is a compact convex subset of Rn, and s(x|Ci) denotes the support function of Ci evaluated at x, defined by s(x|Ci) = max{⟨x, w⟩ : w ∈ Ci}. Let C = C1 × C2 × ... × Cp. Let S = {x ∈ D : h(x) ≤ 0} be the set of all feasible solutions and let I(x) := {i : hi(x) = 0} for any x ∈ D. Let

ki(x) = s(x|Ci),   i = 1, 2, ..., p.

Then ki is a convex function and ∂ki(x) = {w ∈ Ci : ⟨w, x⟩ = s(x|Ci)}, where ∂ki is the subdifferential of ki (see [12]).

Definition 2.1. [14] A functional F : D × D × Rn → R is sublinear if, for any x, y ∈ D,

F(x, y; a + b) ≤ F(x, y; a) + F(x, y; b),   ∀a, b ∈ Rn,

and

F(x, y; αa) = αF(x, y; a),   ∀α ∈ R, α ≥ 0, a ∈ Rn.
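For concreteness, the support function s(x|C) can be evaluated in closed form for simple sets C. The following sketch is our own illustration, not an example from the paper: C is assumed to be the unit box [-1, 1]², for which s(x|C) = |x1| + |x2|, and the convexity of k(x) = s(x|C) claimed above is checked numerically via the midpoint inequality.

```python
import numpy as np

# Support function s(x|C) = max{<x, w> : w in C} of a compact convex set.
# Illustrative assumption: C is the unit box [-1, 1]^2, so the maximizer
# is w_i = sign(x_i) and s(x|C) = |x_1| + |x_2| in closed form.
def support_box(x):
    return float(np.abs(x).sum())

# k(x) = s(x|C) is convex: check the midpoint inequality on random pairs.
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    mid = support_box((x + y) / 2)
    assert mid <= (support_box(x) + support_box(y)) / 2 + 1e-12

print(support_box(np.array([3.0, -4.0])))  # 7.0
```

The brute-force check above only probes the midpoint inequality, but for this C convexity also follows directly since s(·|C) is a pointwise maximum of linear functions.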

Definition 2.2. [10] Let F : D × D × Rn → R be a sublinear functional, let the function φ : D → R be differentiable at x0 ∈ D, α : D × D → R+\{0}, ρ ∈ R, and d : D × D → R. The function φ is said to be (F, α, ρ, d)-convex at x0 if

φ(x) − φ(x0) ≥ F(x, x0; α(x, x0)∇φ(x0)) + ρd²(x, x0),   ∀x ∈ D.

The function φ is said to be (F, α, ρ, d)-convex on D if it is (F, α, ρ, d)-convex at every point in D.
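To see what Definition 2.2 asks for in the simplest case, take F(x, x0; a) = ⟨a, x − x0⟩ (sublinear, in fact linear, in its third argument), α ≡ 1 and ρ = 0: the definition then collapses to the ordinary gradient inequality for convex functions. A minimal numerical sketch, with φ(x) = ‖x‖² chosen purely as an illustrative assumption:

```python
import numpy as np

# With F(x, x0; a) = <a, x - x0>, alpha = 1, rho = 0, Definition 2.2 reduces
# to phi(x) - phi(x0) >= <grad phi(x0), x - x0>, the usual convexity test.
def phi(x):      return float(x @ x)   # ||x||^2, convex
def grad_phi(x): return 2.0 * x

def F(x, x0, a):  # sublinear in its third argument
    return float(a @ (x - x0))

rng = np.random.default_rng(1)
ok = all(
    phi(x) - phi(x0) >= F(x, x0, grad_phi(x0)) - 1e-12
    for x, x0 in (rng.normal(size=(2, 3)) for _ in range(200))
)
print(ok)  # True
```

The inequality holds here with exact slack ‖x − x0‖², which is why ρ = 0 and d ≡ 0 suffice for this φ.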


Remark 1.
1. If α(x, x0) = 1, then the (F, α, ρ, d)-convexity is the same as the (F, ρ)-convexity in [14];
2. If F(x, x0; α(x, x0)∇φ(x0)) = α(x, x0)∇φ(x0)η(x, x0) for a certain mapping η : D × D → Rn, then the (F, α, ρ, d)-convexity is the same as the (V, ρ)-convexity in [7];
3. If F(x, x0; α(x, x0)∇φ(x0)) = ∇φ(x0)η(x, x0) for a certain mapping η : D × D → Rn, then the (F, α, ρ, d)-convexity is the same as the ρ-convexity in [5];
4. If F(x, x0; α(x, x0)∇φ(x0)) = ∇φ(x0)η(x, x0) for a certain mapping η : D × D → Rn and ρ = 0, then the (F, α, ρ, d)-convexity is the same as the invexity in [4].

Theorem 2.3. Assume that f and g are two real-valued differentiable functions defined on D such that f(x) + ⟨w, x⟩ ≥ 0 and g(x) > 0 for all x ∈ D. If f(·) + ⟨w, ·⟩ and −g(·) are (F, α, ρ, d)-convex at x0 ∈ D, then [f(·) + ⟨w, ·⟩]/g(·) is (F, ᾱ, ρ̄, d̄)-convex at x0, where

ᾱ(x, x0) = α(x, x0)g(x0)/g(x),   ρ̄ = ρ(1 + (f(x0) + ⟨w, x0⟩)/g(x0)),   d̄(x, x0) = d(x, x0)/√(g(x)).

Proof. Let k(x) = s(x|C) and w ∈ ∂k(x0). Then, for any x, x0 ∈ D,

(f(x) + ⟨w, x⟩)/g(x) − (f(x0) + ⟨w, x0⟩)/g(x0)
= [f(x) + ⟨w, x⟩ − f(x0) − ⟨w, x0⟩]/g(x) − [f(x0) + ⟨w, x0⟩] · [g(x) − g(x0)]/(g(x)g(x0)).

Since f(·) + ⟨w, ·⟩ and −g(·) are (F, α, ρ, d)-convex and F is sublinear, we have

(f(x) + ⟨w, x⟩)/g(x) − (f(x0) + ⟨w, x0⟩)/g(x0)
≥ (1/g(x)) · [F(x, x0; α(x, x0)∇(f(x0) + ⟨w, x0⟩)) + ρd²(x, x0)]
  + [(f(x0) + ⟨w, x0⟩)/(g(x)g(x0))] · [F(x, x0; −α(x, x0)∇g(x0)) + ρd²(x, x0)]
= F(x, x0; (α(x, x0)/g(x))∇(f(x0) + ⟨w, x0⟩)) + ρd²(x, x0)/g(x)
  + F(x, x0; −(α(x, x0)[f(x0) + ⟨w, x0⟩]/(g(x)g(x0)))∇g(x0)) + ρ[f(x0) + ⟨w, x0⟩]d²(x, x0)/(g(x)g(x0))
≥ F(x, x0; (α(x, x0)/g(x)) · [g(x0)∇(f(x0) + ⟨w, x0⟩) − (f(x0) + ⟨w, x0⟩)∇g(x0)]/g(x0))
  + ρd²(x, x0)/g(x) + ρ[f(x0) + ⟨w, x0⟩]d²(x, x0)/(g(x)g(x0))
= F(x, x0; (α(x, x0)g(x0)/g(x)) · [g(x0)∇(f(x0) + ⟨w, x0⟩) − (f(x0) + ⟨w, x0⟩)∇g(x0)]/g²(x0))
  + ρd²(x, x0)/g(x) + ρ[f(x0) + ⟨w, x0⟩]d²(x, x0)/(g(x)g(x0))
= F(x, x0; (α(x, x0)g(x0)/g(x))∇((f(x0) + ⟨w, x0⟩)/g(x0))) + ρ(1 + (f(x0) + ⟨w, x0⟩)/g(x0)) · d²(x, x0)/g(x).


Denote

ᾱ(x, x0) = α(x, x0)g(x0)/g(x),   ρ̄ = ρ(1 + (f(x0) + ⟨w, x0⟩)/g(x0)),   d̄(x, x0) = d(x, x0)/√(g(x)).

It follows that

(f(x) + ⟨w, x⟩)/g(x) − (f(x0) + ⟨w, x0⟩)/g(x0) ≥ F(x, x0; ᾱ(x, x0)∇((f(x0) + ⟨w, x0⟩)/g(x0))) + ρ̄d̄²(x, x0)

for all x ∈ D. This completes the proof.

3. Optimality conditions. In this section, we derive the Kuhn-Tucker necessary and sufficient optimality conditions for weakly efficient solutions of (MFP). By a method similar to the proof of Theorem 2.2 in Kim et al. [6], we can easily obtain the following Kuhn-Tucker necessary optimality condition for (MFP).

Theorem 3.1. (Kuhn-Tucker Necessary Optimality Condition) Let x0 ∈ S be a weakly efficient solution of (MFP) and assume that there exists z* ∈ Rn such that ⟨∇hj(x0), z*⟩ > 0 for j ∈ I(x0). Then there exist λi ≥ 0, i = 1, 2, ..., p, µj ≥ 0, j = 1, 2, ..., m, and wi ∈ Ci, i = 1, 2, ..., p, such that

Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0)) + Σ_{j=1}^m µj ∇hj(x0) = 0,
⟨wi, x0⟩ = s(x0|Ci),  wi ∈ Ci,  i = 1, 2, ..., p,
Σ_{j=1}^m µj hj(x0) = 0,
(λ1, λ2, ..., λp) ≠ (0, 0, ..., 0).
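The conditions of Theorem 3.1 can be checked numerically on a small instance. In the sketch below, the one-dimensional data f(x) = x² + 1, g(x) = x + 2, C = [−1, 1] (so s(x|C) = |x|), h(x) = −x, and the multipliers w = 1, λ = 4, µ = 1 are hypothetical choices of ours, not taken from the paper; x0 = 0 minimizes (f(x) + s(x|C))/g(x) over S = [0, ∞).

```python
# Hypothetical 1-D instance of (MFP), assumed for illustration:
#   f(x) = x^2 + 1,  g(x) = x + 2,  C = [-1, 1] so s(x|C) = |x|,
#   h(x) = -x <= 0   (feasible set S = [0, inf)).
f = lambda x: x**2 + 1
g = lambda x: x + 2
s = lambda x: abs(x)

x0, w, lam, mu = 0.0, 1.0, 4.0, 1.0

def dphi(x, w, eps=1e-6):
    # central-difference derivative of (f(x) + w*x) / g(x)
    phi = lambda t: (f(t) + w * t) / g(t)
    return (phi(x + eps) - phi(x - eps)) / (2 * eps)

dh = -1.0                                   # h(x) = -x, so grad h = -1
stationarity = lam * dphi(x0, w) + mu * dh  # should vanish at x0
print(abs(stationarity) < 1e-6)             # True
print(w * x0 == s(x0))                      # True: <w, x0> = s(x0|C)
print(mu * (-x0) == 0.0)                    # True: complementary slackness
```

Here dphi(0, 1) = 1/4 analytically, so λ = 4, µ = 1 balance the stationarity condition exactly; the code merely confirms it with a finite-difference gradient.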

Now we establish the Kuhn-Tucker sufficient optimality condition for (MFP) under (F, α, ρ, d)-convexity assumptions as follows.

Theorem 3.2. (Kuhn-Tucker Sufficient Optimality Condition) Let x0 ∈ S be a feasible solution of (MFP). Assume that there exist λi ≥ 0, i = 1, 2, ..., p, and µj ≥ 0, j = 1, 2, ..., m, such that

Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0)) + Σ_{j=1}^m µj ∇hj(x0) = 0,   (1)

⟨wi, x0⟩ = s(x0|Ci),  wi ∈ Ci,  i = 1, 2, ..., p,   (2)

Σ_{j=1}^m µj hj(x0) = 0,   (3)

(λ1, λ2, ..., λp) ≠ (0, 0, ..., 0).   (4)

If fi(·) + ⟨wi, ·⟩ and −gi(·) (i = 1, 2, ..., p) are (F, αi, ρi, di)-convex at x0, hj(·) (j = 1, 2, ..., m) are (F, βj, ηj, cj)-convex at x0, and

Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) + Σ_{j=1}^m µj ηj cj²(x, x0)/βj(x, x0) ≥ 0,   (5)

where

ᾱi(x, x0) = αi(x, x0)gi(x0)/gi(x),   ρ̄i = ρi(1 + (fi(x0) + ⟨wi, x0⟩)/gi(x0)),   d̄i(x, x0) = di(x, x0)/√(gi(x)),


then x0 is a weakly efficient solution of (MFP).

Proof. Suppose that x0 is not a weakly efficient solution of (MFP). Then there exists x ∈ S such that

(fi(x) + s(x|Ci))/gi(x) < (fi(x0) + s(x0|Ci))/gi(x0),  i = 1, 2, ..., p.

Since ⟨wi, x0⟩ = s(x0|Ci) for i = 1, 2, ..., p, we have

(fi(x) + ⟨wi, x⟩)/gi(x) ≤ (fi(x) + s(x|Ci))/gi(x) < (fi(x0) + s(x0|Ci))/gi(x0) = (fi(x0) + ⟨wi, x0⟩)/gi(x0).   (6)

From Theorem 2.3, for each i, 1 ≤ i ≤ p, (fi(·) + ⟨wi, ·⟩)/gi(·) is (F, ᾱi, ρ̄i, d̄i)-convex at x0, i.e.,

(fi(x) + ⟨wi, x⟩)/gi(x) − (fi(x0) + ⟨wi, x0⟩)/gi(x0) ≥ F(x, x0; ᾱi(x, x0)∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + ρ̄i d̄i²(x, x0),

where

ᾱi(x, x0) = αi(x, x0)gi(x0)/gi(x),   ρ̄i = ρi(1 + (fi(x0) + ⟨wi, x0⟩)/gi(x0)),   d̄i(x, x0) = di(x, x0)/√(gi(x)).

Since ᾱi(x, x0) > 0 and F is sublinear, we get

(1/ᾱi(x, x0)) · [(fi(x) + ⟨wi, x⟩)/gi(x) − (fi(x0) + ⟨wi, x0⟩)/gi(x0)] ≥ F(x, x0; ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + ρ̄i d̄i²(x, x0)/ᾱi(x, x0).

It follows from (6) and the above inequality that

F(x, x0; ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + ρ̄i d̄i²(x, x0)/ᾱi(x, x0) < 0,  i = 1, 2, ..., p.

Hence,

Σ_{i=1}^p λi F(x, x0; ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) < 0.   (7)

By the sublinearity of F, we obtain from (4),

Σ_{i=1}^p λi F(x, x0; ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) ≥ F(x, x0; Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))).   (8)

It follows from (7) and (8) that

F(x, x0; Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) < 0.   (9)

Substituting (1) into (9), we obtain

F(x, x0; −Σ_{j=1}^m µj ∇hj(x0)) + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) < 0.   (10)


Again by the sublinearity of F and (5), we have

F(x, x0; −Σ_{j=1}^m µj ∇hj(x0)) + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) + F(x, x0; Σ_{j=1}^m µj ∇hj(x0)) + Σ_{j=1}^m µj ηj cj²(x, x0)/βj(x, x0)
≥ F(x, x0; 0) + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) + Σ_{j=1}^m µj ηj cj²(x, x0)/βj(x, x0)
≥ 0.

It follows from (10) that

F(x, x0; Σ_{j=1}^m µj ∇hj(x0)) + Σ_{j=1}^m µj ηj cj²(x, x0)/βj(x, x0) > 0.   (11)

On the other hand, by the (F, βj, ηj, cj)-convexity of hj, j = 1, 2, ..., m, we obtain

hj(x) − hj(x0) ≥ F(x, x0; βj(x, x0)∇hj(x0)) + ηj cj²(x, x0).

Since µj ≥ 0, βj(x, x0) > 0 and F is sublinear, we have

Σ_{j=1}^m µj [hj(x) − hj(x0)]/βj(x, x0) ≥ F(x, x0; Σ_{j=1}^m µj ∇hj(x0)) + Σ_{j=1}^m µj ηj cj²(x, x0)/βj(x, x0).

Also, since x0 is a feasible solution of (MFP), it follows from (3) that

Σ_{j=1}^m µj [hj(x) − hj(x0)]/βj(x, x0) ≤ 0

and so

F(x, x0; Σ_{j=1}^m µj ∇hj(x0)) + Σ_{j=1}^m µj ηj cj²(x, x0)/βj(x, x0) ≤ 0,

which contradicts (11). Therefore, x0 is a weakly efficient solution of (MFP).

Remark 2.
1. If s(x|C) = 0, then Theorems 3.1 and 3.2 reduce to the corresponding ones in [11], respectively;
2. If F(x, x0; α(x, x0)∇φ(x0)) = α(x, x0)∇φ(x0)η(x, x0) for a certain mapping η : D × D → Rn, then Theorems 3.1 and 3.2 reduce to the corresponding ones in [6], respectively.

4. Duality results. In this section, we propose the following Mond-Weir type dual (MFD) to the primal problem (MFP):

(MFD)   max ( (f1(u) + ⟨w1, u⟩)/g1(u), ..., (fp(u) + ⟨wp, u⟩)/gp(u) )
        s.t.  Σ_{i=1}^p λi ∇((fi(u) + ⟨wi, u⟩)/gi(u)) + Σ_{j=1}^m µj ∇hj(u) = 0,   (12)
              Σ_{j=1}^m µj hj(u) ≥ 0,  wi ∈ Ci, i = 1, 2, ..., p,
              µj ≥ 0, j = 1, 2, ..., m,  λ = (λ1, λ2, ..., λp) ∈ Λ+,

where

Λ+ = {λ ∈ Rp : λ ≥ 0, λT e = 1, e = (1, 1, · · · , 1) ∈ Rp }. In the following, we shall prove the weak duality and strong duality results.
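Weak duality can be illustrated numerically before it is proved. The instance below, with f(x) = x² + 1, g(x) = x + 2, C = [−1, 1], h(x) = −x, and the dual point (u, λ, w, µ) = (0, 1, 1, 0.25), is our own hypothetical example, not from the paper; the sketch checks that the point satisfies the (MFD) constraints and that the primal objective never falls below the dual objective over the feasible set.

```python
import numpy as np

# Hypothetical 1-D instance, assumed for illustration:
#   f(x) = x^2 + 1, g(x) = x + 2, C = [-1, 1] so s(x|C) = |x|,
#   h(x) = -x (primal feasible set: x >= 0).
f, g, s = (lambda x: x**2 + 1), (lambda x: x + 2), (lambda x: abs(x))

u, lam, w, mu = 0.0, 1.0, 1.0, 0.25
dual_value = (f(u) + w * u) / g(u)           # 0.5

# (MFD) constraints at (u, lam, w, mu):
eps = 1e-6
phi = lambda t: (f(t) + w * t) / g(t)
grad = (phi(u + eps) - phi(u - eps)) / (2 * eps)
assert abs(lam * grad + mu * (-1.0)) < 1e-6  # stationarity, grad h = -1
assert mu * (-u) >= 0.0                      # mu^T h(u) >= 0

# Primal objective never dips below the dual objective:
xs = np.linspace(0.0, 50.0, 5000)
print(min((f(x) + s(x)) / g(x) for x in xs) >= dual_value)  # True
```

In this example the primal minimum and the dual objective coincide at 0.5, so weak duality holds with equality at the optimum.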


Theorem 4.1. (Weak Duality) Let x ∈ S and (u, λ, w, µ) be feasible solutions of (MFP) and (MFD), respectively. Assume that fi(·) + ⟨wi, ·⟩ and −gi(·) (i = 1, 2, ..., p) are (F, αi, ρi, di)-convex at u, and hj(·) (j = 1, 2, ..., m) are (F, βj, ηj, cj)-convex at u. If

Σ_{i=1}^p λi ρ̄i d̄i²(x, u)/ᾱi(x, u) + Σ_{j=1}^m µj ηj cj²(x, u)/βj(x, u) ≥ 0,   (13)

where

ᾱi(x, u) = αi(x, u)gi(u)/gi(x),   ρ̄i = ρi(1 + (fi(u) + ⟨wi, u⟩)/gi(u)),   d̄i(x, u) = di(x, u)/√(gi(x)),

then the following cannot hold:

(f(x) + s(x|C))/g(x) < (f(u) + ⟨w, u⟩)/g(u).   (14)

Proof. Since x is a feasible solution of (MFP) and (u, λ, w, µ) is a feasible solution of (MFD), we have

Σ_{j=1}^m µj hj(x) ≤ 0 ≤ Σ_{j=1}^m µj hj(u).

By the (F, βj, ηj, cj)-convexity of hj (j = 1, 2, ..., m) at u, it follows that

hj(x) − hj(u) ≥ F(x, u; βj(x, u)∇hj(u)) + ηj cj²(x, u).

Using µj ≥ 0, βj(x, u) > 0 and the sublinearity of F, we have

Σ_{j=1}^m µj [hj(x) − hj(u)]/βj(x, u) ≥ F(x, u; Σ_{j=1}^m µj ∇hj(u)) + Σ_{j=1}^m µj ηj cj²(x, u)/βj(x, u)

and so

F(x, u; Σ_{j=1}^m µj ∇hj(u)) + Σ_{j=1}^m µj ηj cj²(x, u)/βj(x, u) ≤ 0.   (15)

We now suppose that inequality (14) holds. Since s(x|Ci) ≥ ⟨wi, x⟩, we have, for all i ∈ {1, 2, ..., p},

(fi(x) + ⟨wi, x⟩)/gi(x) ≤ (fi(x) + s(x|Ci))/gi(x) < (fi(u) + ⟨wi, u⟩)/gi(u).   (16)

By Theorem 2.3, we have

(fi(x) + ⟨wi, x⟩)/gi(x) − (fi(u) + ⟨wi, u⟩)/gi(u) ≥ F(x, u; ᾱi(x, u)∇((fi(u) + ⟨wi, u⟩)/gi(u))) + ρ̄i d̄i²(x, u),

where

ᾱi(x, u) = αi(x, u)gi(u)/gi(x),   ρ̄i = ρi(1 + (fi(u) + ⟨wi, u⟩)/gi(u)),   d̄i(x, u) = di(x, u)/√(gi(x)).


Since λ ∈ Λ+ and ᾱi(x, u) > 0,

(λi/ᾱi(x, u)) · [(fi(x) + ⟨wi, x⟩)/gi(x) − (fi(u) + ⟨wi, u⟩)/gi(u)] ≥ F(x, u; λi ∇((fi(u) + ⟨wi, u⟩)/gi(u))) + λi ρ̄i d̄i²(x, u)/ᾱi(x, u).   (17)

It follows from (16) and (17) that

Σ_{i=1}^p F(x, u; λi ∇((fi(u) + ⟨wi, u⟩)/gi(u))) + Σ_{i=1}^p λi ρ̄i d̄i²(x, u)/ᾱi(x, u) < 0.

By the sublinearity of F, we obtain

F(x, u; Σ_{i=1}^p λi ∇((fi(u) + ⟨wi, u⟩)/gi(u))) + Σ_{i=1}^p λi ρ̄i d̄i²(x, u)/ᾱi(x, u) < 0.   (18)

Again by the sublinearity of F, it follows from (12), (13), (15) and (18) that

0 = F(x, u; 0)
= F(x, u; Σ_{i=1}^p λi ∇((fi(u) + ⟨wi, u⟩)/gi(u)) + Σ_{j=1}^m µj ∇hj(u))
≤ F(x, u; Σ_{i=1}^p λi ∇((fi(u) + ⟨wi, u⟩)/gi(u))) + F(x, u; Σ_{j=1}^m µj ∇hj(u))
< −Σ_{i=1}^p λi ρ̄i d̄i²(x, u)/ᾱi(x, u) − Σ_{j=1}^m µj ηj cj²(x, u)/βj(x, u)
≤ 0,

a contradiction. Therefore, (14) cannot hold. This completes the proof.

Theorem 4.2. (Strong Duality) Let x be a weakly efficient solution of (MFP) and assume that there exists z* ∈ Rn such that ⟨∇hj(x), z*⟩ > 0, j ∈ I(x). Then there exist λ ∈ Rp, µ ∈ Rm, and w ∈ C such that (x, λ, w, µ) is a feasible solution for (MFD) and ⟨x, w⟩ = s(x|C). If the assumptions in Theorem 3.2 are satisfied, then (x, λ, w, µ) is a weakly efficient solution of (MFD).

Proof. By Theorem 3.1, there exist λ ∈ Rp, µ ∈ Rm, and w ∈ C such that (x, λ, w, µ) is a feasible solution for (MFD) and ⟨x, w⟩ = s(x|C). If (x, λ, w, µ) is not a weakly efficient solution of (MFD), then there must exist a feasible solution (x*, λ*, w*, µ*) of (MFD) such that

(f(x) + s(x|C))/g(x) < (f(x*) + ⟨w*, x*⟩)/g(x*),

which contradicts the result of Theorem 3.2. This completes the proof.


5. Saddle points. In this section, we characterize the weakly efficient solutions by using saddle points of a vector-valued Lagrangian function of (MFP). To this aim, we need the following concepts.

A vector-valued Lagrangian function L : D × R+^m → Rp for (MFP) is defined by

L(x, µ) = (L1(x, µ), L2(x, µ), ..., Lp(x, µ)),

where

Li(x, µ) = [fi(x) + s(x|Ci) + µ^T h(x)]/gi(x),  i = 1, 2, ..., p.

Definition 5.1. A point (x0, µ0) ∈ D × R+^m is said to be a vector saddle point of the vector-valued Lagrangian function L(x, µ) if

L(x0, µ) ≤ L(x0, µ0) ≤ L(x, µ0),  ∀(x, µ) ∈ D × R+^m.
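Definition 5.1 can be illustrated on toy data. In the sketch below, the instance f(x) = x² + 1, g(x) = x + 2, C = [−1, 1] (so s(x|C) = |x|), h(x) = −x on D = (−2, ∞), together with the candidate (x0, µ0) = (0, 0.25), is a hypothetical example of ours, not from the paper; a grid check confirms both saddle inequalities.

```python
import numpy as np

# Hypothetical 1-D instance, assumed for illustration:
#   L(x, mu) = (f(x) + s(x|C) + mu*h(x)) / g(x) with
#   f(x) = x^2 + 1, g(x) = x + 2, s(x|C) = |x|, h(x) = -x, D = (-2, inf).
def L(x, mu):
    return (x**2 + 1 + abs(x) + mu * (-x)) / (x + 2)

x0, mu0 = 0.0, 0.25            # candidate saddle point (mu0 from the multipliers)

xs = np.linspace(-1.9, 10.0, 2000)   # grid over D
mus = np.linspace(0.0, 10.0, 2000)   # grid over R_+

left  = all(L(x0, mu) <= L(x0, mu0) + 1e-12 for mu in mus)  # L(x0, mu) <= L(x0, mu0)
right = all(L(x0, mu0) <= L(x, mu0) + 1e-12 for x in xs)    # L(x0, mu0) <= L(x, mu0)
print(left and right)  # True
```

Here the left inequality holds with equality (h(x0) = 0, so the µ-term vanishes at x0), and the right inequality reflects that x0 = 0 minimizes L(·, µ0) over D.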

By using the approach of Bector et al. (see the proof of Theorem 8.2 in [2]), it is easy to prove the following theorem, so we omit the proof.

Theorem 5.2. If (x0, µ0) is a vector saddle point of L(x, µ), then x0 is a feasible solution to problem (MFP), µ0^T h(x0) = 0, and x0 is a weakly efficient solution of (MFP).

Theorem 5.3. Let x0 be a weakly efficient solution of (MFP) and let z* ∈ Rn be such that ⟨∇hj(x0), z*⟩ > 0 for j ∈ I(x0). Suppose that fi(·) + ⟨wi, ·⟩ and −gi(·) (i = 1, 2, ..., p) are (F, αi, ρi, di)-convex at x0, and h(·) is (F, β, η, c)-convex at x0. If

Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) + µ0 η c²(x, x0)/β(x, x0) ≥ 0,   (19)

where

ᾱi(x, x0) = αi(x, x0)gi(x0)/gi(x),   ρ̄i = ρi(1 + (fi(x0) + ⟨wi, x0⟩)/gi(x0)),   d̄i(x, x0) = di(x, x0)/√(gi(x)),

then there exists µ0 ∈ R+^m such that (x0, µ0) is a vector saddle point of L(x, µ).

Proof. Since x0 is a weakly efficient solution of (MFP) and z* ∈ Rn is such that ⟨∇hj(x0), z*⟩ > 0 for j ∈ I(x0), Theorem 3.1 implies that there exist λ ∈ Rp with λ > 0, µ0 ∈ R+^m, and wi ∈ Ci, i = 1, 2, ..., p, such that

Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0)) + Σ_{j=1}^m µj0 ∇hj(x0) = 0,
⟨wi, x0⟩ = s(x0|Ci),  wi ∈ Ci,  i = 1, 2, ..., p,
Σ_{j=1}^m µj0 hj(x0) = 0.

Since s(x|Ci) ≥ ⟨wi, x⟩ for all i = 1, 2, ..., p and x ∈ D,

Li(x, µ0) − Li(x0, µ0)
= [fi(x) + s(x|Ci) + µ0^T h(x)]/gi(x) − [fi(x0) + s(x0|Ci) + µ0^T h(x0)]/gi(x0)
≥ [fi(x) + ⟨wi, x⟩ + µ0^T h(x)]/gi(x) − [fi(x0) + ⟨wi, x0⟩ + µ0^T h(x0)]/gi(x0).   (20)


By the (F, αi, ρi, di)-convexity of fi(·) + ⟨wi, ·⟩ and −gi(·), Theorem 2.3 gives that

(fi(x) + ⟨wi, x⟩)/gi(x) − (fi(x0) + ⟨wi, x0⟩)/gi(x0) ≥ F(x, x0; ᾱi(x, x0)∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + ρ̄i d̄i²(x, x0),

where

ᾱi(x, x0) = αi(x, x0)gi(x0)/gi(x),   ρ̄i = ρi(1 + (fi(x0) + ⟨wi, x0⟩)/gi(x0)),   d̄i(x, x0) = di(x, x0)/√(gi(x)).

Since ᾱi(x, x0) > 0 and F is sublinear, we get

Σ_{i=1}^p (λi/ᾱi(x, x0)) · [(fi(x) + ⟨wi, x⟩)/gi(x) − (fi(x0) + ⟨wi, x0⟩)/gi(x0)]
≥ F(x, x0; Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0).   (21)

It follows from the (F, β, η, c)-convexity of h and µ0^T h(x0) = 0 that

(gi(x)/β(x, x0)) · (µ0^T h(x)/gi(x) − µ0^T h(x0)/gi(x0)) = (gi(x)/β(x, x0)) · µ0^T h(x)/gi(x)
= (1/β(x, x0)) · (µ0^T h(x) − µ0^T h(x0))
≥ (1/β(x, x0)) · (F(x, x0; β(x, x0)µ0^T ∇h(x0)) + µ0 η c²(x, x0))
= F(x, x0; µ0^T ∇h(x0)) + µ0 η c²(x, x0)/β(x, x0).   (22)

Let

ε(x) = max{ Σ_{i=1}^p λi/ᾱi(x, x0),  gi(x)/β(x, x0) }.


Then ε(x) > 0 for all x ∈ D. From (19)-(22), we have

ε(x)(Li(x, µ0) − Li(x0, µ0))
≥ Σ_{i=1}^p (λi/ᾱi(x, x0)) · [(fi(x) + ⟨wi, x⟩)/gi(x) − (fi(x0) + ⟨wi, x0⟩)/gi(x0)]
  + (gi(x)/β(x, x0)) · (µ0^T h(x)/gi(x) − µ0^T h(x0)/gi(x0))
≥ F(x, x0; Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0))) + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0)
  + F(x, x0; µ0^T ∇h(x0)) + µ0 η c²(x, x0)/β(x, x0)
≥ F(x, x0; Σ_{i=1}^p λi ∇((fi(x0) + ⟨wi, x0⟩)/gi(x0)) + µ0^T ∇h(x0))
  + Σ_{i=1}^p λi ρ̄i d̄i²(x, x0)/ᾱi(x, x0) + µ0 η c²(x, x0)/β(x, x0)
≥ 0

and so L(x, µ0) ≥ L(x0, µ0). On the other hand, it is easy to see that

Li(x0, µ) − Li(x0, µ0) = µ^T h(x0)/gi(x0) ≤ 0,  i = 1, 2, ..., p.

Therefore, (x0, µ0) is a vector saddle point of L(x, µ). This completes the proof.

Remark 4. Theorems 5.2 and 5.3 extend the corresponding results in [2].

Acknowledgements. We would like to thank the referees very much for their valuable comments and suggestions.

REFERENCES

[1] G. Bigi, Componentwise versus global approaches to nonsmooth multiobjective optimization, J. Ind. Manag. Optim., 1 (2005), 21–32.
[2] C. R. Bector, S. Chandra and I. Husain, Optimality conditions and duality in subdifferentiable multiobjective fractional programming, J. Optim. Theory Appl., 79 (1993), 105–125.
[3] S. Chandra, B. D. Craven and B. Mond, Vector-valued Lagrangian and multiobjective fractional programming duality, Numer. Funct. Anal. Optim., 11 (1990), 239–254.
[4] M. A. Hanson, On sufficiency of the Kuhn-Tucker conditions, J. Math. Anal. Appl., 80 (1981), 545–550.
[5] V. Jeyakumar, Strong and weak invexity in mathematical programming, Meth. Oper. Res., 55 (1985), 109–125.
[6] D. S. Kim, S. J. Kim and M. H. Kim, Optimality and duality for a class of nondifferentiable multiobjective fractional programming problems, J. Optim. Theory Appl., 129 (2006), 131–146.
[7] H. Kuk, G. M. Lee and D. S. Kim, Nonsmooth multiobjective programs with (V, ρ)-invexity, Indian J. Pure Appl. Math., 29 (1998), 405–412.
[8] J. C. Liu, Optimality and duality for multiobjective fractional programming involving nonsmooth pseudoinvex functions, Optimization, 37 (1996), 27–39.
[9] J. C. Liu, Optimality and duality for multiobjective fractional programming involving nonsmooth (F, ρ)-convex functions, Optimization, 36 (1996), 333–346.
[10] Z. A. Liang, H. Huang and P. M. Pardalos, Optimality conditions and duality for a class of nonlinear fractional programming problems, J. Optim. Theory Appl., 110 (2001), 611–619.


[11] Z. A. Liang, H. Huang and P. M. Pardalos, Efficiency conditions and duality for a class of multiobjective fractional programming problems, J. Global Optim., 27 (2003), 447–471.
[12] B. Mond and M. Schechter, Nondifferentiable symmetric duality, Bull. Austral. Math. Soc., 53 (1996), 177–188.
[13] R. N. Mukherjee, Generalized convex duality for multiobjective fractional programs, J. Math. Anal. Appl., 162 (1991), 309–316.
[14] V. Preda, On efficiency and duality for multiobjective programs, J. Math. Anal. Appl., 166 (1992), 365–377.
[15] S. Schaible, Fractional programming, in "Handbook of Global Optimization" (eds. R. Horst and P. M. Pardalos), Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995, 495–608.
[16] T. Weir, A dual for a multiobjective fractional programming problem, J. Inform. Optim. Sci., 7 (1986), 261–269.
[17] X. M. Yang, K. L. Teo and X. Q. Yang, Duality for a class of nondifferentiable multiobjective programming problems, J. Math. Anal. Appl., 252 (2000), 999–1005.