
International Journal of Fuzzy Systems, Vol. 14, No. 1, March 2012


Saddle Point Optimality Conditions in Fuzzy Optimization Problems

Hassan Mishmast Nehi and Ali Daryab

Abstract

The Karush–Kuhn–Tucker (KKT) optimality conditions and saddle point optimality conditions in fuzzy programming problems have been studied in the literature by various authors under different conditions. In this paper, by considering a partial order relation on the set of fuzzy numbers, and convexity with differentiability of fuzzy mappings, we obtain the Fritz John (FJ) constraint qualification and KKT necessary conditions for a fuzzy optimization problem with fuzzy coefficients for the first time. With the help of the KKT optimality conditions, we then discuss the saddle point optimality conditions associated with a fuzzy optimization problem under convexity and differentiability of fuzzy mappings.

Keywords: Comparable fuzzy mapping, Convex fuzzy mapping, Differentiable fuzzy mapping, Fuzzy Lagrange mapping, Fuzzy numbers, Saddle point.

1. Introduction

Fuzzy mathematical programming was developed for formulating real-world problems, which are usually vague, imprecise, and not well defined. Optimization problems involving such imprecise and fuzzy data are categorized as fuzzy optimization problems.

Bellman and Zadeh [2] first proposed the basic concept of fuzzy decision making. Zimmermann [17] formulated fuzzy linear programming problems using both the minimum operator and the product operator. Since then, many papers investigating fuzzy optimization problems have appeared; Slowinski [10] edited a volume on the topic, Delgado et al. [3] summarized its main ideas, and Lai and Hwang [7, 8] gave insightful surveys.

The concept of saddle point for a fuzzy mapping and its optimality conditions have been discussed by researchers such as in [6, 11, 12, 16]. The saddle point optimality conditions for fuzzy optimization problems were obtained in [11] by using a special kind of partial ordering on the set of fuzzy numbers and introducing the fuzzy scalar product and a solution concept that is essentially similar to the notion of Pareto solution in multiobjective optimization problems. In [16], the concept of saddle point for a fuzzy mapping was discussed and the obtained results were applied to the Lagrangian dual of fuzzy programming by using the fuzzy scalar product and the subdifferential of convex fuzzy mappings. In [12], the saddle point optimality conditions were discussed by considering a general partial ordering on the set of fuzzy numbers and introducing two solution concepts for fuzzy optimization problems. In [6], by first considering a total ordering on the set of fuzzy numbers, the fuzzy Lagrangian function of a fuzzy optimization problem was proposed, and then the saddle point of the fuzzy Lagrangian function and its optimality condition were discussed.

In this paper, the saddle point optimality conditions for a fuzzy optimization problem with fuzzy coefficients are investigated, by considering a partial order relation on the set of fuzzy numbers and based on the concepts of convexity and differentiability of fuzzy mappings.

In Section 2 we provide some basic properties of fuzzy numbers. In Section 3 we introduce some concepts of fuzzy differential calculus which will be needed in the sequel; throughout the paper we accept the concepts of differentiability and convexity of fuzzy mappings due to Panigrahi et al. [9]. In Section 4, we first develop the Fritz John constraint qualification for a fuzzy minimization problem, and then we derive the KKT necessary optimality conditions without any convexity assumption; under suitable convexity assumptions the KKT conditions are also sufficient for optimality, and these results are presented in the same section. In Section 5, the saddle point problem associated with a fuzzy optimization problem, and its optimality conditions, are discussed. There we show that a saddle point of the fuzzy Lagrangian mapping associated with the fuzzy optimization problem (if it exists) yields an optimal solution to the fuzzy optimization problem, and that under certain conditions an optimal solution to the fuzzy optimization problem provides a saddle point of the associated fuzzy Lagrangian mapping. The conditions for existence of a saddle point and its relation with the KKT conditions are then derived.

Corresponding Author: Hassan Mishmast Nehi is with the Department of Mathematics, Faculty of Science, Sistan and Baluchestan University, Zahedan, Iran. E-mail: [email protected]
Manuscript received 23 Aug. 2010; revised 4 April 2011; accepted 11 Dec. 2011.

© 2012 TFSA


2. Fuzzy Numbers

In this section we give some definitions and properties of fuzzy numbers.

Definition 2.1 [9]: Let R denote the set of all real numbers. A fuzzy number is a mapping ã : R → [0,1] with the following properties:
1. ã is normal, that is, there exists x₀ ∈ R such that ã(x₀) = 1;
2. ã is upper semi-continuous, that is, the set {x ∈ R : ã(x) ≥ α} is a closed subset of R for every α ∈ [0,1];
3. ã is convex, that is, for all x, y ∈ R and λ ∈ [0,1], ã(λx + (1−λ)y) ≥ min{ã(x), ã(y)};
4. the support of ã, Supp ã = {x ∈ R : ã(x) > 0}, has compact closure cl(Supp ã).

Let F(R) be the set of all fuzzy numbers on R. The α-level set of a fuzzy number ã ∈ F(R), 0 ≤ α ≤ 1, denoted by ã[α], is defined as

ã[α] = {x ∈ R : ã(x) ≥ α} if 0 < α ≤ 1,   ã[0] = cl(Supp ã).

It is clear that the α-level set of a fuzzy number is a closed and bounded interval [a∗(α), a*(α)], where a∗(α) denotes the left-hand endpoint of ã[α] and a*(α) the right-hand endpoint. Each y ∈ R can also be regarded as a fuzzy number ỹ defined by

ỹ(t) = 1 if t = y, and ỹ(t) = 0 if t ≠ y.

From this characterization we see that a fuzzy number is determined by the endpoints of the intervals ã[α]. Thus a fuzzy number ã can be identified with the parameterized triples {(a∗(α), a*(α), α) : 0 ≤ α ≤ 1}. This leads to the following characterization of a fuzzy number in terms of the two "endpoint" functions a∗(α) and a*(α).

Lemma 2.2 [5]: Assume that I = [0,1] and that a∗ : I → R and a* : I → R satisfy the conditions:
1. a∗ is a bounded increasing function;
2. a* is a bounded decreasing function;
3. a∗(1) ≤ a*(1);
4. for 0 < k ≤ 1, lim α→k⁻ a∗(α) = a∗(k) and lim α→k⁻ a*(α) = a*(k);
5. lim α→0⁺ a∗(α) = a∗(0) and lim α→0⁺ a*(α) = a*(0).

Then ã : R → I defined by

ã(x) = sup{α : a∗(α) ≤ x ≤ a*(α)}

is a fuzzy number with parameterization {(a∗(α), a*(α), α) : 0 ≤ α ≤ 1}. Moreover, if ã : R → I is a fuzzy number with parameterization {(a∗(α), a*(α), α) : 0 ≤ α ≤ 1}, then the functions a∗(α) and a*(α) satisfy conditions (1)-(5).

Using the extension principle of Zadeh [13-15], the addition of any two fuzzy numbers ã, b̃ ∈ F(R) is defined by

(ã + b̃)(x) = sup y∈R min{ã(y), b̃(x − y)}, x ∈ R.
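For fuzzy numbers, the sup-min addition above can be computed level-wise on the endpoint functions of Lemma 2.2: the sum has α-cuts [a∗(α)+b∗(α), a*(α)+b*(α)]. A minimal Python sketch (the helper names `triangular` and `add` are ours, not the paper's):

```python
# A fuzzy number is represented by its two endpoint functions
# a_*(alpha), a^*(alpha) of Lemma 2.2; addition acts endpoint-wise.

def triangular(l, m, r):
    """Endpoint functions of the triangular fuzzy number <l, m, r>."""
    return (lambda a: l + a * (m - l), lambda a: r - a * (r - m))

def add(x, y):
    """Level-wise (extension-principle) sum of two fuzzy numbers."""
    xl, xu = x
    yl, yu = y
    return (lambda a: xl(a) + yl(a), lambda a: xu(a) + yu(a))

sl, su = add(triangular(1, 3, 5), triangular(0, 2, 4))
print(sl(0.0), su(0.0))  # support of the sum: 1.0 9.0
print(sl(1.0), su(1.0))  # peak of the sum:    5.0 5.0
```

As expected, ⟨1,3,5⟩ + ⟨0,2,4⟩ = ⟨1,5,9⟩.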

We also define, for every ã ∈ F(R), the scalar multiplication

(kã)(x) = ã(x/k) if k > 0, and (kã)(x) = 0̃(x) if k = 0,

where 0̃ ∈ F(R) and x ∈ R. To deal with subtraction, Goetschel and Voxman [5] define the opposite of a fuzzy number ã by {(−a∗(α), −a*(α), α) : 0 ≤ α ≤ 1}, that is, −ã(x) = (−1)ã(x). We instead accept the subtraction of fuzzy numbers as defined by Dubois and Prade [4]: define the opposite of a fuzzy number ã to be the fuzzy number −ã satisfying (−ã)(x) = ã(−x). In other words, if ã is represented by the parametric form {(a∗(α), a*(α), α) : 0 ≤ α ≤ 1}, then −ã is represented by the corresponding parametric form {(−a*(α), −a∗(α), α) : 0 ≤ α ≤ 1}.

Definition 2.3 [9]: For ã, b̃ ∈ F(R), we say that ã ⪯ b̃ if for each α ∈ [0,1], a∗(α) ≤ b∗(α) and a*(α) ≤ b*(α). If ã ⪯ b̃ and b̃ ⪯ ã, then ã = b̃. We say that ã ≺ b̃ if ã ⪯ b̃ and there exists α₀ ∈ [0,1] such that a∗(α₀) < b∗(α₀) or a*(α₀) < b*(α₀). For ã, b̃ ∈ F(R), if either ã ⪯ b̃ or b̃ ⪯ ã, then we say that ã and b̃ are comparable; otherwise they are non-comparable. Note that ⪯ is a partial order relation on F(R). Sometimes we write b̃ ⪰ ã instead of ã ⪯ b̃.

Definition 2.4: For ã ∈ F(R), we say that ã ⪰ 0̃ if for each α ∈ [0,1], a∗(α) ≥ 0 and a*(α) ≥ 0. Similarly, ã ⪯ 0̃ if −ã ⪰ 0̃.

Definition 2.5 [9]: A fuzzy number ã = (a∗(α), a*(α), α) is said to be a triangular fuzzy number if a∗(1) = a*(1) and, for each α ∈ [0,1], both a∗(α) and a*(α) are linear. We denote a triangular fuzzy number by ⟨a∗(0), a∗(1), a*(0)⟩. For example, for the fuzzy number ã = ⟨1,3,5⟩, we have ã[α] = [1 + 2α, 5 − 2α] for α ∈ [0,1].
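The triangular α-cut formula and the partial order ⪯ of Definition 2.3 can be checked numerically on a finite grid of α-levels (so only a sample of the condition "for each α ∈ [0,1]" is tested). A small sketch with our own helper names:

```python
# Triangular <l, m, r> has alpha-cut [l + a*(m - l), r - a*(r - m)].

def alpha_cut(tri, a):
    l, m, r = tri
    return (l + a * (m - l), r - a * (r - m))

def precedes(x, y, grid=None):
    """Grid-sampled test of x ⪯ y from Definition 2.3."""
    grid = grid or [i / 100 for i in range(101)]
    return all(alpha_cut(x, a)[0] <= alpha_cut(y, a)[0]
               and alpha_cut(x, a)[1] <= alpha_cut(y, a)[1]
               for a in grid)

print(alpha_cut((1, 3, 5), 0.5))       # (2.0, 4.0)
print(precedes((1, 3, 5), (2, 4, 6)))  # True
```

Note that ⪯ is only a partial order: two triangular numbers may fail the test in both directions, which is exactly the non-comparability used later in Example 4.3.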

3. Definitions of Fuzzy Differential Calculus

We quote some elementary definitions of fuzzy differential calculus in this section.

Definition 3.1: ã is said to be an n-dimensional fuzzy vector if the components of ã are n fuzzy numbers; we write ã = (ã₁, ã₂, …, ãₙ)ᵗ. The set of all n-dimensional fuzzy vectors is denoted by Fⁿ(R). The α-cut of a fuzzy vector ã = (ã₁, ã₂, …, ãₙ)ᵗ is defined as ã[α] = (ã₁[α], ã₂[α], …, ãₙ[α])ᵗ, with

a∗(α) = (a₁∗(α), a₂∗(α), …, aₙ∗(α))ᵗ,
a*(α) = (a₁*(α), a₂*(α), …, aₙ*(α))ᵗ.

Definition 3.2: Let ã = (ã₁, ã₂, …, ãₙ)ᵗ ∈ Fⁿ(R) and x = (x₁, x₂, …, xₙ)ᵗ ∈ Rⁿ be an n-dimensional fuzzy vector and an n-dimensional real vector, respectively. We define the product of a fuzzy vector with a real vector as

ãᵗx = Σⁿᵢ₌₁ ãᵢxᵢ,

which is a fuzzy number.

Definition 3.3: For a fuzzy vector ã = (ã₁, ã₂, …, ãₙ)ᵗ ∈ Fⁿ(R), we say that ã ⪰ 0̃ if ãᵢ ⪰ 0̃ for i = 1,2,…,n. Similarly, ã ⪯ 0̃ if ãᵢ ⪯ 0̃ for i = 1,2,…,n.
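The product ãᵗx of Definition 3.2 can be evaluated on α-cuts; the one subtlety is that multiplying an interval by a negative real xᵢ swaps its endpoints. A hedged sketch (helper names are ours):

```python
# alpha-cut arithmetic for a~^t x: scale each component's interval by
# the real coefficient, then add the resulting intervals endpoint-wise.

def scale(cut, k):
    """Interval [l, u] times a real k: [k*l, k*u] if k >= 0, else [k*u, k*l]."""
    l, u = cut
    return (k * l, k * u) if k >= 0 else (k * u, k * l)

def dot(cuts, x):
    """cuts: list of alpha-cuts (l_i, u_i) of a~_i; x: list of reals."""
    lo = sum(scale(c, k)[0] for c, k in zip(cuts, x))
    hi = sum(scale(c, k)[1] for c, k in zip(cuts, x))
    return lo, hi

# a~ = (<1,3,5>, <1,3,5>)^t at alpha = 0, paired with x = (1, -1):
print(dot([(1, 5), (1, 5)], [1, -1]))  # (-4, 4)
```

This endpoint swap is what produces the α-cut [1 + 2α, 5 − 2α]x₁² + [−5 + 2α, −1 − 2α]x₂ in Example 4.3 below.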

Definition 3.4 [9]: Let f̃ : Ω → F(R) be a fuzzy mapping, where Ω ⊆ Rⁿ and F(R) is the set of fuzzy numbers. The α-cut of f̃ at x ∈ Ω, denoted by f̃(x)[α] = [f∗(x,α), f*(x,α)], is a closed and bounded interval, where f∗(x,α) = min{f̃(x)[α]} and f*(x,α) = max{f̃(x)[α]}. Thus f̃ can be understood through the two functions f∗(x,α) and f*(x,α), which map Ω × [0,1] to the set of real numbers R; f∗(x,α) is a bounded increasing function of α, f*(x,α) is a bounded decreasing function of α, and f∗(x,α) ≤ f*(x,α) for each α ∈ [0,1].

Definition 3.5 [9]: Let f̃ : Ω ⊆ Rⁿ → F(R) be a fuzzy mapping. Then f̃ is said to be continuous at x ∈ Ω if for each α ∈ [0,1] both f∗(x,α) and f*(x,α) are continuous functions of x.

Definition 3.6: Let f̃ be a fuzzy mapping from the set of all real numbers R to the set of all fuzzy numbers, and let f̃(t)[α] = [f∗(t,α), f*(t,α)]. Assume that the partial derivatives of f∗(t,α) and f*(t,α) with respect to t ∈ R exist for each α ∈ [0,1] and are denoted by f∗′(t,α) and f*′(t,α), respectively. For t ∈ R, let Γ(t,α) = [f∗′(t,α), f*′(t,α)]. If Γ(t,α) defines the α-cut of a fuzzy number for each t ∈ R, then f̃(t) is said to be differentiable, and we write

f̃′(t)[α] = Γ(t,α) = [f∗′(t,α), f*′(t,α)] for all t ∈ R, α ∈ [0,1].

Throughout this paper we accept the fuzzy differentiability concept for a fuzzy mapping due to Panigrahi et al. [9].

Definition 3.7 [9]: Let f̃ : Ω → F(R) be a fuzzy mapping, where Ω is an open subset of Rⁿ. Let x = (x₁, x₂, …, xₙ) ∈ Ω, and let Dxᵢ, for i = 1,2,…,n, stand for the "partial differentiation" with respect to the i-th variable xᵢ. Assume that for all α ∈ [0,1], f∗(x,α) and f*(x,α) have continuous partial derivatives, so that Dxᵢf∗(x,α) and Dxᵢf*(x,α) are continuous. Define, for i = 1,2,…,n and α ∈ [0,1],

Dxᵢ f̃(x)[α] = [Dxᵢ f∗(x,α), Dxᵢ f*(x,α)].   (1)

If for each i = 1,2,…,n, (1) defines the α-cut of a fuzzy number, then we say that the gradient of the fuzzy mapping f̃ at x exists, and we write

∇̃f̃(x) = (Dx₁ f̃(x), Dx₂ f̃(x), …, Dxₙ f̃(x))ᵗ.

Thus, from Lemma 2.2, sufficient conditions for the gradient of f̃ at x to exist are, for each i = 1,2,…,n and α ∈ [0,1]:
1. the partial derivatives of f∗(x,α) and f*(x,α) with respect to xᵢ exist;
2. Dxᵢf∗(x,α) is a continuous increasing function of α;
3. Dxᵢf*(x,α) is a continuous decreasing function of α;
4. Dxᵢf∗(x,1) ≤ Dxᵢf*(x,1).

Note that ∇̃f̃(x) is an n-dimensional fuzzy vector. A fuzzy mapping f̃ is said to be differentiable at x if ∇̃f̃(x) exists and both f∗(x,α) and f*(x,α) are differentiable at x for each α ∈ [0,1].

Definition 3.8: Ã is said to be a fuzzy matrix if the entries of Ã are fuzzy numbers; we write


Ã = [ãᵢⱼ] m×n, i = 1,2,…,m, j = 1,2,…,n.

The α-cut of the fuzzy matrix Ã is defined entrywise as Ã[α] = [ãᵢⱼ[α]] m×n, with endpoint matrices

A∗(α) = [aᵢⱼ∗(α)] m×n and A*(α) = [aᵢⱼ*(α)] m×n.

Definition 3.9 [9]: Let f̃ : Ω → F(R) be a fuzzy mapping, where Ω is an open subset of Rⁿ. Let x = (x₁, x₂, …, xₙ) ∈ Ω, and let Dxᵢxⱼ, for i,j = 1,2,…,n, stand for the "second-order partial differentiation" with respect to the i-th variable xᵢ and the j-th variable xⱼ. Assume that ∇̃f̃(x) exists and that, for all α ∈ [0,1], f∗(x,α) and f*(x,α) have continuous second-order partial derivatives, so that Dxᵢxⱼf∗(x,α) and Dxᵢxⱼf*(x,α) are continuous (here Dxᵢxⱼf∗(x,α) = Dxⱼxᵢf∗(x,α) and Dxᵢxⱼf*(x,α) = Dxⱼxᵢf*(x,α)). Define, for i,j = 1,2,…,n and α ∈ [0,1],

Dxᵢxⱼ f̃(x)[α] = [Dxᵢxⱼ f∗(x,α), Dxᵢxⱼ f*(x,α)].   (2)

If for each i,j = 1,2,…,n, (2) defines the α-cut of a fuzzy number, then we define the Hessian of the fuzzy mapping (in matrix notation) as

∇̃²f̃(x) = (Dxᵢxⱼ f̃(x)) n×n, i,j = 1,2,…,n.

We say that f̃ is twice differentiable at x if the Hessian of the fuzzy mapping exists and both f∗(x,α) and f*(x,α) are twice differentiable at x. Sufficient conditions for the Hessian of f̃ at x to exist are, for each i,j = 1,2,…,n and α ∈ [0,1]:
1. ∇̃f̃(x) exists;
2. the second-order partial derivatives of f∗(x,α) and f*(x,α) with respect to xᵢ, xⱼ exist;
3. Dxᵢxⱼf∗(x,α) is a continuous increasing function of α;
4. Dxᵢxⱼf*(x,α) is a continuous decreasing function of α;
5. Dxᵢxⱼf∗(x,1) ≤ Dxᵢxⱼf*(x,1).

4. The Optimality Conditions of Fuzzy Minimization Problems

Let Ω be an open set in Rⁿ, let f̃ : Ω → F(R) be a fuzzy mapping, and let g̃ : Ω → Fᵐ(R) be an m-dimensional fuzzy mapping. Consider the following fuzzy programming problem, which we call the fuzzy minimization problem with fuzzy coefficients:

Minimize f̃(x)
subject to g̃(x) ⪯ 0̃,   (3)

where x ∈ Ω, g̃ = (g̃₁, g̃₂, …, g̃ₘ)ᵗ, and g̃(x) ⪯ 0̃ stands for g̃ᵢ(x) ⪯ 0̃ for each i = 1,2,…,m.

Definition 4.1 [9]: Let f̃ : Ω ⊆ Rⁿ → F(R) be a mapping with fuzzy coefficients. Then f̃ is said to be a comparable fuzzy mapping if for each pair x¹ ≠ x² ∈ Ω, f̃(x¹) and f̃(x²) are comparable. Otherwise, f̃ is said to be a non-comparable fuzzy mapping. Let Ε denote the set of all comparable fuzzy mappings.

Example 4.2: Consider the fuzzy mapping f̃ : R² → F(R) defined by f̃(x₁,x₂) = ⟨1,3,5⟩(x₁² − x₂). The α-cut is given by f̃(x₁,x₂)[α] = [1 + 2α, 5 − 2α]·(x₁² − x₂) for each α ∈ [0,1]. Then, for α ∈ [0,1],

f∗(x₁,x₂,α) = (1 + 2α)(x₁² − x₂) and f*(x₁,x₂,α) = (5 − 2α)(x₁² − x₂).

For each pair x = (x₁,x₂) ∈ R², y = (y₁,y₂) ∈ R² with x ≠ y, and for each α ∈ [0,1]: if (x₁² − x₂) ≥ (y₁² − y₂), then f∗(x₁,x₂,α) ≥ f∗(y₁,y₂,α) and f*(x₁,x₂,α) ≥ f*(y₁,y₂,α); if (x₁² − x₂) ≤ (y₁² − y₂), then f∗(x₁,x₂,α) ≤ f∗(y₁,y₂,α) and f*(x₁,x₂,α) ≤ f*(y₁,y₂,α). Therefore, f̃ is a comparable fuzzy mapping.

Example 4.3: Consider the fuzzy mapping g̃ : R² → F(R), where g̃(x₁,x₂) = ⟨1,3,5⟩x₁² − ⟨1,3,5⟩x₂. Then the α-cut for each α ∈ [0,1] is given by

g̃(x₁,x₂)[α] = [1 + 2α, 5 − 2α]x₁² + [−5 + 2α, −1 − 2α]x₂.

Hence g∗(x₁,x₂,α) = (1 + 2α)x₁² + (2α − 5)x₂ and g*(x₁,x₂,α) = (5 − 2α)x₁² − (1 + 2α)x₂. Now let x = (1,2) and y = (0,1). Then g∗(1,2,α) = 6α − 9 and g*(1,2,α) = 3 − 6α, while g∗(0,1,α) = 2α − 5 and g*(0,1,α) = −1 − 2α. Thus g̃(1,2) = ⟨−9,−3,3⟩ and g̃(0,1) = ⟨−5,−3,−1⟩. Clearly,
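The non-comparability claimed in Example 4.3 can be confirmed by evaluating the endpoint functions on a grid of α-levels and testing the partial order of Definition 2.3 in both directions (a grid-based sketch, so only a finite sample of α is checked):

```python
# Endpoints of g(x1, x2) = <1,3,5> x1^2 - <1,3,5> x2 from Example 4.3.

def g_endpoints(x1, x2, a):
    lower = (1 + 2 * a) * x1**2 + (2 * a - 5) * x2
    upper = (5 - 2 * a) * x1**2 - (1 + 2 * a) * x2
    return lower, upper

grid = [i / 10 for i in range(11)]
p = [g_endpoints(1, 2, a) for a in grid]  # g(1,2) = <-9,-3,3>
q = [g_endpoints(0, 1, a) for a in grid]  # g(0,1) = <-5,-3,-1>

p_below_q = all(pl <= ql and pu <= qu for (pl, pu), (ql, qu) in zip(p, q))
q_below_p = all(ql <= pl and qu <= pu for (pl, pu), (ql, qu) in zip(p, q))
print(p[0], q[0])            # supports at alpha = 0
print(p_below_q, q_below_p)  # False False -> non-comparable
```

Neither direction of ⪯ holds on the sampled levels, matching the conclusion that g̃ is a non-comparable fuzzy mapping.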


g̃(1,2) and g̃(0,1) are not comparable. Hence, g̃ is a non-comparable fuzzy mapping.

We now obtain the Fritz John (FJ) constraint qualification and the Karush–Kuhn–Tucker (KKT) optimality conditions for problem (3).

Definition 4.4: Let S = {x ∈ Ω : g̃(x) ⪯ 0̃} be the feasible region of problem (3), and let x₀ ∈ S. The cone of feasible directions of S at x₀, denoted by D, is defined as

D = {d : d ≠ 0, x₀ + λd ∈ S for all λ ∈ (0,δ), for some δ > 0};

each non-zero vector d ∈ D is called a feasible direction.

We now develop a necessary optimality condition for problem (3).

Theorem 4.5: Consider problem (3). Let x₀ be a feasible solution. Suppose that f̃ : Ω ⊆ Rⁿ → F(R) is differentiable at x₀ and f̃ ∈ Ε. If x₀ is a local optimal solution, then F₀ ∩ D = ∅, where F₀ = {d : ∇̃f̃(x₀)ᵗd ≺ 0̃} and D is the cone of feasible directions of S at x₀.

Proof: By contradiction, suppose that there exists a vector d ∈ F₀ ∩ D. Since d ∈ F₀, by the definition of F₀ we have ∇̃f̃(x₀)ᵗd ≺ 0̃, which implies that for each α ∈ [0,1],

∇f∗(x₀,α)ᵗd ≤ 0, ∇f*(x₀,α)ᵗd ≤ 0,

and there exists an α₀ ∈ [0,1] such that ∇f∗(x₀,α₀)ᵗd < 0 or ∇f*(x₀,α₀)ᵗd < 0. Thus, by Theorem 3.12 of Panigrahi et al. [9], there exists δ₁ > 0 such that

f̃(x₀ + λd) ≺ f̃(x₀) for each λ ∈ (0,δ₁).   (4)

Furthermore, since d ∈ D, by Definition 4.4 there exists δ₂ > 0 such that

x₀ + λd ∈ S for each λ ∈ (0,δ₂).   (5)

Now let x̂ = x₀ + λd for λ ∈ (0,δ), where δ = min{δ₁,δ₂}. Then by (4) and (5) we have x̂ ∈ S and f̃(x̂) ≺ f̃(x₀). This contradicts the assumption that x₀ is a local optimal solution of problem (3). The proof is complete.

It is necessary to mention that the necessary condition for local optimality at x₀, namely F₀ ∩ D = ∅, involves the cone of feasible directions D, which is not necessarily defined in terms of the gradients of the mappings involved. This precludes us from converting the geometric optimality condition F₀ ∩ D = ∅ into a more usable algebraic statement involving equations. As Lemma 4.6 below indicates, we can define an open cone G₀ in terms of the gradients of the binding constraints at x₀ such that G₀ ⊆ D. Since F₀ ∩ D = ∅ must hold at x₀, and


since G₀ ⊆ D, F₀ ∩ G₀ = ∅ is also a necessary optimality condition at x₀.

Lemma 4.6: Let x₀ ∈ S be a feasible point, let I = {i : g̃ᵢ(x₀) = 0̃} be the index set of the binding (active) constraints, and assume that g̃ᵢ for i ∈ I are differentiable at x₀ and that g̃ᵢ for i ∉ I are continuous at x₀. Define the set G₀ = {d : ∇̃g̃ᵢ(x₀)ᵗd ≺ 0̃ for each i ∈ I}. Then G₀ ⊆ D.

Proof: Let d ∈ G₀; then d ≠ 0. Since x₀ ∈ Ω and Ω is an open set in Rⁿ, there exists δ₁ > 0 such that

x₀ + λd ∈ Ω for each λ ∈ (0,δ₁).   (6)

Also, since g̃ᵢ(x₀) ≺ 0̃ for i ∉ I, we have for each α ∈ [0,1],

gᵢ∗(x₀,α) ≤ 0, gᵢ*(x₀,α) ≤ 0, for i ∉ I,   (7)

and there exists α₀ ∈ [0,1] such that, for i ∉ I, gᵢ∗(x₀,α₀) < 0 or gᵢ*(x₀,α₀) < 0. Without loss of generality assume that

gᵢ*(x₀,α₀) < 0 for i ∉ I.   (8)

Furthermore, since g̃ᵢ is continuous at x₀ for i ∉ I, by Definition 3.5 both functions gᵢ∗(x,α) and gᵢ*(x,α) are continuous for each α ∈ [0,1], i ∉ I. Thus by (7), there exist δ₂, δ₃ > 0 such that for each α ∈ [0,1],

gᵢ∗(x₀ + λd, α) ≤ 0 for λ ∈ (0,δ₂) and i ∉ I,   (9)
gᵢ*(x₀ + λd, α) ≤ 0 for λ ∈ (0,δ₃) and i ∉ I,   (10)

and also, by continuity of gᵢ*(x,α) for i ∉ I and by (8), there exists δ₄ > 0 such that

gᵢ*(x₀ + λd, α₀) < 0 for λ ∈ (0,δ₄) and i ∉ I.   (11)

From (9)-(11), we get

g̃ᵢ(x₀ + λd) ≺ 0̃ for λ ∈ (0,δ′) and i ∉ I,   (12)

where δ′ = min{δ₂,δ₃,δ₄}. Furthermore, since d ∈ G₀, we have ∇̃g̃ᵢ(x₀)ᵗd ≺ 0̃ for each i ∈ I; by Theorem 3.12 of Panigrahi et al. [9], there exists δ₅ > 0 such that

g̃ᵢ(x₀ + λd) ≺ g̃ᵢ(x₀) = 0̃ for λ ∈ (0,δ₅), i ∈ I.   (13)

Now, from (6), (12) and (13), we get x₀ + λd ∈ S for each λ ∈ (0,δ), where δ = min{δ₁,δ′,δ₅}, and d ∈ D since d ≠ 0. We have shown that d ∈ G₀ implies d ∈ D; hence G₀ ⊆ D. The proof is complete.

Theorem 4.7: Let x₀ be a feasible point of (3), and denote I = {i : g̃ᵢ(x₀) = 0̃}. Furthermore, suppose that f̃ and g̃ᵢ for i ∈ I are differentiable at x₀ and that g̃ᵢ for i ∉ I are continuous at x₀. If x₀ is a local optimal solution, then F₀ ∩ G₀ = ∅, where F₀ = {d : ∇̃f̃(x₀)ᵗd ≺ 0̃} and G₀ = {d : ∇̃g̃ᵢ(x₀)ᵗd ≺ 0̃ for each i ∈ I}.
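As a numerical sanity check (not part of the paper's argument), the geometric condition F₀ ∩ G₀ = ∅ of Theorem 4.7 can be probed for the problem solved later in Example 4.15 at its optimum x₀ = (1/2, 3/2), where only the constraint g̃₂ is active. The gradient formulas in the comments follow from that example's endpoint functions; the α-grid and direction sample are our own discretization:

```python
# At x0 = (1/2, 3/2) the endpoint gradients for Example 4.15 reduce to
#   grad f_*  = -2a*(1, 1)      grad f^*  = -(4 - 2a)*(1, 1)
#   grad g2_* =  4a*(1, 1)      grad g2^* =  (8 - 4a)*(1, 1)
import math

grid = [i / 10 for i in range(11)]

def strictly_negative(vals_lo, vals_hi):
    """Grid test of 'precedes 0~ strictly' from Definition 2.3."""
    ok = all(v <= 0 for v in vals_lo + vals_hi)
    strict = any(v < 0 for v in vals_lo + vals_hi)
    return ok and strict

def in_F0(d):
    s = d[0] + d[1]
    return strictly_negative([-2 * a * s for a in grid],
                             [-(4 - 2 * a) * s for a in grid])

def in_G0(d):
    s = d[0] + d[1]
    return strictly_negative([4 * a * s for a in grid],
                             [(8 - 4 * a) * s for a in grid])

dirs = [(math.cos(2 * math.pi * k / 360), math.sin(2 * math.pi * k / 360))
        for k in range(360)]
print(any(in_F0(d) and in_G0(d) for d in dirs))  # False
```

No sampled direction lies in both cones, consistent with F₀ ∩ G₀ = ∅ at the optimum.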


Proof: The result follows immediately from Theorem 4.5 and Lemma 4.6.

Since both F₀ and G₀ are defined in terms of gradient vectors, we now use the condition F₀ ∩ G₀ = ∅ to develop the constraint qualification credited to Fritz John (FJ). With a mild additional assumption, the conditions reduce to the well-known Karush–Kuhn–Tucker (KKT) optimality conditions.

Theorem 4.8 (The Fritz John constraint qualification): Consider problem (3), where Ω is a non-empty open set in Rⁿ, f̃ : Ω ⊆ Rⁿ → F(R) is a fuzzy mapping, and g̃ : Ω ⊆ Rⁿ → Fᵐ(R) is an m-dimensional fuzzy mapping. Let x₀ be a feasible solution, and denote I = {i : g̃ᵢ(x₀) = 0̃}. Suppose that f̃ and g̃ᵢ for i ∈ I are differentiable at x₀, that g̃ᵢ for i ∉ I are continuous at x₀, and that for each i ∈ I, j = 1,2,…,n and α ∈ [0,1],

Dxⱼ f̃(x₀)[α] = [h∗(α) fⱼ(x₀), h*(α) fⱼ(x₀)],   (14)
Dxⱼ g̃ᵢ(x₀)[α] = [h∗(α) gᵢⱼ(x₀), h*(α) gᵢⱼ(x₀)],   (15)

where in (14), (15) both h∗(α) and h*(α) are functions of α alone, and both are positive (or both negative) for each α ∈ [0,1] at the same time. If x₀ is a local optimal solution, then there exist scalars u₀ and uᵢ for i ∈ I such that

u₀∇f(x₀) + Σ i∈I uᵢ∇gᵢ(x₀) = 0,
u₀, uᵢ ≥ 0, (u₀, u_I) ≠ (0, 0),

where u_I is the vector whose components are uᵢ for i ∈ I. Furthermore, if g̃ᵢ for i ∉ I are also differentiable at x₀, then the foregoing conditions can be written in the following equivalent form:

u₀∇f(x₀) + Σ i=1,…,m uᵢ∇gᵢ(x₀) = 0,
uᵢgᵢ(x₀) = 0 and u₀, uᵢ ≥ 0 for i = 1,2,…,m,
(u₀, u) ≠ (0, 0),

where u is the vector whose components are uᵢ for i = 1,2,…,m.

Proof: Since x₀ is a local optimal solution of problem (3), by Theorem 4.7 there exists no vector d such that ∇̃f̃(x₀)ᵗd ≺ 0̃ and ∇̃g̃ᵢ(x₀)ᵗd ≺ 0̃ for each i ∈ I. Let Ã be the fuzzy matrix whose rows are ∇̃f̃(x₀)ᵗ and ∇̃g̃ᵢ(x₀)ᵗ for i ∈ I. The necessary optimality condition of Theorem 4.7 is then equivalent to the statement that the system Ãd ≺ 0̃ is inconsistent. Now let f(x₀) = (f₁(x₀), f₂(x₀), …, fₙ(x₀)) and gᵢ(x₀) = (gᵢ₁(x₀), gᵢ₂(x₀), …, gᵢₙ(x₀)) for i = 1,2,…,m. By Definition 2.3 and by (14), (15), Ãd ≺ 0̃ is equivalent to Ad < 0, where A = [f(x₀), gᵢ(x₀)]ᵀ for i ∈ I; therefore the system Ad < 0 is also inconsistent. By Gordan's theorem ([1], Theorem 2.4.9), there exists a non-zero vector p ≥ 0 such that Aᵗp = 0, which implies that h∗(α)·Aᵗp = 0 and h*(α)·Aᵗp = 0 for each α ∈ [0,1]. Thus A∗(α)ᵗp = 0 and A*(α)ᵗp = 0, and hence Ãᵗp = 0̃. Denoting the components of p by u₀ and uᵢ for i ∈ I, the first part of the result follows. The equivalent form of the necessary conditions is readily obtained by letting uᵢ = 0 for i ∉ I, and the proof is complete.

Suppose that all the coefficients of problem (3) are the same positive (or negative) fuzzy number ũ; then the problem takes the form

Minimize f̃(x) = ũ·f(x)
subject to g̃(x) = ũ·g(x) ⪯ 0̃.   (16)

Then for each α ∈ [0,1],

f∗(x,α) = u∗(α)f(x), f*(x,α) = u*(α)f(x),

and thus ∇f∗(x,α) = u∗(α)∇f(x) and ∇f*(x,α) = u*(α)∇f(x). Therefore, if f̃ is differentiable,

∇̃f̃(x) = ũ·∇f(x).   (17)

Similarly, if the fuzzy mapping g̃(x) is differentiable, then

∇̃g̃(x) = ũ·∇g(x).   (18)

Now, if x₀ is a local optimal solution of problem (16), then by (17), (18) we have

Ã = [∇̃f̃(x₀); ∇̃g̃(x₀)] = ũ·[∇f(x₀); ∇g(x₀)],

so the assumptions of Theorem 4.8, in particular conditions (14) and (15), are satisfied. Thus there exist scalars u₀, uᵢ for i = 1,2,…,m such that x₀ satisfies the FJ constraint qualification conditions.

Definition 4.9: We say that a collection of fuzzy vectors ṽ₁, ṽ₂, …, ṽₖ ∈ Fⁿ(R) is linearly independent if for each α ∈ [0,1] the left-hand and right-hand α-level vectors v₁∗(α), v₂∗(α), …, vₖ∗(α) and v₁*(α), v₂*(α), …, vₖ*(α) are linearly independent. Otherwise, we call the collection linearly dependent.

Theorem 4.10 (KKT Necessary Optimality Conditions): Let x₀ be a feasible solution of (3), and denote I = {i : g̃ᵢ(x₀) = 0̃}. Suppose that f̃ and g̃ᵢ for i ∈ I are differentiable at x₀, that g̃ᵢ for i ∉ I are continuous at x₀, and that f̃ and g̃ satisfy (14), (15), respectively. Furthermore, suppose that ∇̃g̃ᵢ(x₀) for i ∈ I are linearly independent. If x₀ is a local optimal solution of problem (3), then there exist scalars uᵢ for i ∈ I such that

∇̃f̃(x₀) + Σ i∈I uᵢ ∇̃g̃ᵢ(x₀) = 0̃,

Theorem 4.13 [9]: Let f be a fuzzy mapping on an ~ open convex set Ω ⊆ R n . Let f be differentiable at ~ x 0 ∈ Ω . If f is convex on Ω , then for each x ∈ Ω and α ∈ [0,1] , we have

ui ≥ 0 for i ∈ I In addition to the above assumptions, if g~i for each i ∉ I is also differentiable at x 0 , then foregoing conditions can be written in the following equivalent form: m ~ ~~ ~ ∇f (x ) + u ∇g~ (x ) = 0

f ∗ ( x, α ) − f ∗ ( x 0 , α ) ≥ ∇f ∗ ( x 0 , α ) t ( x − x 0 ) . Theorem 4.14: (KKT Sufficient optimality conditions). Consider the problem (3). Let x 0 ∈ Ω ⊆ R n , let Ω be ~ open, and let f and ~ g be differentiable and convex at

i∈I

0



i

i

0

i =1 ~ ui g~i (x 0 ) = 0 , and ui ≥ 0 for i = 1,2,…, m. Proof: By Theorem 4.8, there exist scalars u 0 and uˆi for i ∈ I , not all equal to zero, such that ~ ~~ ~ (19) u 0 ∇f (x 0 ) + ∑ uˆ i ∇g~i (x 0 ) = 0 ,

i∈I

for i ∈ I , u0 , uˆi ≥ 0 then, we must have u 0 > 0 . Otherwise, if u 0 = 0 , then by (19) we have ∑ uˆi∇gi* (x0 ,α ) = 0, and ∑ uˆi∇gi* (x0 ,α ) = 0, i∈ I

i∈I

for each α ∈ [0,1] , where uˆi for i ∈ I , not all equal to zero. This would contradict the assumption of linear in~ dependence of ∇g~i (x 0 ) for i ∈ I . The first part of the theorem then follows by letting u i = uˆ i u 0 for each i ∈ I . The equivalent form of the necessary conditions follows by letting u i = 0 for i ∉ I . This completes the proof. The scalars u i for i = 1,2, … , m , in the Theorem 4.10, are usually called Lagrangian, or Lagrange multipliers. ~ Remark: Linear independent assumption of ∇g~i (x 0 ) for i ∈ I in Theorem 4.10 is called linear independent constraint qualification. ~ Definition 4.11 [9]: Let f : Ω ⊆ R n → F ( R ) be a fuzzy ~

mapping, where Ω is a convex subset of R n . f is said to be convex on Ω , if for each α ∈ [0,1] both f ∗ (x, α ) , f ∗ ( x, α ) are convex on Ω , that is, for 0 ≤ λ ≤ 1 , x, y ∈ Ω , f ∗ (λx + (1 − λ )y , α ) ≤ λf ∗ (x, α ) + (1 − λ ) f ∗ (y , α ) and f ∗ (λx + (1 − λ ) y , α ) ≤ λf ∗ ( x, α ) + (1 − λ ) f ∗ ( y , α ) . ~ ~ f is said to be concave if − f is convex. ~ Example 4.12: Consider the Fuzzy mapping f in Example 4.2. It can be easily checked that both f* ( x1, x2 ,α ) and f * ( x1 , x2 ,α ) are convex functions for each ~

α ∈ [0,1] . Thus, f is a convex fuzzy mapping on R 2 .

f ∗ ( x, α ) − f ∗ ( x 0 , α ) ≥ ∇ f ∗ ( x 0 , α ) t ( x − x 0 ) ,

~ ~ x 0 . Let f ∈ Ε and the Lagrangian mapping L ( x, u ) be

comparable in terms of x. If (x 0 , u 0 ) satisfies the KKT necessary optimality conditions, then x 0 is a solution of the problem (3). Proof: See [9]. Example 4.15: Consider the following fuzzy minimization problem. ~ Minimize f ( x1, x2 ) = 〈0,2,4〉( x1 − 1)2 + 〈0,2,4〉( x2 − 2)2 , ~ (20) Subject to g~1( x1, x2 ) = 〈0,1,2〉( x12 − x2 )≺0, ~ g~2 ( x1, x2 ) = 〈0,4,8〉( x1 + x2 − 2)≺0, ~ where f , g~1 , g~2 : Ω = {( x1 , x2 ) : x1 > 0, x2 > 0} → F ( R) . ~ Then, the α -cut of f , g~1 , g~2 for each α ∈ [0,1] are given by, f ( x1 , x2 )[α ] = [2α (( x1 − 1) 2 + ( x2 − 2) 2 ), (4 − 2α )(( x1 − 1) 2 + ( x2 − 2) 2 )], g~1 ( x1 , x2 )[α ] = [α ( x12 − x2 ), ( 2 − α )( x12 − x2 )], g~2 ( x1 , x2 )[α ] = [4α ( x1 + x2 − 2), (8 − 4α )( x1 + x2 − 2)]. Thus, for each α ∈ [0,1] ,

f* ( x1 , x2 ,α ) = 2α (( x1 − 1) 2 + ( x2 − 2) 2 ), f * ( x1 , x2 ,α ) = (4 − 2α )(( x1 − 1) 2 + ( x2 − 2) 2 ),

and g1* ( x1 , x2 , α ) = α ( x12 − x2 ),

g1* ( x1 , x2 ,α ) = (2 − α )( x12 − x2 ),

and

g 2* ( x1 , x2 ,α ) = 4α ( x1 + x2 − 2),

g ( x1 , x2 , α ) = (8 − 4α )( x1 + x2 − 2). ~ ~ Let L ( x1 , x2 , u1 , u2 ) = f ( x1 , x2 ) + u1 g~1 ( x1 , x2 ) + u2 g~2 ( x1 , x2 ) . Then, L* ( x1 , x2 , u1 , u2 , α ) = 2α ( x1 − 1) 2 + 2α ( x2 − 2) 2 * 2

+ u1α ( x12 − x2 ) + 4u2α ( x1 + x2 − 2), L* ( x1 , x2 , u1 , u2 , α ) = (4 − 2α )( x1 − 1) 2 + (4 − 2α )( x2 − 2) 2 + u1 (2 − α )( x12 − x2 ) + u2 (8 − 4α )( x1 + x2 − 2), ∇ x1 L* ( x1 , x2 , u1 , u2 , α ) = 4α ( x1 − 1) + 2u1αx1 + 4u2α , ∇ x 2 L* ( x1 , x2 , u1 , u2 ,α ) = 4α ( x2 − 2) − u1α + 4u2α ,

International Journal of Fuzzy Systems, Vol. 14, No. 1, March 2012

18

∇ x1 L* ( x1 , x2 , u1 , u2 , α ) = 2(4 − 2α )( x1 − 1) + 2u1 (2 − α ) x1 + u2 (8 − 4α ), ∇ x2 L ( x1 , x2 , u1 , u2 , α ) = 2(4 − 2α )( x2 − 2) *

− u1(2 − α) + u2(8 − 4α),

Now, for solving the problem (20), by Theorem 4.14 we have to solve

∇x1 L̃(x1, x2, u1, u2) = 0̃ = ∇x2 L̃(x1, x2, u1, u2),
u1 g̃1(x1, x2) = 0̃ = u2 g̃2(x1, x2),
g̃1(x1, x2) ≺ 0̃, g̃2(x1, x2) ≺ 0̃, u1, u2 ≥ 0.

But the above system is equivalent to

∇x1 L_*(x1, x2, u1, u2, α) = 0 = ∇x2 L_*(x1, x2, u1, u2, α),
∇x1 L^*(x1, x2, u1, u2, α) = 0 = ∇x2 L^*(x1, x2, u1, u2, α),
u1 g1_*(x1, x2) = 0 = u1 g1^*(x1, x2),
u2 g2_*(x1, x2) = 0 = u2 g2^*(x1, x2),
g1_*(x1, x2) ≤ 0, g1^*(x1, x2) ≤ 0, g2_*(x1, x2) ≤ 0, g2^*(x1, x2) ≤ 0,
u1, u2 ≥ 0.

That is, to solve

4α(x1 − 1) + 2α u1 x1 + 4α u2 = 0,  (21)
2(4 − 2α)(x1 − 1) + 2(2 − α) u1 x1 + (8 − 4α) u2 = 0,  (22)
4α(x2 − 2) − α u1 + 4α u2 = 0,  (23)
2(4 − 2α)(x2 − 2) − (2 − α) u1 + (8 − 4α) u2 = 0,  (24)
u1 α(x1² − x2) = 0 = u1 (2 − α)(x1² − x2),  (25)
4 u2 α(x1 + x2 − 2) = 0 = u2 (8 − 4α)(x1 + x2 − 2),  (26)
α(x1² − x2) ≤ 0,  (27)
(2 − α)(x1² − x2) ≤ 0,  (28)
4α(x1 + x2 − 2) ≤ 0,  (29)
(8 − 4α)(x1 + x2 − 2) ≤ 0,  (30)
u1, u2 ≥ 0.  (31)

Solving (21)–(31), we get x1 = 1/2, x2 = 3/2, u1 = 0 and u2 = 1/2. Thus, the minimum value of the problem is found to be (1/2)⟨0, 2, 4⟩.

5. Saddle Point Optimality Conditions in Fuzzy Optimization Problems

Consider the problem

Minimize f̃(x)
subject to g̃(x) ≺ 0̃  (32)

where f̃, g̃ are differentiable fuzzy mappings of x ∈ Ω ⊆ R^n. It can be shown that, under suitable assumptions, the fuzzy optimization problem (32) can be transformed into an equivalent fuzzy saddle point problem.

Let φ̃(x, u) be a comparable fuzzy mapping in terms of x ∈ X ⊆ Ω and u ∈ U ⊆ R^m. A point (x0, u0) is said to be a saddle point of φ̃(x, u) if, for all x ∈ X and u ∈ U,

φ̃(x0, u) ≺ φ̃(x0, u0) ≺ φ̃(x, u0).

In other words, a saddle point of the fuzzy mapping φ̃(x, u) is a point (x0, u0) that minimizes the fuzzy mapping φ̃(x, u0) over X for fixed u0 ∈ U and maximizes the fuzzy mapping φ̃(x0, u) over U for fixed x0 ∈ X, simultaneously. ṽ = φ̃(x0, u0) is then called a fuzzy saddle value of φ̃(x, u).

The Lagrangian fuzzy mapping associated with the fuzzy optimization problem (32) is given by

L̃(x, u) = f̃(x) + u^t g̃(x),

where u ∈ R^m is called the vector of Lagrange multipliers and g̃(x) = (g̃1(x), g̃2(x), …, g̃m(x))^t.

The corresponding fuzzy saddle point problem is to find a pair (x0, u0) such that, for all x ∈ Ω ⊆ R^n and 0 ≤ u ∈ R^m,

f̃(x0) + u^t g̃(x0) ≺ f̃(x0) + u0^t g̃(x0) ≺ f̃(x) + u0^t g̃(x).  (33)

Lemma 5.1: Let L̃(x, u) be the fuzzy Lagrangian mapping associated with the fuzzy optimization problem (32). Let L̃(x, u) ∈ Ε. Suppose that (x0, u0) is a saddle point of L̃(x, u). Then, the following results hold true:
1. x0 is a feasible solution to the fuzzy minimization problem (32).
2. u0^t g̃(x0) = 0̃.

Proof: 1. Since (x0, u0) is a saddle point of the fuzzy mapping L̃(x, u), from (33) we have, for all x ∈ Ω, α ∈ [0, 1] and u ≥ 0,

f_*(x0, α) + u^t g_*(x0, α) ≤ f_*(x0, α) + u0^t g_*(x0, α) ≤ f_*(x, α) + u0^t g_*(x, α),  (34)

and

f^*(x0, α) + u^t g^*(x0, α) ≤ f^*(x0, α) + u0^t g^*(x0, α) ≤ f^*(x, α) + u0^t g^*(x, α).  (35)

From the left-hand inequalities in (34), (35), we have, for all u ≥ 0 and α ∈ [0, 1],

u^t g_*(x0, α) ≤ u0^t g_*(x0, α),  (36)
u^t g^*(x0, α) ≤ u0^t g^*(x0, α),  (37)

and hence the inequalities (36), (37) hold true for u = u0 + e_i, where e_i is the ith unit m-vector. Thus, we have, for each α ∈ [0, 1] and i = 1, 2, …, m,

g_i_*(x0, α) ≤ 0 and g_i^*(x0, α) ≤ 0.

Repeating this process for all i, we get, for each α ∈ [0, 1],

g_*(x0, α) ≤ 0 and g^*(x0, α) ≤ 0.  (38)
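As a numeric sanity check (not part of the paper), the reported solution can be substituted back into the α-cut KKT system. The residual functions below transcribe (21)–(31) directly; note that equation (22) as printed appears to drop the multiplier u2 from its last term, so the check below includes it, by symmetry with (24).

```python
# Numeric check: x1 = 1/2, x2 = 3/2, u1 = 0, u2 = 1/2 should satisfy
# the alpha-cut KKT system (21)-(31) for every alpha in [0, 1].

def kkt_residuals(x1, x2, u1, u2, a):
    """Left-hand sides of the equalities (21)-(26) and the
    constraint values of (27)-(30), transcribed from the text."""
    eqs = [
        4*a*(x1 - 1) + 2*a*u1*x1 + 4*a*u2,                      # (21)
        2*(4 - 2*a)*(x1 - 1) + 2*(2 - a)*u1*x1 + (8 - 4*a)*u2,  # (22)
        4*a*(x2 - 2) - a*u1 + 4*a*u2,                           # (23)
        2*(4 - 2*a)*(x2 - 2) - (2 - a)*u1 + (8 - 4*a)*u2,       # (24)
        u1*a*(x1**2 - x2),                                      # (25), both sides
        u1*(2 - a)*(x1**2 - x2),
        4*u2*a*(x1 + x2 - 2),                                   # (26), both sides
        u2*(8 - 4*a)*(x1 + x2 - 2),
    ]
    ineqs = [
        a*(x1**2 - x2),                                         # (27)
        (2 - a)*(x1**2 - x2),                                   # (28)
        4*a*(x1 + x2 - 2),                                      # (29)
        (8 - 4*a)*(x1 + x2 - 2),                                # (30)
    ]
    return eqs, ineqs

for a in [0.0, 0.25, 0.5, 0.75, 1.0]:
    eqs, ineqs = kkt_residuals(0.5, 1.5, 0.0, 0.5, a)
    assert all(abs(e) < 1e-12 for e in eqs)
    assert all(g <= 1e-12 for g in ineqs)
print("system (21)-(31) holds at (1/2, 3/2, 0, 1/2) for all sampled alpha")
```

Every equality residual vanishes identically in α, which is why the single point (1/2, 3/2, 0, 1/2) solves the system for all α simultaneously.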
From (38), we conclude that g̃(x0) ≺ 0̃, and hence x0 is a feasible solution of the fuzzy minimization problem (32).

2. Since u0 ≥ 0, from (38) we have, for each α ∈ [0, 1],

u0^t g_*(x0, α) ≤ 0 and u0^t g^*(x0, α) ≤ 0.  (39)

But, from (36), (37) with u = 0, we have for each α ∈ [0, 1]

u0^t g_*(x0, α) ≥ 0 and u0^t g^*(x0, α) ≥ 0.  (40)

By (39), (40), we get for each α ∈ [0, 1]

u0^t g_*(x0, α) = 0 and u0^t g^*(x0, α) = 0.  (41)

(41) implies that u0^t g̃(x0) = 0̃. This completes the proof.

Theorem 5.2: Let L̃(x, u) be the fuzzy Lagrangian mapping associated with the fuzzy optimization problem (32), where f̃ ∈ Ε. Let L̃(x, u) ∈ Ε. If (x0, u0) is a saddle point of L̃(x, u), then x0 is an optimal solution of the problem (32).

Proof: By Lemma 5.1, part (1), x0 is a feasible solution of the problem (32). Therefore, it is enough to show that f̃(x0) ≺ f̃(x) for each feasible solution x of the problem (32). By Lemma 5.1, part (2), the right-hand inequality of (33) becomes

f̃(x0) ≺ f̃(x) + u0^t g̃(x),

which implies that, for each feasible point x and for α0 = 1,

f_*(x0, 1) ≤ f_*(x, 1) + u0^t g_*(x, 1).  (42)

Now, since u0 ≥ 0, for each feasible point x we have u0^t g̃(x) ≺ 0̃, which implies that u0^t g_*(x, 1) ≤ 0. Hence, by (42) we get f_*(x0, 1) ≤ f_*(x, 1). Since f̃ ∈ Ε, we conclude that f̃(x0) ≺ f̃(x) for each feasible point x. The proof is complete.
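In the crisp special case, where every coefficient is a real number so that the lower and upper α-cut functions coincide, Lemma 5.1 and Theorem 5.2 reduce to the classical Lagrangian saddle point result. A minimal sketch, using a toy problem of our own choosing (minimize (x − 1)² subject to x ≤ 0, which is not from the paper):

```python
# Crisp illustration of Lemma 5.1 and Theorem 5.2:
# minimize (x-1)^2 subject to g(x) = x <= 0.
# Lagrangian L(x, u) = (x-1)^2 + u*x; claimed saddle point (x0, u0) = (0, 2).

def L(x, u):
    return (x - 1.0)**2 + u * x

x0, u0 = 0.0, 2.0
xs = [i / 100.0 - 2.0 for i in range(401)]   # grid on [-2, 2]
us = [i / 100.0 for i in range(401)]         # grid on [0, 4]

# Saddle inequalities: L(x0, u) <= L(x0, u0) <= L(x, u0) for all x, u >= 0.
assert all(L(x0, u) <= L(x0, u0) + 1e-12 for u in us)
assert all(L(x, u0) >= L(x0, u0) - 1e-12 for x in xs)

# Lemma 5.1 at the saddle point: x0 is feasible and u0 * g(x0) = 0.
assert x0 <= 0 and u0 * x0 == 0
# Theorem 5.2: x0 is optimal -- no feasible x does better.
assert all((x - 1.0)**2 >= (x0 - 1.0)**2 for x in xs if x <= 0)
print("saddle point verified; optimal value", L(x0, u0))
```

The fuzzy statements above replay exactly this argument on the lower and upper α-cut functions for every α ∈ [0, 1].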
Theorem 5.3: If f̃1, f̃2, …, f̃m are convex fuzzy mappings defined on a non-empty set Ω ⊆ R^n, then λ1 f̃1 + λ2 f̃2 + ⋯ + λm f̃m is a convex fuzzy mapping on Ω for λi ≥ 0 (i = 1, 2, …, m).

Proof: Since f̃1, f̃2, …, f̃m are convex fuzzy mappings, by Definition 4.11 the real-valued functions f1_*(x, α), f2_*(x, α), …, fm_*(x, α) and f1^*(x, α), f2^*(x, α), …, fm^*(x, α) are convex for each α ∈ [0, 1]. It can easily be shown that the functions ∑_{i=1}^{m} λi f_i_*(x, α) and ∑_{i=1}^{m} λi f_i^*(x, α) are convex on Ω for each α ∈ [0, 1] and λi ≥ 0 (i = 1, 2, …, m). Thus, the fuzzy mapping λ1 f̃1 + λ2 f̃2 + ⋯ + λm f̃m is also a convex fuzzy mapping, and the proof is complete.
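The α-cut reduction in the proof of Theorem 5.3 rests on the classical fact that a nonnegative combination of convex real-valued functions is convex. A quick spot check with two hypothetical convex α-cut functions (illustration only, not from the paper):

```python
# Spot check: for convex f1, f2 and weights lam1, lam2 >= 0, the
# combination h = lam1*f1 + lam2*f2 satisfies midpoint convexity
#   h((x + y)/2) <= (h(x) + h(y))/2.

import random

f1 = lambda x: (x - 1.0)**2         # convex
f2 = lambda x: abs(x) + 0.5 * x*x   # convex
lam1, lam2 = 0.3, 2.0               # arbitrary nonnegative weights

def h(x):
    return lam1 * f1(x) + lam2 * f2(x)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert h((x + y) / 2) <= (h(x) + h(y)) / 2 + 1e-9
print("midpoint convexity holds on 1000 random pairs")
```

Applying this fact to the lower and upper α-cut families separately, uniformly in α, is all the fuzzy statement requires.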
Theorem 5.4: Consider the problem (32), where Ω is a non-empty open convex set in R^n. Let the differentiable fuzzy mappings f̃, g̃ be convex on Ω, let x0 be a feasible solution of the problem (32), and let f̃, g̃ satisfy (14), (15), respectively. Furthermore, suppose that the fuzzy Lagrangian mapping L̃(x, u) associated with the problem (32) is comparable. If x0 is an optimal solution to the problem (32) and the linear independence constraint qualification holds true, then there exists a u0 ≥ 0 such that (x0, u0) is a saddle point of L̃(x, u).

Proof: Since x0 is an optimal solution to the problem (32) and the linear independence constraint qualification is satisfied, the assumptions of Theorem 4.10 hold true and the KKT conditions are applicable, which implies that there exists a u0 ≥ 0 such that

∇f̃(x0) + u0^t ∇g̃(x0) = 0̃,  (43)
u0^t g̃(x0) = 0̃.  (44)

Now, by Theorem 5.3, the fuzzy mapping L̃(x, u0) = f̃(x) + u0^t g̃(x) is convex on Ω. Thus, by the differentiability of the fuzzy mappings f̃, g̃ and by Theorem 4.13, we have for all x ∈ Ω

L_*(x, u0, 1) ≥ L_*(x0, u0, 1) + ∇x L_*(x0, u0, 1)^t (x − x0).

But, by (43), we have ∇x L_*(x0, u0, 1) = ∇f_*(x0, 1) + u0^t ∇g_*(x0, 1) = 0. Hence L_*(x, u0, 1) ≥ L_*(x0, u0, 1), and since L̃(x, u) is a comparable fuzzy mapping,

L̃(x0, u0) ≺ L̃(x, u0).  (45)

Now, since L_*(x0, u, 1) is linear in terms of u,

L_*(x0, u, 1) = L_*(x0, u0, 1) + ∇u L_*(x0, u0, 1)^t (u − u0).  (46)

But we have ∇u L_*(x0, u0, 1) = g_*(x0, 1), and also, from (44), u0^t g_*(x0, 1) = 0; thus

∇u L_*(x0, u0, 1)^t (u − u0) = ∇u L_*(x0, u0, 1)^t u = g_*(x0, 1)^t u.

Furthermore, since u ≥ 0 and g_*(x0, 1) ≤ 0, by (46) we get L_*(x0, u, 1) ≤ L_*(x0, u0, 1). Since, by assumption, L̃(x, u) ∈ Ε,

L̃(x0, u) ≺ L̃(x0, u0).  (47)

From (45), (47), we conclude that L̃(x0, u) ≺ L̃(x0, u0) ≺ L̃(x, u0), that is, (x0, u0) is a saddle point of L̃(x, u). The proof is complete.

We now derive the conditions for the existence of a saddle point of a fuzzy mapping φ̃(x, u), x ∈ R^n, 0 ≤ u ∈ R^m.
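The key step of this proof, equation (43) at the core level α = 1, can be spot-checked by finite differences on the crisp core Lagrangian of the running example. The core functions below are inferred from the system (21)–(31); they are not stated in this closed form in the paper.

```python
# Finite-difference check of (43) at alpha = 1: the gradient of the
# core Lagrangian L(x, u0) vanishes at x0 = (1/2, 3/2), u0 = (0, 1/2).

def L_core(x1, x2, u1=0.0, u2=0.5):
    f  = 2*((x1 - 1)**2 + (x2 - 2)**2)   # f at alpha = 1 (inferred)
    g1 = x1**2 - x2                       # g1 at alpha = 1 (inferred)
    g2 = 4*(x1 + x2 - 2)                  # g2 at alpha = 1 (inferred)
    return f + u1*g1 + u2*g2

h = 1e-6
x1, x2 = 0.5, 1.5
d1 = (L_core(x1 + h, x2) - L_core(x1 - h, x2)) / (2*h)  # dL/dx1
d2 = (L_core(x1, x2 + h) - L_core(x1, x2 - h)) / (2*h)  # dL/dx2
assert abs(d1) < 1e-6 and abs(d2) < 1e-6
print("central differences at x0:", round(d1, 8), round(d2, 8))
```

Because L_core is quadratic, the central differences are exact up to floating-point noise, confirming the stationarity used in the proof.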
Theorem 5.5: (Necessity) Let φ̃(x, u) be a comparable fuzzy mapping, and let (x0, u0) be a saddle point of φ̃(x, u) for u ≥ 0. Suppose that, for each i = 1, 2, …, m and α ∈ [0, 1], φ̃(x0, u) satisfies the following condition:

[∂φ̃(x0, u0)/∂u_i][α] = [h_*(α) φ_i(x0, u0), h^*(α) φ_i(x0, u0)],  (48)

where both h_*(α), h^*(α) are functions of α, and both are positive (or both negative) for each α ∈ [0, 1] at the same time. If φ̃(x, u) is a differentiable fuzzy mapping, then (x0, u0) satisfies the following conditions:

∇x φ̃(x0, u0) = 0̃,
∇u φ̃(x0, u0) ≺ 0̃,
∇u φ̃(x0, u0)^t u0 = 0̃,  (49)
u0 ≥ 0.

Proof: Since (x0, u0) is a saddle point of φ̃(x, u), φ̃(x, u0) has a local minimum at x0, and since φ̃(x, u) is differentiable, by Theorem 3.13 (Panigrahi, [9]) we have ∇x φ̃(x0, u0) = 0̃. Also, since (x0, u0) is a saddle point of the fuzzy mapping φ̃(x, u) for u ≥ 0, we have u0 ≥ 0. But, since (x0, u0) is a saddle point of φ̃(x, u), the point (x0, u0) maximizes the fuzzy mapping φ̃(x0, u) subject to u ≥ 0. In other words, u0 is the optimal solution to the following constrained fuzzy maximization problem:

Maximize φ̃(x0, u)
subject to u ≥ 0.  (50)

Since φ̃(x0, u) satisfies (48) and the constraints of the problem (50) are linear, u0 satisfies the KKT necessary conditions; that is, there exists a vector v ≥ 0 such that

∇u φ̃(x0, u0) + v = 0̃,
v^t u0 = 0,  (51)
v, u0 ≥ 0.

From (51), it is easy to conclude that ∇u φ̃(x0, u0) ≺ 0̃ and ∇u φ̃(x0, u0)^t u0 = 0̃. The proof is complete.
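For the Lagrangian of the running example, the conditions (49) can be verified on the α-cuts at (x0, u0) = (1/2, 3/2, 0, 1/2): the x-gradients of both the lower and the upper α-cut Lagrangians vanish, the u-gradient entries (the constraint α-cut values) are nonpositive, and their inner product with u0 is zero. The gradient components below are transcribed from (21)–(24) and (27)–(30), including the u2 multiplier that (22) as printed appears to drop.

```python
# Check of conditions (49) on alpha-cuts for the Lagrangian of
# Example 4.15 at (x0, u0) = (1/2, 3/2, 0, 1/2).

x1, x2, u1, u2 = 0.5, 1.5, 0.0, 0.5

for k in range(11):
    a = k / 10.0
    # nabla_x of the lower and upper Lagrangian alpha-cut functions
    grad_x = [
        4*a*(x1 - 1) + 2*a*u1*x1 + 4*a*u2,
        4*a*(x2 - 2) - a*u1 + 4*a*u2,
        2*(4 - 2*a)*(x1 - 1) + 2*(2 - a)*u1*x1 + (8 - 4*a)*u2,
        2*(4 - 2*a)*(x2 - 2) - (2 - a)*u1 + (8 - 4*a)*u2,
    ]
    # nabla_u entries = alpha-cut values of g1, g2 at x0
    grad_u_lower = [a*(x1**2 - x2), 4*a*(x1 + x2 - 2)]
    grad_u_upper = [(2 - a)*(x1**2 - x2), (8 - 4*a)*(x1 + x2 - 2)]

    assert all(abs(v) < 1e-12 for v in grad_x)                    # grad_x L = 0
    assert all(v <= 1e-12 for v in grad_u_lower + grad_u_upper)   # grad_u L <= 0
    assert abs(u1*grad_u_lower[0] + u2*grad_u_lower[1]) < 1e-12   # complementarity
    assert abs(u1*grad_u_upper[0] + u2*grad_u_upper[1]) < 1e-12
print("conditions (49) hold at (1/2, 3/2, 0, 1/2) for all sampled alpha")
```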
Theorem 5.6: (Sufficiency) Let φ̃(x, u) be a comparable fuzzy mapping, and let φ̃(x, u) be differentiable at (x0, u0). Suppose that the fuzzy mapping φ̃(x, u0) is convex at x0. If φ̃(x0, u) is concave at u0 and satisfies (48), then the conditions (49) are both necessary and sufficient for (x0, u0) to be a saddle point of φ̃(x, u).

Proof: Since the fuzzy mapping φ̃(x, u) is differentiable at (x0, u0), both functions φ_*(x, u, α), φ^*(x, u, α) are also differentiable at (x0, u0) for each α ∈ [0, 1]. Therefore, since the fuzzy mapping φ̃(x, u0) is convex at x0, we have by Theorem 4.13, for α0 = 1,

φ_*(x, u0, 1) ≥ φ_*(x0, u0, 1) + ∇x φ_*(x0, u0, 1)^t (x − x0).  (52)

But, since ∇x φ̃(x0, u0) = 0̃, we have for each α ∈ [0, 1]

∇x φ_*(x0, u0, α) = 0, ∇x φ^*(x0, u0, α) = 0.  (53)

Thus, by (52), (53), we get

φ_*(x, u0, 1) ≥ φ_*(x0, u0, 1).  (54)

But ∇u φ̃(x0, u0) ≺ 0̃ implies that

∇u φ_*(x0, u0, α) ≤ 0, ∇u φ^*(x0, u0, α) ≤ 0  (55)

for each α ∈ [0, 1]. Also, ∇u φ̃(x0, u0)^t u0 = 0̃ implies that

∇u φ_*(x0, u0, α)^t u0 = 0, ∇u φ^*(x0, u0, α)^t u0 = 0  (56)

for each α ∈ [0, 1]. Therefore, since u ≥ 0, by (55), (56) we get

∇u φ_*(x0, u0, 1)^t (u − u0) ≤ 0.  (57)

Now, from (54), (57), we get

φ_*(x, u0, 1) ≥ φ_*(x0, u0, 1) ≥ φ_*(x0, u0, 1) + ∇u φ_*(x0, u0, 1)^t (u − u0).  (58)

But, since the fuzzy mapping φ̃(x0, u) is concave at u0, by Definition 2.3 and Theorem 4.13 we have, for α0 = 1 and all u ≥ 0,

φ_*(x0, u0, 1) + ∇u φ_*(x0, u0, 1)^t (u − u0) ≥ φ_*(x0, u, 1).  (59)

Thus, by (58), (59), we have for all u ≥ 0

φ_*(x0, u, 1) ≤ φ_*(x0, u0, 1) ≤ φ_*(x, u0, 1).

Hence, since φ̃(x, u) is a comparable fuzzy mapping, we get

φ̃(x0, u) ≺ φ̃(x0, u0) ≺ φ̃(x, u0).

Thus, (x0, u0) is a saddle point of the fuzzy mapping φ̃(x, u). The proof is complete.

Example 5.7: Consider the same problem as in Example 4.15. The fuzzy Lagrangian mapping is then given by

L̃(x1, x2, u1, u2) = f̃(x1, x2) + u1 g̃1(x1, x2) + u2 g̃2(x1, x2).

We are going to show that (x1, x2, u1, u2) = (1/2, 3/2, 0, 1/2) is a saddle point of the fuzzy Lagrangian mapping L̃(x, u). From Example 4.15, (x1, x2) = (1/2, 3/2) is an optimal solution of the problem (20). It is not hard to see that the other conditions of Theorem 5.4 hold true for this problem. Therefore, there exists a vector u0 = (u1, u2)^t ≥ 0 such that (x1, x2, u1, u2) = (1/2, 3/2, u1, u2) is a saddle point of the problem (20). But, from Example 4.15, it can easily be calculated that the point (x1, x2, u1, u2) = (1/2, 3/2, 0, 1/2) satisfies the conditions (49). Thus, by Theorem 5.6, (1/2, 3/2, 0, 1/2) is a saddle point of the fuzzy Lagrangian mapping L̃(x, u).
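Example 5.7 can also be checked directly against the saddle inequality (33) at the core level α = 1, where the α-cut functions collapse to crisp ones. The core functions below are read off the system (21)–(31) (an inference, not stated in this form in the paper): f_*(x, 1) = 2((x1 − 1)² + (x2 − 2)²), g1_*(x, 1) = x1² − x2, g2_*(x, 1) = 4(x1 + x2 − 2). A hedged grid sketch:

```python
# Grid check of the saddle inequality (33) at alpha = 1 for Example 5.7.

def L1(x1, x2, u1, u2):
    f  = 2*((x1 - 1)**2 + (x2 - 2)**2)   # f at alpha = 1 (inferred)
    g1 = x1**2 - x2                       # g1 at alpha = 1 (inferred)
    g2 = 4*(x1 + x2 - 2)                  # g2 at alpha = 1 (inferred)
    return f + u1*g1 + u2*g2

x0 = (0.5, 1.5)
u0 = (0.0, 0.5)
L0 = L1(*x0, *u0)                         # saddle value

grid = [i / 20.0 - 1.0 for i in range(81)]   # [-1, 3] in steps of 0.05
# L(x0, u) <= L(x0, u0) for all u >= 0
assert all(L1(*x0, a, b) <= L0 + 1e-12
           for a in grid for b in grid if a >= 0 and b >= 0)
# L(x0, u0) <= L(x, u0) for all x
assert all(L1(a, b, *u0) >= L0 - 1e-12 for a in grid for b in grid)
print("saddle inequalities hold on the grid; saddle value =", L0)
```

The saddle value 1.0 at α = 1 matches the core of the minimum value (1/2)⟨0, 2, 4⟩ reported in Example 4.15.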
6. Conclusion

The Karush-Kuhn-Tucker (KKT) optimality conditions and saddle point optimality conditions in fuzzy programming problems with fuzzy coefficients are suggested in this paper by introducing a partial order relation on the set of fuzzy numbers, together with convexity and differentiability of fuzzy mappings. We have obtained the Fritz John (FJ) constraint qualification and the KKT necessary conditions for a fuzzy optimization problem with fuzzy coefficients, for the first time. With the help of the KKT optimality conditions, we then discussed the saddle point optimality conditions associated with a fuzzy optimization problem under convexity and differentiability of fuzzy mappings.
Acknowledgment

The authors thank the two anonymous reviewers for their useful remarks and valuable suggestions, which helped to improve the first version of the paper.

References

[1] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, 2nd ed., John Wiley and Sons, 2004.
[2] R. E. Bellman and L. A. Zadeh, "Decision making in a fuzzy environment," Management Science, vol. 17B, pp. 141-164, 1970.
[3] M. Delgado, J. Kacprzyk, J. L. Verdegay, and M. A. Villa, Fuzzy Optimization: Recent Advances, Physica-Verlag, New York, 1994.
[4] D. Dubois and H. Prade, "Operations on fuzzy numbers," International Journal of Systems Science, vol. 9, no. 6, pp. 613-626, 1978.
[5] R. Goetschel Jr. and W. Voxman, "Elementary fuzzy calculus," Fuzzy Sets and Systems, vol. 18, pp. 31-43, 1986.
[6] Z.-T. Gong and H.-X. Li, "Saddle point optimality conditions in fuzzy optimization problems," Advances in Intelligent and Soft Computing, vol. 54, pp. 7-14, 2009.
[7] Y.-J. Lai and C.-L. Hwang, Fuzzy Mathematical Programming: Methods and Applications, Lecture Notes in Economics and Mathematical Systems 394, Springer-Verlag, New York, 1992.
[8] Y.-J. Lai and C.-L. Hwang, Fuzzy Multiple Objective Decision Making: Methods and Applications, Lecture Notes in Economics and Mathematical Systems 404, Springer-Verlag, New York, 1993.
[9] M. Panigrahi, G. Panda, and S. Nanda, "Convex fuzzy mapping with differentiability and its application in fuzzy optimization," European Journal of Operational Research, vol. 185, pp. 47-62, 2008.
[10] R. Slowinski (Ed.), Fuzzy Sets in Decision Analysis, Operations Research and Statistics, Kluwer Academic Publishers, Dordrecht, 1998.
[11] H.-C. Wu, "Saddle point optimality conditions in fuzzy optimization problems," Fuzzy Optimization and Decision Making, vol. 2, no. 3, pp. 261-273, 2003.
[12] H.-C. Wu, "Duality theorems and saddle point optimality conditions in fuzzy nonlinear programming problems based on different solution concepts," Fuzzy Sets and Systems, vol. 158, pp. 1588-1607, 2007.
[13] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning I," Information Sciences, vol. 8, pp. 199-249, 1975.
[14] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning II," Information Sciences, vol. 8, pp. 301-357, 1975.
[15] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning III," Information Sciences, vol. 9, pp. 43-80, 1975.
[16] C. Zhang, X.-H. Yuan, and E. S. Lee, "Duality theory in fuzzy mathematical programming problems with fuzzy coefficients," Computers and Mathematics with Applications, vol. 49, pp. 1709-1730, 2005.
[17] H. Zimmermann, "Fuzzy programming and linear programming with several objective functions," Fuzzy Sets and Systems, vol. 1, pp. 46-51, 1978.

Hassan Mishmast Nehi received his B.Sc. degree in Mathematics at Tarbiat Moallem University, Tehran, Iran, in 1989. He received his M.Sc. and Ph.D. degrees in Applied Mathematics from the University of Kerman, Iran, in 1992 and 2003, respectively. He is currently an Associate Professor in the Faculty of Mathematics at the University of Sistan and Baluchestan, Zahedan, Iran. His research interests include operations research, fuzzy optimization, and soft computing methods.