J. Indian Inst. Sci., May–June 2006, 86, 279–286 © Indian Institute of Science.

New second-order optimality conditions in multiobjective optimization problems: Differentiable case

M. M. RIZVI^1,* AND M. NASSER^†

^1 Research Centre for Mathematical and Physical Sciences (RCMPS), University of Chittagong, Chittagong 4331, Bangladesh.
^† Statistics Department, Rajshahi University, Rajshahi, Bangladesh.
email: [email protected]; Phone: 0088-031-614259.

Received on February 1, 2005; Revised on October 10, 2005 and February 6, 2006.

Abstract

To obtain positive Lagrange multipliers associated with each of the objective functions, Maeda [Constraint qualifications in multiobjective optimization problems: Differentiable case, J. Optimization Theory Appl., 80, 483–500 (1994)] introduced some special sets and derived generalized regularity conditions for first-order Karush–Kuhn–Tucker (KKT)-type necessary conditions of multiobjective optimization problems. Based on Maeda's sets, Bigi and Castellani [Second order optimality conditions for differentiable multiobjective problems, RAIRO Op. Res., 34, 411–426 (2000)] tried to obtain the same result for second-order optimality conditions, but their treatment was not convincing. In this paper, we generalize these regularity conditions for second-order optimality conditions under different sets and obtain positive Lagrange multipliers for the objective functions.

Keywords: Multiobjective optimization, local vector minimum point, regularity conditions, second-order necessary conditions.

1. Introduction

Investigation of optimality conditions has been one of the most interesting topics in the theory of multiobjective optimization problems. Many authors have derived first- and second-order necessary conditions for a vector minimum solution under the same constraint qualifications as are used for scalar-valued objective functions [1], but none could obtain positive Lagrange multipliers associated with the vector-valued objective function. It is therefore possible that, owing to some zero multipliers, the corresponding components of the vector-valued objective function play no role in the necessary conditions of the multiobjective problem. To avoid this undesirable situation and to obtain positive Lagrange multipliers, Maeda [2] introduced some special sets and derived generalized regularity conditions for first-order KKT-type necessary conditions that ensure the existence of positive Lagrange multipliers in first-order multiobjective optimality conditions. In order to obtain positive Lagrange multipliers, some authors analyzed these conditions for second-order KKT-type necessary conditions [3–5]. In particular, based on Maeda's sets, Bigi and Castellani [5] generalized these regularity conditions for second-order optimality conditions, but their treatment is not convincing.

*Author for correspondence. Permanent address: 60, Hemsen Lane, Askerdeghi West, Chittagong 4000, Bangladesh.


In this paper, we also generalize Maeda-type regularity conditions for second-order KKT-type necessary conditions, but we do so under more general sets, later called the "proposed sets". As a result, we ensure positive Lagrange multipliers associated with the objective functions and derive second-order KKT-type necessary conditions for problems with both equality and inequality constraints.

Some notations, definitions, and preliminary results are given in Section 2. In Section 3, we compare Maeda's sets with the proposed sets, generalize Maeda's regularity conditions, and derive KKT-type second-order necessary optimality conditions, together with an important remark.

2. Preliminaries

In this section, we introduce some notations and definitions, which are used throughout the paper [6]. Let $E^n$ be the $n$-dimensional Euclidean space. For $x, y \in E^n$, we use the following conventions:

$x \geqq y$ iff $x_i \geq y_i$, $i = 1, \dots, n$;
$x \geq y$ iff $x \geqq y$ and $x \neq y$;
$x > y$ iff $x_i > y_i$, $i = 1, \dots, n$.

Now, we consider the following multiobjective optimization problem P:

min $f(x)$, subject to $x \in X = \{ x \in E^n \mid g(x) \leqq 0,\ h(x) = 0 \}$.
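As a concrete (and purely illustrative) aside, the componentwise relations above and the feasible set $X$ of problem P are easy to encode; the following Python sketch does so for a made-up instance with two variables, two objectives, one inequality and one equality constraint. The functions f_toy, g_toy and h_toy are our own examples, not taken from the paper.

```python
import numpy as np

def leqq(x, y):            # x <= y in every component
    return np.all(x <= y)

def leq_strict_part(x, y): # x <= y componentwise and x != y (the "semi-strict" relation)
    return np.all(x <= y) and not np.array_equal(x, y)

def lt_all(x, y):          # x < y in every component
    return np.all(x < y)

# A made-up instance of problem P with n = 2, l = 2, m = 1, k = 1.
def f_toy(x):              # two objectives
    return np.array([x[0]**2 + x[1]**2, (x[0] - 1.0)**2 + x[1]**2])

def g_toy(x):              # one inequality constraint, g(x) <= 0
    return np.array([x[0] + x[1] - 2.0])

def h_toy(x):              # one equality constraint, h(x) = 0
    return np.array([x[1] - x[0]])

def feasible(x):
    return np.all(g_toy(x) <= 0.0) and np.allclose(h_toy(x), 0.0)

x_bar = np.array([0.0, 0.0])
print(feasible(x_bar))                                       # x_bar lies in X
print(leqq(f_toy(x_bar), f_toy(np.array([0.5, 0.5]))))       # componentwise comparison of objective vectors
```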

Let $f : E^n \to E^l$, $g : E^n \to E^m$ and $h : E^n \to E^k$ be twice continuously differentiable vector-valued functions, and let $I(\bar{x}) = \{ j : g_j(\bar{x}) = 0 \}$, $j = 1, \dots, m$, denote the active index set at $\bar{x}$. For any twice continuously differentiable function $g : E^n \to E^m$ and any vector $y \in E^n$, we denote by $\nabla g(\bar{x})$ and $\nabla^2 g(\bar{x})(y, y)$, respectively, the $m \times n$ Jacobian matrix and the $m$-dimensional vector whose $i$th component is $y^T \nabla^2 g_i(\bar{x}) y$. Now, we define the nonempty sets $M_i$ and $M$ by

$M_i \equiv \{ x \in E^n \mid x \in X,\ f_i(x) \leq f_i(\bar{x}) \}$, $i = 1, 2, \dots, l$,

and

$M \equiv \{ x \in E^n \mid x \in X,\ f(x) \leqq f(\bar{x}) \} = \bigcap_{i=1}^{l} M_i$.

For any two vectors $x = (x_1, x_2)^T$ and $y = (y_1, y_2)^T$ in $E^2$, we use the following conventions: $x \leq_{\mathrm{lex}} y$ means that $x_1 < y_1$ holds, or $x_1 = y_1$ and $x_2 \leq y_2$ hold; $x <_{\mathrm{lex}} y$ means that $x_1 < y_1$ holds, or $x_1 = y_1$ and $x_2 < y_2$ hold.

Definition 2.1. A point $\bar{x} \in X$ is said to be a local vector minimum point of problem P if there exists a neighbourhood $N(\bar{x})$ of $\bar{x}$ such that there is no $x \in X \cap N(\bar{x})$ with $f(\bar{x}) - f(x) > 0$, that is, $f_i(\bar{x}) > f_i(x)$ for all $i$.

Now we define two kinds of second-order approximation sets to the feasible region.

Definition 2.2. The second-order tangent set to $X$ at $\bar{x} \in X$ is the set defined by

$T^2(X; \bar{x}) \equiv \{ (y, z) \in E^{2n} \mid \exists\, x_n \in X,\ \exists\, t_n \to +0 \ \text{such that}\ x_n = \bar{x} + t_n y + \tfrac{1}{2} t_n^2 z + o(t_n^2) \}$,

where $o(t_n^2)$ is a vector satisfying $\| o(t_n^2) \| / t_n^2 \to 0$.
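The lexicographic relations on $E^2$ defined above enter the later definitions and systems only through comparisons with $(0, 0)^T$, so they can be stated in a few lines of code. A minimal, illustrative Python sketch (exact ties are compared with ==, which is fine for hand-picked data):

```python
def leq_lex(a, b):
    """a <=_lex b on E^2: a1 < b1, or a1 == b1 and a2 <= b2."""
    return a[0] < b[0] or (a[0] == b[0] and a[1] <= b[1])

def lt_lex(a, b):
    """a <_lex b on E^2: a1 < b1, or a1 == b1 and a2 < b2."""
    return a[0] < b[0] or (a[0] == b[0] and a[1] < b[1])

print(leq_lex((0.0, -3.0), (0.0, 0.0)))   # True: first components tie, second is smaller
print(lt_lex((0.0, 0.0), (0.0, 0.0)))     # False: equal pairs are not strictly smaller
```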

Definition 2.3. The second-order linearizing set to $M$ at $\bar{x} \in M$ is the set defined by

$L^2(M; \bar{x}) = \{ (y, z) \in E^{2n} \mid (\nabla f_i(\bar{x})^T y,\ \nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y))^T <_{\mathrm{lex}} (0, 0)^T,\ i = 1, \dots, l;\ (\nabla g_j(\bar{x})^T y,\ \nabla g_j(\bar{x})^T z + \nabla^2 g_j(\bar{x})(y, y))^T \leq_{\mathrm{lex}} (0, 0)^T,\ j \in I(\bar{x});\ (\nabla h_p(\bar{x})^T y,\ \nabla h_p(\bar{x})^T z + \nabla^2 h_p(\bar{x})(y, y))^T = (0, 0)^T,\ p = 1, \dots, k \}$.

A first-order sufficient condition for a vector minimum point is that the following system have no nonzero solution $y$:

$\nabla f(\bar{x})^T y \leqq 0, \quad \nabla g_I(\bar{x})^T y \leqq 0, \quad \nabla h(\bar{x})^T y = 0.$   (1)

The Kuhn–Tucker-type condition for optimality is equivalent to the inconsistency of the following system:

$\nabla f(\bar{x})^T y < 0, \quad \nabla g_I(\bar{x})^T y \leqq 0, \quad \nabla h(\bar{x})^T y = 0.$   (2)

The gap between (1) and (2) is caused by the following directions:

$\nabla f(\bar{x})^T y \leqq 0,\ \nabla f_i(\bar{x})^T y = 0 \ \text{for at least one}\ i, \quad \nabla g_I(\bar{x})^T y \leqq 0, \quad \nabla h(\bar{x})^T y = 0.$   (3)

A direction $y$ that satisfies (3) is called a critical direction.
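To make (3) concrete, the following Python sketch (our own illustration, with made-up gradient data at a hypothetical point $\bar{x}$ and no equality constraints, so the condition on $\nabla h$ is vacuous) tests whether a given direction $y$ is critical.

```python
import numpy as np

# Gradients at a made-up feasible point x_bar = (0, 0) of a toy instance with
# f(x) = (x1, x2) and a single active constraint g1(x) = x1 + x2 <= 0.
grad_f = np.array([[1.0, 0.0],          # grad f_1(x_bar)
                   [0.0, 1.0]])         # grad f_2(x_bar)
grad_g_active = np.array([[1.0, 1.0]])  # grad g_1(x_bar), active at x_bar

def is_critical(y, tol=1e-12):
    """Check the defining conditions (3) of a critical direction at x_bar."""
    df = grad_f @ y
    dg = grad_g_active @ y
    return (np.all(df <= tol)             # grad f(x_bar)^T y <= 0 componentwise
            and np.any(np.abs(df) <= tol) # grad f_i(x_bar)^T y = 0 for at least one i
            and np.all(dg <= tol))        # grad g_I(x_bar)^T y <= 0

print(is_critical(np.array([0.0, -1.0])))   # True:  stationary for f_1, descent for f_2
print(is_critical(np.array([-1.0, -1.0])))  # False: strictly decreases every objective
```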

For the sake of simplicity, we use the following notations:

$F_i(y, z) = (\nabla f_i(\bar{x})^T y,\ \nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y))^T$,
$G_j(y, z) = (\nabla g_j(\bar{x})^T y,\ \nabla g_j(\bar{x})^T z + \nabla^2 g_j(\bar{x})(y, y))^T$,
$H_p(y, z) = (\nabla h_p(\bar{x})^T y,\ \nabla h_p(\bar{x})^T z + \nabla^2 h_p(\bar{x})(y, y))^T$.

3. Generalized regularity conditions

To obtain positive Lagrange multipliers for each of the objective functions, Maeda [2] gave the following generalized Guignard regularity condition (GGRC) for the first-order KKT-type necessary conditions under the sets $Q_i$:


$\Omega(M; \bar{x}) \subseteq \bigcap_{i=1}^{l} \mathrm{cl\,conv}\, T(Q_i; \bar{x})$,

where $T(X; \bar{x})$ is the Bouligand tangent cone and $\Omega(X; \bar{x})$ is the first-order linearizing cone. In this section, we generalize these conditions to second-order KKT-type necessary conditions under the different sets $M_i$.

Comparison between Maeda's sets and the proposed sets:

Maeda's sets: $Q_i \equiv \{ x \in E^n \mid x \in X,\ f_k(x) \leq f_k(\bar{x}),\ k = 1, 2, \dots, l \ \text{and}\ k \neq i \}$.

Proposed sets: $M_i \equiv \{ x \in E^n \mid x \in X,\ f_i(x) \leq f_i(\bar{x}) \}$, $i = 1, 2, \dots, l$.

The relationship between the two types of sets is

$Q_i = \bigcap_{k=1,\, k \neq i}^{l} M_k$, $i = 1, \dots, l$.
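The identity above can be verified directly on any finite sample of feasible points. The sketch below (illustrative only; the objective $f$ and the sample points are invented) builds the sets $M_i$ and $Q_i$ over such a sample and checks that $Q_i$ coincides with the intersection of the $M_k$, $k \neq i$; with only two objectives this reduces to $Q_1 = M_2$ and $Q_2 = M_1$, but the code is written for general $l$.

```python
import numpy as np

# Toy data: a finite sample of points assumed feasible, a reference point x_bar,
# and two objectives f(x) = (x1**2 + x2**2, (x1 - 1)**2 + x2**2).
def f(x):
    return np.array([x[0]**2 + x[1]**2, (x[0] - 1.0)**2 + x[1]**2])

x_bar = np.array([0.5, 0.0])
sample = [np.array(p) for p in [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0),
                                (0.25, 0.25), (0.75, -0.25), (0.5, 0.5)]]
l = 2

# Proposed sets M_i: points whose i-th objective does not exceed f_i(x_bar).
M = [{tuple(x) for x in sample if f(x)[i] <= f(x_bar)[i]} for i in range(l)]
# Maeda's sets Q_i: the same condition imposed for every k != i.
Q = [{tuple(x) for x in sample
      if all(f(x)[k] <= f(x_bar)[k] for k in range(l) if k != i)} for i in range(l)]

for i in range(l):
    inter = set.intersection(*(M[k] for k in range(l) if k != i))
    print(Q[i] == inter)   # True for every i: Q_i equals the intersection of the M_k, k != i
```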

Before generalizing Maeda's [2] regularity conditions, we first establish the relationship between the second-order tangent sets $T^2(M_i; \bar{x})$ and the second-order linearizing set $L^2(M; \bar{x})$.

Lemma 3.1. Assume that $\bar{x}$ is a feasible solution to problem P. Then we have

$\bigcap_{i=1}^{l} T^2(M_i; \bar{x}) \subseteq L^2(M; \bar{x})$.
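Before the proof, a brief numerical aside (ours, not the paper's): the proof rests on the Taylor expansion of each $f_i$ along $x_n = \bar{x} + t_n y + \tfrac{1}{2} t_n^2 z$, whose remainder is $o(t_n^2)$. The sketch below checks this for one made-up $C^2$ function by showing that the error of the second-order model, divided by $t^2$, tends to 0.

```python
import numpy as np

def fi(x):                          # a made-up C^2 objective component
    return np.exp(x[0]) + x[0] * x[1] + x[1]**2

def grad_fi(x):
    return np.array([np.exp(x[0]) + x[1], x[0] + 2.0 * x[1]])

def hess_fi(x):
    return np.array([[np.exp(x[0]), 1.0], [1.0, 2.0]])

x_bar = np.array([0.0, 0.0])
y = np.array([1.0, -1.0])
z = np.array([0.5, 2.0])

for t in [1e-1, 1e-2, 1e-3]:
    xn = x_bar + t * y + 0.5 * t**2 * z
    model = t * grad_fi(x_bar) @ y + 0.5 * t**2 * (grad_fi(x_bar) @ z + y @ hess_fi(x_bar) @ y)
    err = fi(xn) - fi(x_bar) - model
    print(t, err / t**2)            # the ratio tends to 0, i.e. the remainder is o(t^2)
```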

Proof. Let $(y, z)$ be any element of $T^2(M_i; \bar{x})$. Then there exist $x_n \in M_i$ and $t_n \to +0$ such that $x_n = \bar{x} + t_n y + \tfrac{1}{2} t_n^2 z + o(t_n^2)$. By the Taylor expansion,

$f_i(x_n) - f_i(\bar{x}) = t_n \nabla f_i(\bar{x})^T y + \tfrac{1}{2} t_n^2 (\nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y)) + o(t_n^2)$, $i = 1, 2, \dots, l$,
$g_I(x_n) - g_I(\bar{x}) = t_n \nabla g_I(\bar{x})^T y + \tfrac{1}{2} t_n^2 (\nabla g_I(\bar{x})^T z + \nabla^2 g_I(\bar{x})(y, y)) + o(t_n^2)$,
$h(x_n) - h(\bar{x}) = t_n \nabla h(\bar{x})^T y + \tfrac{1}{2} t_n^2 (\nabla h(\bar{x})^T z + \nabla^2 h(\bar{x})(y, y)) + o(t_n^2)$.

Then, for all $n$, we have

$f_i(x_n) = f_i(\bar{x} + t_n y + \tfrac{1}{2} t_n^2 z) \leq f_i(\bar{x})$, $i = 1, 2, \dots, l$,   (4)

[since $M_i \equiv \{ x \in E^n \mid x \in X,\ f_i(x) \leq f_i(\bar{x}) \}$, $i = 1, 2, \dots, l$],

$g_I(x_n) = g_I(\bar{x} + t_n y + \tfrac{1}{2} t_n^2 z) \leqq 0 = g_I(\bar{x})$,   (5)


and

$h(x_n) = h(\bar{x} + t_n y + \tfrac{1}{2} t_n^2 z) = 0 = h(\bar{x})$.   (6)

Now, from (4), we have

$t_n \nabla f_i(\bar{x})^T y + \tfrac{1}{2} t_n^2 (\nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y)) + o(t_n^2) \leq 0$, $i = 1, 2, \dots, l$.   (7)

Now, if $\nabla f_i(\bar{x})^T y = 0$, then, from (7), we have

$\tfrac{1}{2} t_n^2 \left[ (\nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y)) + \frac{2 o(t_n^2)}{t_n^2} \right] \leq 0$,

and, letting $n \to \infty$,

$\lim_{n \to \infty} \left[ (\nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y)) + \frac{2 o(t_n^2)}{t_n^2} \right] = \nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y) \leq 0$.

Also, from (7), we have

$\tfrac{1}{2} t_n^2 \left[ \frac{2}{t_n} \nabla f_i(\bar{x})^T y + (\nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y)) + \frac{2 o(t_n^2)}{t_n^2} \right] \leq 0$.

Since $t_n > 0$, the bracketed term is nonpositive; multiplying it by $t_n / 2$ and letting $n \to \infty$ gives $\nabla f_i(\bar{x})^T y \leq 0$. Hence either $\nabla f_i(\bar{x})^T y < 0$, or $\nabla f_i(\bar{x})^T y = 0$ and, as shown above, $\nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y) \leq 0$. In either case,

$(\nabla f_i(\bar{x})^T y,\ \nabla f_i(\bar{x})^T z + \nabla^2 f_i(\bar{x})(y, y))^T \leq_{\mathrm{lex}} (0, 0)^T$, $i = 1, 2, \dots, l$.

Similarly,

$(\nabla g_j(\bar{x})^T y,\ \nabla g_j(\bar{x})^T z + \nabla^2 g_j(\bar{x})(y, y))^T \leq_{\mathrm{lex}} (0, 0)^T$, $j \in I(\bar{x})$,

and

$(\nabla h_p(\bar{x})^T y,\ \nabla h_p(\bar{x})^T z + \nabla^2 h_p(\bar{x})(y, y))^T = (0, 0)^T$, $p = 1, \dots, k$,

which implies that $(y, z) \in L^2(M_i; \bar{x})$; hence $T^2(M_i; \bar{x}) \subseteq L^2(M_i; \bar{x})$ for every $i$. Since each $L^2(M_i; \bar{x})$ is a closed convex set and $i$ is arbitrary, we have

$\bigcap_{i=1}^{l} T^2(M_i; \bar{x}) \subseteq \bigcap_{i=1}^{l} L^2(M_i; \bar{x}) = L^2(M; \bar{x})$.

By the closedness and convexity of the sets $L^2(M_i; \bar{x})$, we can also write

$\bigcap_{i=1}^{l} \mathrm{cl\,conv}\, T^2(M_i; \bar{x}) \subseteq L^2(M; \bar{x})$,

where $\mathrm{cl\,conv}\, T^2(M_i; \bar{x})$ denotes the closure of the convex hull of $T^2(M_i; \bar{x})$. This completes the proof.

Remark 3.1. In general, the converse inclusion in Lemma 3.1 does not hold. So, to obtain necessary conditions for a feasible point of problem P to be a local vector minimum point, it is reasonable to assume that


$L^2(M; \bar{x}) \subseteq \bigcap_{i=1}^{l} T^2(M_i; \bar{x})$   (8)

and

$L^2(M; \bar{x}) \subseteq \bigcap_{i=1}^{l} \mathrm{cl\,conv}\, T^2(M_i; \bar{x})$.   (9)

Conditions (8) and (9) are referred to, respectively, as the generalized Abadie second-order regularity condition (GASORC) and the generalized Guignard second-order regularity condition (GGSORC).

Checking optimality along the corresponding curves yields, as necessary optimality conditions, the impossibility of a family of nonhomogeneous systems. These systems depend upon a descent direction for $f$ at the considered optimal point $\bar{x} \in X$ and involve only those components of $f$ for which this direction is stationary at $\bar{x}$; therefore, given any direction $y \in E^n$, let $P(y) = \{ i \in \{1, \dots, l\} : \nabla f_i(\bar{x})^T y = 0 \}$. Now, we are in a position to state the primal form of our second-order necessary conditions.

Theorem 3.1. Let $\bar{x}$ be a local vector minimum point of problem P, and assume that GASORC holds at $\bar{x} \in X$. Then the following system has no solution $(y, z)$:

$F_i(y, z) \leq_{\mathrm{lex}} 0$ for all $i$,   (10)
$F_i(y, z) <_{\mathrm{lex}} 0$ for at least one $i \in P(y)$,   (11)
$G_j(y, z) \leq_{\mathrm{lex}} 0$ for all $j \in I(\bar{x})$,   (12)
$H_p(y, z) = 0$ for all $p$.   (13)
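As an informal illustration (not from the paper), the maps $F_i$, $G_j$ and $H_p$ can be evaluated at a candidate pair $(y, z)$ and compared lexicographically with $0$, so that conditions (10)–(13) become a finite check. All the first- and second-order data below are hypothetical; the point is only to show the mechanics of the test that Theorem 3.1 says must fail at a local vector minimum point satisfying GASORC.

```python
import numpy as np

def leq_lex(a):   # a <=_lex (0, 0)
    return a[0] < 0.0 or (a[0] == 0.0 and a[1] <= 0.0)

def lt_lex(a):    # a <_lex (0, 0)
    return a[0] < 0.0 or (a[0] == 0.0 and a[1] < 0.0)

def pair(grad, hess, y, z):
    """The 2-vector (grad^T y, grad^T z + y^T hess y) appearing in F_i, G_j and H_p."""
    return (float(grad @ y), float(grad @ z + y @ hess @ y))

# Hypothetical data at x_bar: l = 2 objectives, one active inequality, one equality.
grad_f = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
hess_f = [np.diag([1.0, -1.0]), 2.0 * np.eye(2)]
grad_g = [np.array([1.0, 1.0])]
hess_g = [np.zeros((2, 2))]
grad_h = [np.array([1.0, 0.0])]
hess_h = [np.zeros((2, 2))]

def solves_system_10_13(y, z):
    """True if (y, z) satisfies (10)-(13); Theorem 3.1 rules this out at a local
    vector minimum point where GASORC holds."""
    F = [pair(g, H, y, z) for g, H in zip(grad_f, hess_f)]
    G = [pair(g, H, y, z) for g, H in zip(grad_g, hess_g)]
    Hp = [pair(g, H, y, z) for g, H in zip(grad_h, hess_h)]
    P_y = [i for i, g in enumerate(grad_f) if abs(g @ y) <= 1e-12]
    return (all(leq_lex(v) for v in F)                            # (10)
            and any(lt_lex(F[i]) for i in P_y)                    # (11)
            and all(leq_lex(v) for v in G)                        # (12)
            and all(abs(v[0]) + abs(v[1]) <= 1e-12 for v in Hp))  # (13)

# The pair below does satisfy (10)-(13), so a point with this hypothetical data
# could not be a local vector minimum point at which GASORC holds.
print(solves_system_10_13(np.array([0.0, -1.0]), np.array([0.0, 0.0])))
```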

To derive the dual (multiplier) form of these conditions, we shall use the following theorem of the alternative.

Lemma 3.2 [6]. Let $B$, $C$ and $D$ be given matrices, with $B$ nonvacuous. Then either the system $Bx \geq 0$, $Cx \geqq 0$, $Dx = 0$ has a solution $x$, or the system $B^T y_1 + C^T y_2 + D^T y_3 = 0$, $y_1 > 0$, $y_2 \geqq 0$ has a solution $(y_1, y_2, y_3)$, but never both. The proof is identical with [3, 6].

Remark 3.2. To get positive Lagrange multipliers $w_i > 0$, $i \in P(y)$, Bigi and Castellani gave Lemma 2.1 in [5]. Using this lemma, they tried to prove their Theorem 5.5 so as to obtain positive $w_i$, but this is not possible, because the lemma provides only a semipositive $w$, i.e. $w \geq 0$, $w \neq 0$. In a recent paper [3], they establish Theorem 3.2 by using SMFRC conditions; they restrict $w$ by $\| w \| = 1$, but $\| w \| = 1$ does not necessarily imply that all components of $w$ are nonzero.

Applying Lemma 3.2 and Theorem 3.1, we deduce the following KKT-type necessary conditions, which ensure the existence of positive Lagrange multipliers for the objective functions. Here, we consider those components of $f$ for which the direction $y \in E^n$ is stationary at $\bar{x}$, i.e. $P(y) = \{ i \in \{1, \dots, l\} : \nabla f_i(\bar{x})^T y = 0 \}$.


Theorem 3.2. Let $\bar{x}$ satisfy the assumptions made in Theorem 3.1. Then, for each critical direction $y$, there exist multipliers $w \in E^l$, $u \in E^m$ and $v \in E^k$ such that

$\sum_{i=1}^{l} w_i \nabla f_i(\bar{x}) + \sum_{j=1}^{m} u_j \nabla g_j(\bar{x}) + \sum_{p=1}^{k} v_p \nabla h_p(\bar{x}) = 0$,

$\left( \sum_{i=1}^{l} w_i \nabla^2 f_i(\bar{x}) + \sum_{j=1}^{m} u_j \nabla^2 g_j(\bar{x}) + \sum_{p=1}^{k} v_p \nabla^2 h_p(\bar{x}) \right)(y, y) \geq 0$,

$w_i > 0$, $i \in P(y)$, $w_i = 0$ for all $i \notin P(y)$; $u_j \geq 0$, $j \in I(y)$, $u_j = 0$ for all $j \notin I(y)$,

where

$P(y) = \{ i \in \{1, \dots, l\} : \nabla f_i(\bar{x})^T y = 0 \}$,
$I(y) = \{ j \in \{1, \dots, m\} : g_j(\bar{x}) = 0,\ \nabla g_j(\bar{x})^T y = 0 \}$.
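To see what the theorem asserts in concrete terms, the sketch below (with invented data: gradients and Hessians at $\bar{x}$, a critical direction $y$, and candidate multipliers $w$, $u$, $v$) verifies the stationarity equation, the curvature inequality and the sign conditions numerically.

```python
import numpy as np

# Hypothetical first- and second-order data at x_bar (l = 2, m = 1, k = 1).
grad_f = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
hess_f = [np.eye(2), np.eye(2)]
grad_g = [np.array([-1.0, 0.0])]        # g_1 assumed active at x_bar
hess_g = [np.zeros((2, 2))]
grad_h = [np.array([1.0, 0.0])]
hess_h = [np.zeros((2, 2))]

y = np.array([0.0, -1.0])               # critical direction: stationary for f_1 only
w, u, v = np.array([1.0, 0.0]), np.array([1.0]), np.array([0.0])

P_y = [i for i, g in enumerate(grad_f) if abs(g @ y) <= 1e-12]
I_y = [j for j, g in enumerate(grad_g) if abs(g @ y) <= 1e-12]   # g_1 active by assumption

stationarity = (sum(w[i] * grad_f[i] for i in range(2))
                + sum(u[j] * grad_g[j] for j in range(1))
                + sum(v[p] * grad_h[p] for p in range(1)))
curvature = (sum(w[i] * hess_f[i] for i in range(2))
             + sum(u[j] * hess_g[j] for j in range(1))
             + sum(v[p] * hess_h[p] for p in range(1)))

print(np.allclose(stationarity, 0.0))    # first-order (stationarity) equation holds
print(y @ curvature @ y >= 0.0)          # second-order (curvature) inequality holds
print(all(w[i] > 0 for i in P_y) and all(w[i] == 0 for i in range(2) if i not in P_y))
print(all(u[j] >= 0 for j in I_y) and all(u[j] == 0 for j in range(1) if j not in I_y))
```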

Proof. Let $y$ be a critical direction. Then the system

$\nabla f_{P(y)}(\bar{x})^T z + \nabla^2 f_{P(y)}(\bar{x})(y, y) \leq 0$,
$\nabla g_{I(y)}(\bar{x})^T z + \nabla^2 g_{I(y)}(\bar{x})(y, y) \leqq 0$,
$\nabla h(\bar{x})^T z + \nabla^2 h(\bar{x})(y, y) = 0$

has no solution $z$. By Lemma 3.2, there exist multipliers $w \in E^l$, $u \in E^m$ and $v \in E^k$ such that

$\sum_{i=1}^{l} w_i \nabla f_i(\bar{x}) + \sum_{j=1}^{m} u_j \nabla g_j(\bar{x}) + \sum_{p=1}^{k} v_p \nabla h_p(\bar{x}) = 0$,

$\left( \sum_{i=1}^{l} w_i \nabla^2 f_i(\bar{x}) + \sum_{j=1}^{m} u_j \nabla^2 g_j(\bar{x}) + \sum_{p=1}^{k} v_p \nabla^2 h_p(\bar{x}) \right)(y, y) \geq 0$,

$w_i > 0$, $i \in P(y)$, $w_i = 0$ for all $i \notin P(y)$; $u_j \geq 0$, $j \in I(y)$, $u_j = 0$ for all $j \notin I(y)$.

This completes the proof.

Since GASORC implies GGSORC, Theorems 3.1 and 3.2 hold for GGSORC also.

References

1. B. Aghezzaf and M. Hachimi, Second-order optimality conditions in multiobjective optimization problems, J. Optimization Theory Applic., 102, 37–50 (1999).
2. T. Maeda, Constraint qualifications in multiobjective optimization problems: Differentiable case, J. Optimization Theory Applic., 80, 483–500 (1994).
3. G. Bigi and M. Castellani, Uniqueness of KKT multipliers in multiobjective optimization, Appl. Math. Lett., 17, 1285–1290 (2004).
4. B. Jiménez and V. Novo, First and second order sufficient conditions for strict minimality in multiobjective programming, Numerical Functional Anal. Optimization, 23, 303–322 (2002).
5. G. Bigi and M. Castellani, Second order optimality conditions for differentiable multiobjective problems, RAIRO Op. Res., 34, 411–426 (2000).
6. O. L. Mangasarian, Nonlinear programming, McGraw-Hill (1969).