Mathematical Programming 81 (1998) 327–347

Second-order global optimality conditions for convex composite optimization

X.Q. Yang

Department of Mathematics, University of Western Australia, Nedlands, WA 6009, Australia

Received 6 March 1995; revised manuscript received 10 December 1996

Abstract

In recent years second-order sufficient conditions for an isolated local minimizer of convex composite optimization problems have been established. In this paper, second-order optimality conditions for a global minimizer of convex composite problems with a non-finite-valued convex function and a twice strictly differentiable function are obtained by introducing a generalized representation condition. This result is applied to a minimization problem with a closed convex set constraint which is shown to satisfy the basic constraint qualification. In particular, second-order necessary and sufficient conditions for a solution of a variational inequality problem with convex composite inequality constraints are obtained. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.

Keywords: Convex composite function; Second-order global optimality; Second-order duality; Variational inequality

1. Introduction

Consider the convex composite optimization problem

(CP)    minimize  f(x)
        subject to  x ∈ ℝⁿ,

where f(x) = g(F(x)), g: ℝᵐ → ℝ ∪ {+∞} is a lower semi-continuous convex function with dom(g) = {y ∈ ℝᵐ: g(y) < +∞}, and F: ℝⁿ → ℝᵐ is a vector-valued function. It is well known that the convex composite model problem CP includes most nonlinear optimization problems in the literature. Second-order sufficient conditions for an isolated local minimizer of CP have been given in [1–6] by enforcing the inequality in the necessary condition part to be a strict inequality on a larger critical direction set. In these conditions, F is assumed to be twice continuously differentiable. Second-order sufficient conditions for an isolated local minimizer are useful in studying the (local) convergence of optimization algorithms and the sensitivity of solution points in nonlinear programs. On the other hand, it has been shown [7–10] that global optimality
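To make the scope of the model concrete, a standard nonlinear program can be cast in the form CP; the reformulation below is the classical one and is sketched here only for illustration (it is not spelled out at this point in the paper):

```latex
% A nonlinear program
%   min f_0(x)  subject to  f_i(x) <= 0, i = 1, ..., s
% fits the convex composite model CP with
%   F(x) = (f_0(x), f_1(x), ..., f_s(x))
% and the lower semi-continuous convex function
%   g(y_0, y_1, ..., y_s) = y_0 + \delta((y_1, ..., y_s) \mid \mathbb{R}^s_-),
% where \delta(\cdot \mid \mathbb{R}^s_-) is the indicator function of the
% nonpositive orthant (0 on \mathbb{R}^s_-, +\infty otherwise):
\min_{x \in \mathbb{R}^n} g(F(x)), \qquad
g(y) = y_0 + \delta\bigl((y_1,\dots,y_s) \,\big|\, \mathbb{R}^s_-\bigr).
```

Since g is lower semi-continuous and convex, f = g ∘ F then has exactly the composite structure of CP.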


conditions are practical, e.g., in the following respects: in nonconvex (concave) optimization the aim is usually to characterize a global minimizer, and in engineering applications a global minimizer is often required, e.g., via variational inequality problems. To the best of our knowledge, second-order sufficient conditions for a global minimizer have not been studied. In this paper, we obtain second-order global sufficient conditions for CP by retaining the inequality in the second-order necessary conditions and assuming a generalized representation condition on F. To establish second-order necessary conditions for CP, the twice continuous differentiability assumption on F was relaxed to C^{1,1} in [11]. Using results from second-order nonsmooth analysis, see [12–14], we show that this assumption can be further relaxed to F being continuously differentiable, in terms of a Taylor expansion given in [12]. These results are presented in Section 2.

In Section 3, a second-order dual is proposed for CP and the corresponding duality results are established under a simplified condition and the generalized representation condition. This part of the study is motivated by the work in [15] and shows a further application of the generalized representation condition. Mangasarian [15] formulated a second-order dual problem for a nonlinear program and established various duality results using a so-called "inclusion condition". It will be shown that variants of the results in [15] can be established for the convex composite optimization problem CP and that, by employing the generalized representation condition, the condition we use is much simpler than the one in [15] and can be easily verified. We point out that Mond [16] derived certain conditions involving second-order derivatives under which the duality results in [15] hold, and that Jeyakumar [17] obtained duality results for the same second-order dual pair using generalized convexity assumptions. It turns out that the second-order duality presented in this paper does not include that of [17] as a special case, nor vice versa. We also study the following variational inequality problem (VI):

(VI)    Find x₀ ∈ A such that ⟨H(x₀), x − x₀⟩ ≥ 0    ∀x ∈ A,

where A = {x ∈ ℝⁿ: x ∈ C, gᵢ(Fᵢ(x)) ≤ 0, i = 1, …, s}.

≥ 0    ∀u ∈ K(x₀).    (1)

Proof. Note that the mapping x ↦ L°°(x; u, v) is upper semi-continuous and that the following Taylor expansion holds for a differentiable function h: ℝⁿ → ℝ:

h(y) ≤ h(x) + ⟨∇h(x), y − x⟩ + ½ h°°(ξ; y − x, y − x) for some ξ ∈ (x, y),

λᵢ ≥ 0, i = 1, …, s. There is little difference between DP and DP1 (m = s + 1): the only difference is that DP has an additional term g*(y*), owing to the definition of the Lagrangian L(x, y*) of CP.

Theorem 4.1. Consider the problem CP and its dual DP. Assume that the generalized representation condition (2) holds. We have:
(i) (Weak duality) Let x and (u, y*, p) be feasible for CP and DP, respectively. If there exist constants s(u, y*) and S(u, y*) with 0 < s(u, y*) ≤ S(u, y*) such that

zᵀ∇²L(u, y*)z ≥ s(u, y*)‖z‖²    ∀z ∈ ℝⁿ

and

‖∇²L(u, y*)‖ ≤ S(u, y*),

and condition (12) or (13) holds, then

f(x) ≥ L(u, y*) − ½ pᵀ∇²L(u, y*)p.

Proof. (i) Combining the bounds on ∇²L(u, y*) with a second-order expansion of L(·, y*) and condition (12) (or (13)) yields

f(x) − L(u, y*) + ½ pᵀ∇²L(u, y*)p ≥ 0.

Thus the weak duality is satisfied.
(ii) Since x₀ is a minimizer of CP, it follows from Theorem 3.1 that there exists y* ∈ L₀(x₀) such that y* ∈ ∂g(F(x₀)) and ∇L(x₀, y*) = 0. Thus (x₀, y*, p = 0) is a feasible point for DP. By p = 0, y* ∈ ∂g(F(x₀)) and Theorem 23.5 in [15], we have

L(x₀, y*) − ½ pᵀ∇²L(x₀, y*)p = ⟨y*, F(x₀)⟩ − g*(y*) = g(F(x₀)).

By the weak duality, (x₀, y*, p = 0) is a global maximizer of DP. □
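To illustrate the shape of the weak duality inequality, the following sketch (not from the paper; the toy program, function names, and sample grid are all illustrative choices) checks f(x) ≥ L(u, y*) − ½pᵀ∇²L(u, y*)p numerically for the program min x² subject to 1 − x ≤ 0. Its Lagrangian is quadratic in x, so the second-order expansion is exact and the inequality holds for every p:

```python
# Numerical sanity check of second-order weak duality on a toy NLP:
#   minimize f0(x) = x^2   subject to  f1(x) = 1 - x <= 0.
# Lagrangian L(x, lam) = x^2 + lam * (1 - x), with constant Hessian L'' = 2.
# Second-order dual (Mangasarian-style, DP1 form):
#   maximize  L(u, lam) - 0.5 * p * L'' * p
#   subject to  L'(u, lam) + L'' * p = 0,  lam >= 0.

def primal(x):
    return x ** 2

def dual_objective(lam, p):
    # The dual equality constraint 2u - lam + 2p = 0 fixes u = (lam - 2p) / 2.
    u = (lam - 2.0 * p) / 2.0
    lagrangian = u ** 2 + lam * (1.0 - u)
    return lagrangian - p ** 2  # subtract 0.5 * p * L'' * p with L'' = 2

if __name__ == "__main__":
    # Check f(x) >= dual objective over a grid of feasible points.
    worst_gap = float("inf")
    for xi in range(0, 31):            # x in [1, 4], feasible for the primal
        x = 1.0 + 0.1 * xi
        for li in range(0, 21):        # lam in [0, 5]
            lam = 0.25 * li
            for pi in range(-10, 11):  # p in [-2, 2]
                p = 0.2 * pi
                gap = primal(x) - dual_objective(lam, p)
                worst_gap = min(worst_gap, gap)
                assert gap >= -1e-9, (x, lam, p, gap)
    # The gap closes at the optimal pair x = 1, lam = 2.
    print("minimal duality gap over the grid:", worst_gap)
```

A short calculation explains why no condition on ‖p‖ is needed here: the gap equals λ(x − 1) + (x − u − p)², which is nonnegative for every feasible x and λ ≥ 0; in general the bound on ‖p‖ (condition (12) or (13)) compensates for the error in the second-order expansion.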

For the nonlinear program NP, Mangasarian [15] employed the following inclusion condition:

‖p‖ ≤ ‖x − u‖ (K(u)/k(u) − (K(u)/k(u)²)^{1/2}),    (16)

where K(u) ≥ k(u) > 0 are such that, for all z ∈ ℝⁿ and ξ ∈ [x, u],

zᵀ∇²L₁(u, λ*)z ≥ k(u)‖z‖²    and    |zᵀ∇²L₁(ξ, λ*)z| ≤ K(u)‖z‖².

Theorem 4.2 then asserts that

f(x) ≥ L₁(u, λ*) − ½ pᵀ∇²L₁(u, λ*)p.    (19)

We give the following two examples to clarify the comparison between Theorem 4.1 and Theorem 4.2 for a nonlinear programming problem.

Example 4.1. Consider the problem

minimize    f₀(x) = x³
subject to  f₁(x) = −x ≤ 0.

Since f₀, f₁ are convex on the feasible region, p₀ = p₁ = 0. The corresponding dual problem DP1 is

maximize    u³ − λ₁u − 3up²
subject to  3u² − λ₁ + 6pu = 0,    λ₁ ≥ 0.

Let x ≥ 0 be feasible for NP and let (u, λ₁, p) be a feasible solution of DP1. From p₀ + λ₁p₁ ≥ K(u) we have K(u) = 0, and from (18) and K(u) = 0, u = 0. The weak duality (19) then follows from Theorem 4.2; in fact, we have

x³ ≥ 0 = u³ − λ₁u − 3up².

When u = 0, ∇²L₁(u, λ₁) = 0, so there exists no s(u, y*) > 0 for which (10) holds. Thus Theorem 4.1 is not applicable to this example.
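The arithmetic in Example 4.1 can be checked mechanically. The sketch below (not from the paper; the helper names are illustrative) evaluates the DP1 objective at feasible dual points, where u = 0 is forced, and confirms the weak duality inequality x³ ≥ 0 on the primal feasible set:

```python
# Example 4.1:  minimize x^3  subject to  -x <= 0  (i.e., x >= 0).
# DP1:  maximize  u^3 - lam*u - 3*u*p^2
#       subject to  3*u^2 - lam + 6*p*u = 0,  lam >= 0.
# As argued in the text, feasibility of DP1 together with K(u) = 0
# forces u = 0; every feasible dual value is then 0, and weak duality
# reads x^3 >= 0 for feasible x >= 0.

def dp1_objective(u, lam, p):
    return u ** 3 - lam * u - 3.0 * u * p ** 2

def dp1_feasible(u, lam, p, tol=1e-12):
    return lam >= 0.0 and abs(3.0 * u ** 2 - lam + 6.0 * p * u) <= tol

if __name__ == "__main__":
    # With u = 0 the equality constraint forces lam = 0, and the
    # dual objective vanishes for every p.
    for p in (-1.0, 0.0, 2.5):
        assert dp1_feasible(0.0, 0.0, p)
        assert dp1_objective(0.0, 0.0, p) == 0.0
    # Weak duality x^3 >= 0 on the primal feasible set x >= 0.
    for k in range(11):
        x = 0.5 * k
        assert x ** 3 >= dp1_objective(0.0, 0.0, 0.0)
    print("Example 4.1 weak duality verified: x^3 >= 0 for x >= 0")
```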


Example 4.2. Consider the nonlinear program

minimize    f₀(x) = sin(x)
subject to  f₁(x) = x − π ≤ 0,
            f₂(x) = −x − π ≤ 0.