Constraint qualifications in convex vector semi-infinite optimization

M.A. Goberna†, F. Guerra-Vazquez‡, and M.I. Todorov§

August 30, 2015

Abstract. Convex vector (or multi-objective) semi-infinite optimization deals with the simultaneous minimization of finitely many convex scalar functions subject to infinitely many convex constraints. This paper provides characterizations of the weakly efficient, efficient and properly efficient points in terms of cones involving the data and Karush-Kuhn-Tucker conditions. The latter characterizations rely on different local and global constraint qualifications. The results in this paper generalize those obtained by the same authors on linear vector semi-infinite optimization problems.

1 Introduction

We consider convex optimization problems of the form

P: "min" f(x) = (f_1(x), ..., f_p(x))  s.t.  g_t(x) ≤ 0, t ∈ T,   (1)

where x ∈ R^n (the space of decisions), f(x) ∈ R^p (the objective space), the index set T is a compact Hausdorff topological space, f_i : R^n → R is a convex function, i = 1,...,p, g_t is convex for each t ∈ T, and the function (t,x) ↦ g_t(x) is continuous on T × R^n. The continuity of f is a consequence of the assumptions on its components f_1,...,f_p. The model (1) includes ordinary convex (scalar and vector) optimization problems: just take the discrete topology on the (finite) index set. Since the optimality theory for this class of problems has been thoroughly studied, we assume in the sequel that T is infinite. When p ≥ 2, P is a convex vector semi-infinite optimization (SIO in brief) problem; otherwise, P

* This research was partially supported by MICINN of Spain, Grant MTM2014-59179-C21-P, and Sistema Nacional de Investigadores, Mexico.
† Dep. of Statistics and Operations Research, Alicante University, 03071 Alicante, Spain. E-mail: [email protected].
‡ Dep. of Actuarial Sciences, Physics and Mathematics, UDLAP, 72820 San Andrés Cholula, Puebla, Mexico. E-mail: [email protected].
§ Dep. of Actuarial Sciences, Physics and Mathematics, UDLAP, 72820 San Andrés Cholula, Puebla, Mexico. On leave from IMI-BAS, Sofia, Bulgaria. E-mail: [email protected].


is a convex scalar SIO problem. Replacing in (1) the space of decisions R^n by an infinite-dimensional space (typically, a locally convex Hausdorff topological vector space) one gets a convex (scalar or vector) infinite optimization (IO in short) problem. We assume throughout the paper that p ≥ 2 and that the feasible set of P, denoted by X, is non-empty. Obviously, X is a closed convex set, whereas its image under the vector-valued objective function, f(X) ⊆ R^p, is possibly non-convex and non-closed. The vector SIO problem P can be reformulated as a vector optimization problem with the single convex constraint function φ(x) := max_{t∈T} g_t(x), called the marginal function:

P: "min" f(x) = (f_1(x), ..., f_p(x))  s.t.  φ(x) ≤ 0.
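As a quick numerical illustration of the marginal-function reformulation (a sketch with assumed toy data, not taken from the paper: g_t(x) = t x_1 + (1−t) x_2 − 1 on T = [0,1], so that φ(x) = max{x_1, x_2} − 1), feasibility of x amounts to φ(x) ≤ 0:

```python
import numpy as np

def g(t, x):
    # Hypothetical constraint family, convex (here affine) in x for each t in T = [0, 1].
    return t * x[0] + (1.0 - t) * x[1] - 1.0

def phi(x, grid=np.linspace(0.0, 1.0, 1001)):
    # Marginal function phi(x) = max_{t in T} g_t(x), approximated on a grid of T.
    # Since g_t is affine in t, the maximum sits at an endpoint of [0, 1].
    return max(g(t, x) for t in grid)

x = np.array([0.3, 0.7])
print(phi(x))  # feasible: max(x1, x2) - 1 = -0.3 <= 0
```

The single constraint φ(x) ≤ 0 is convex but typically non-smooth, which is why the subdifferential machinery of Section 2 is needed.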

Throughout the paper we use the following notation. Given x, y ∈ R^m, we write x ≦ y (x < y) when x_i ≤ y_i (x_i < y_i, respectively) for all i = 1,...,m. Moreover, we write x ≤ y when x ≦ y and x ≠ y.

An element x̄ ∈ X is said to be efficient (weakly efficient) if there is no x̂ ∈ X such that f(x̂) ≤ f(x̄) (f(x̂) < f(x̄), respectively). There are many notions of proper efficiency in the literature, such as those introduced by Geoffrion, Benson, Borwein and Henig. Since P is convex, all these concepts are equivalent to proper efficiency in terms of linear scalarization (see, e.g., [9]), so we only recall Geoffrion's definition: a feasible point x̄ ∈ X is said to be properly efficient if there exists M > 0 such that, for all i = 1,...,p and x̂ ∈ X satisfying f_i(x̂) < f_i(x̄), there exists j ∈ {1,...,p} such that f_j(x̂) > f_j(x̄) and

( f_i(x̄) − f_i(x̂) ) / ( f_j(x̂) − f_j(x̄) ) ≤ M.

We denote by X_pE, X_E, and X_wE the sets of properly efficient points, efficient points, and weakly efficient points of P, respectively. Obviously, X_pE ⊆ X_E ⊆ X_wE, with X = X_wE whenever one component of f is identically zero, and X = X_pE in the trivial case that f is the null function. Moreover, it is known that f(X_pE) is dense in f(X_E) ([17]; see also [9, Theorem 3.17]).

Given a (possibly non-convex) vector SIO problem P: "min" f(x) s.t. x ∈ X, a point x̄ ∈ X is said to be a locally (properly, weakly) efficient solution of P if there exists a neighborhood N of x̄ such that x̄ is a (properly, weakly) efficient solution of

P_N: "min" f(x)  s.t.  x ∈ X ∩ N.

Global and local concepts coincide in convex vector SIO thanks to the convexity of X and the componentwise convexity of f. For instance, if x̄ ∈ X is not weakly efficient, there exists x̂ ∈ X such that f(x̂) < f(x̄); since f(λx̂ + (1−λ)x̄) < f(x̄) for all λ ∈ ]0,1[ (by componentwise convexity), with λx̂ + (1−λ)x̄ ∈ X ∩ N for λ sufficiently small, x̄ cannot be a locally weakly efficient solution of P. The argument is similar for efficient solutions, while the equivalence can easily be proved for properly efficient solutions via scalarization. For this reason, in convex vector SIO, we can characterize (proper, weak) efficiency on the basis of local information. The

known tests for non-linear vector optimization classify a given x̄ ∈ X as a locally (properly, weakly) efficient solution or not through conditions involving subsets of the objective space R^p or suitable scalarizations of P (see, e.g., [2], [9]). In this paper, on convex vector SIO, we give conditions for x̄ ∈ X_pE, x̄ ∈ X_E, and x̄ ∈ X_wE which are expressed in terms of convex cones contained in the decision space R^n, or in terms of the existence of Karush-Kuhn-Tucker (KKT in short) multipliers which can be computed from x̄ and the data describing P. As a general rule, to obtain a checkable necessary optimality condition for a given constrained optimization problem, one needs to assume some property of the constraint system called a constraint qualification (CQ in short). We consider in this paper four CQs which extend those used in our previous paper [12] on constraint qualifications in linear vector SIO. The strongest one is the natural extension of the CQ introduced by M. Slater in a seminal work on scalar optimization published in 1950, which was adapted to linear scalar SIO by Charnes, Cooper and Kortanek in the 1960s. A weaker CQ for convex scalar SIO has been proposed in [23]. The locally Farkas-Minkowski CQ was first defined in [26] for linear scalar SIO, and then extended to convex scalar SIO in [13] and to convex scalar IO in [8]. CQs weaker than the locally Farkas-Minkowski one have been introduced in [23], for convex SIO problems, and in [22], for convex IO problems. The local Slater CQ, introduced in Section 3 of this paper, seems to be new, while the extended Kuhn-Tucker CQ was introduced in [31] for convex IO as an extension of that used by H.W. Kuhn and A.W. Tucker in [19] for ordinary non-linear optimization problems. Section 1 of [22] reviews the state of the art on CQs in convex scalar optimization.
Some of the previous works also deal with the so-called regularity (or closedness qualification) conditions involving the objective function and the constraints (see, e.g., the recent papers [30] and [29], dealing with IO problems with DC objective functions and convex constraints, and references therein). The stability of linear and non-linear scalar SIO has been investigated since the late 1980s from different perspectives, e.g., the pseudo-Lipschitz property and the lower and upper semicontinuity of the efficient set mapping under different types of perturbations, well-posedness, and generic stability (see, e.g., [4], [5], [6], [10], [32], [33]), while the existing literature on optimality conditions for vector SIO and vector IO problems is surprisingly limited. The main antecedent of this paper is [12], on linear vector SIO, which provides characterizations of the weakly efficient, efficient and properly efficient solutions in terms of cones involving the data and KKT conditions. In [3], on a class of vector SIO problems involving differentiable functions whose constraints satisfy certain invex-type conditions and are required to depend continuously on an index t ranging over some compact topological space T, KKT conditions for x̄ ∈ X_pE, x̄ ∈ X_E and x̄ ∈ X_wE are given. In [14], on non-convex differentiable vector SIO, the authors discuss constraint qualifications as well as necessary and sufficient conditions for locally weakly efficient points, and present optimality conditions for properly efficient points in the senses of Geoffrion and of Kuhn and Tucker [19]. Finally, in [7], on non-smooth vector IO problems posed on Asplund spaces whose index set T has no topological structure, necessary conditions as well as sufficient conditions for weakly efficient solutions are obtained by appealing to the machinery of non-smooth analysis and a certain CQ for non-convex systems introduced in [4], which can be seen as an extension of the so-called basic CQs introduced in [21] for scalar IO problems posed in Banach spaces.

The convex vector SIO problems considered in this paper arise in a natural way in robust linear vector optimization. Indeed, consider an uncertain linear vector optimization problem

(LP)  "min" (c_1^T x, ..., c_p^T x)  s.t.  a_t^T x ≥ b_t, t ∈ T,

where T is a finite set, c_i ∈ U_i ⊆ R^n, i = 1,...,p, and (a_t, b_t) ∈ V_t ⊆ R^{n+1}, t ∈ T. The uncertainty sets U_i, i = 1,...,p, are arbitrary non-empty sets, while the sets V_t, t ∈ T, are non-empty and compact. The robust minmax counterpart of (LP) (a term coined in [15]) enforces feasibility under any possible scenario and assumes that the cost of any (robust) feasible decision will be the worst possible, i.e., the problem to be solved is

"min" ( max_{c_1∈U_1} c_1^T x, ..., max_{c_p∈U_p} c_p^T x )  s.t.  a_t^T x ≥ b_t, ∀(a_t, b_t) ∈ V_t, t ∈ T.   (2)

Observe that (2) is of the form (1): just take f_i(x) = max_{c_i∈U_i} c_i^T x (i.e., the support function of U_i), i = 1,...,p, and express the constraints either as b − a^T x ≤ 0 for all (a,b) ∈ ⋃_{t∈T} V_t (a compact index set), or as g_t(x) ≤ 0, with

g_t(x) = max{ b − a^T x : (a,b) ∈ V_t },

for all t ∈ T (a finite index set equipped with the discrete topology).

This paper is organized as follows. Section 2 recalls basic concepts of convex analysis to be used later, applying some of them to characterize the so-called subdifferential cone and its interior, and to describe the relationships between several types of "tangent" cones which are closely related to the negative polar of the active cone. Section 3 extends to convex vector SIO four out of the six constraint qualifications introduced in [12] for linear vector SIO. The two exceptions, the Farkas-Minkowski and the local polyhedral constraint qualifications, have not been considered in this paper as they are too strong in the convex framework. For methodological reasons, we give simple direct proofs of the lemmas in Section 3, even though most of them could also be obtained via linearization. The auxiliary Section 4 establishes different characterizations of the sets X_pE, X_E, and X_wE in terms of the subdifferential cone; these characterizations do not involve constraint qualifications, i.e., they are independent of the given representation of the closed convex feasible set X. Finally, Section 5 combines the results in Sections 3 and 4 to get characterizations of X_pE, X_E, and X_wE in terms of KKT multipliers. Here the proofs are necessarily direct, as the objective functions are not linear. These results are applied to the robust linear vector optimization problem (LP).
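As a small illustration of the support-function objectives f_i(x) = max_{c∈U_i} c^T x appearing in (2), here is a sketch with our own toy data (a finite uncertainty set U_1, which makes the maximum computable exactly; the paper allows arbitrary non-empty U_i):

```python
import numpy as np

def support(U, x):
    # Support function of the finite set U (equivalently, of conv U):
    # a convex, piecewise-linear function of x.
    return max(c @ x for c in U)

# Hypothetical scenarios for the uncertain cost vector c_1.
U1 = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
x = np.array([0.25, 0.75])
print(support(U1, x))  # the scenario (0, 1) attains the maximum here
```

Each robust objective is thus convex even though the underlying data are only a raw list of scenarios, which is what places (2) inside the convex model (1).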


2 Preliminaries

We start this section by introducing the necessary notation and concepts. Given Z ⊆ R^n, int Z, cl Z, and bd Z denote the interior, the closure, and the boundary of Z, respectively. The scalar product of x, y ∈ R^n is denoted by x^T y, the Euclidean norm of x by ‖x‖, the open ball centered at x with radius ε > 0 by B(x, ε), and the zero vector by 0_n. We also denote by conv Z the convex hull of Z, while cone Z := R_+ conv Z denotes the convex conical hull of Z ∪ {0_n}. If Z is a convex cone, its positive (negative) polar cone is Z^+ := { d ∈ R^n : z^T d ≥ 0 ∀z ∈ Z } (Z^− := { d ∈ R^n : z^T d ≤ 0 ∀z ∈ Z }, respectively). A convex cone is said to be pointed whenever it does not contain lines. We frequently use in this paper the topological interiors of polar cones.

We make three claims concerning cone Z, where Z is an arbitrary non-empty subset of R^n. First,

0_n ∉ conv Z  ⇔  { cone Z is pointed and 0_n ∉ Z }.   (3)

We shall prove that 0_n ∈ conv Z if and only if cone Z contains lines or 0_n ∈ Z. If cone Z contains lines, there exists u ∈ R^n \ {0_n} such that ±u ∈ cone Z. Then we can write u = Σ_{i=1}^m λ_i z_i and −u = Σ_{i=1}^m μ_i z_i, with λ_1,...,λ_m, μ_1,...,μ_m ∈ R_+, z_i ∈ Z, i = 1,...,m, m ∈ N, so that

0_n = ( Σ_{i=1}^m (λ_i + μ_i) )^{−1} Σ_{i=1}^m (λ_i + μ_i) z_i ∈ conv Z.

Alternatively, if 0_n ∈ Z, it is obvious that 0_n ∈ conv Z.

Conversely, if 0_n ∈ conv Z, there exist λ_i ∈ R_+ and z_i ∈ Z, i = 1,...,m, such that Σ_{i=1}^m λ_i = 1 and Σ_{i=1}^m λ_i z_i = 0_n. Let j ∈ {1,...,m} be such that λ_j > 0. If z_j ≠ 0_n, putting μ := Σ_{i≠j} λ_i > 0, it follows that

u := −(λ_j/μ) z_j = Σ_{i≠j} (λ_i/μ) z_i ∈ cone Z,

while −u = (λ_j/μ) z_j ∈ cone Z as well, so that cone Z contains the line spanned by u. Now suppose that z_j = 0_n. Then

Σ_{i≠j} λ_i z_i = 0_n.   (4)

If λ_i = 0 for all i ≠ j, then 0_n = z_j ∈ Z. Otherwise, there exists k ≠ j such that λ_k > 0 in (4), and we repeat the argument above.

Second,

0_n ∉ Z  ⇒  int (cone Z)^+ ⊆ { d ∈ R^n : z^T d > 0 ∀z ∈ Z }.   (5)

In fact, assume that 0_n ∉ Z and d ∈ int (cone Z)^+. Let ε > 0 be such that cl B(d, ε) ⊆ (cone Z)^+. Given z ∈ Z, one has d − ε z/‖z‖ ∈ (cone Z)^+ while z ∈ cone Z, so that z^T ( d − ε z/‖z‖ ) = z^T d − ε‖z‖ ≥ 0, and so z^T d ≥ ε‖z‖ > 0. Obviously, if Z ∩ (−Z) ≠ ∅, then both members of the inclusion in (5) are empty.


Third,

Z compact  ⇒  { d ∈ R^n : z^T d > 0 ∀z ∈ Z } ⊆ int (cone Z)^+.   (6)

In fact, by assumption, there exists ρ > 0 such that ‖z‖ ≤ ρ for all z ∈ Z. Let d ∈ R^n be such that z^T d > 0 for all z ∈ Z. By the compactness of Z, ε := min_{z∈Z} z^T d > 0. Given x ∈ cone Z, we can write x = Σ_{i=1}^m λ_i z_i, with λ_i ≥ 0 and z_i ∈ Z, i = 1,...,m. Then, given u such that ‖u‖ ≤ 1, one has

x^T ( d + (ε/ρ) u ) = Σ_{i=1}^m λ_i z_i^T ( d + (ε/ρ) u ) ≥ Σ_{i=1}^m λ_i ( ε − (ε/ρ) ‖z_i‖ ) ≥ 0.

Thus, B( d, ε/ρ ) ⊆ (cone Z)^+ and d ∈ int (cone Z)^+. The inclusion in (6) becomes an equation between non-empty sets whenever Z is compact and 0_n ∉ conv Z (as cone Z turns out to be a pointed cone).

The one-sided directional derivative of a real-valued function h : R^n → R at x ∈ R^n with respect to a vector d ∈ R^n is defined to be the limit

h'(x; d) = lim_{ε↓0} ( h(x + εd) − h(x) )/ε,

if it exists. If h is convex, then it is continuous, the directional derivative function at x ∈ R^n, h'(x; ·), is a finite convex function too, and the subdifferential

∂h(x) := { ξ ∈ R^n : h(y) ≥ h(x) + ξ^T (y − x) ∀y ∈ R^n }

is a non-empty compact convex set such that

h'(x; d) = max_{ξ∈∂h(x)} ξ^T d   (7)

(see, e.g., [27, Theorems 23.1 and 23.4]). From (7) one easily gets

[cone ∂h(x)]^− = { d ∈ R^n : h'(x; d) ≤ 0 }.   (8)

On the other hand, if d is a descent direction of h at x, there exists δ > 0 such that

h(x + εd) − h(x) < 0 for all ε ∈ ]0, δ[,   (9)

and, so,

max_{ξ∈∂h(x)} ξ^T d = h'(x; d) = lim_{ε↓0} ( h(x + εd) − h(x) )/ε ≤ 0,   (10)

with 0_n ∉ ∂h(x) and ∂h(x) compact. Then h'(x; d) < 0 (otherwise, by (10), there would exist ξ̃ ∈ ∂h(x), ξ̃ ≠ 0_n, such that ξ̃^T d = 0 and h(x + εd) − h(x) ≥ ε ξ̃^T d = 0 for all ε > 0, in contradiction with (9)).

It is easy to prove that, under the assumptions on P,

X = { x ∈ R^n : ξ^T x ≤ ξ^T y − g_t(y), ∀(t, y) ∈ T × R^n, ∀ξ ∈ ∂g_t(y) }.   (11)
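Formula (7) can be checked numerically on a simple non-smooth convex function. The data below are our own illustration (h is the maximum of two affine pieces, both active at the origin, so that ∂h(0_2) = conv{a_1, a_2}); a forward difference quotient should match max_{ξ∈∂h(x)} ξ^T d:

```python
import numpy as np

a1, a2 = np.array([1.0, -1.0]), np.array([-1.0, 2.0])  # assumed data

def h(x):
    # Convex piecewise-linear function; both pieces vanish at the origin.
    return max(a1 @ x, a2 @ x)

def dir_deriv(x, d, eps=1e-8):
    # One-sided directional derivative via a forward difference quotient.
    return (h(x + eps * d) - h(x)) / eps

x_bar = np.zeros(2)          # subdifferential at x_bar is conv{a1, a2}
d = np.array([0.5, 1.0])
exact = max(a1 @ d, a2 @ d)  # max of xi^T d over the extreme points of the subdifferential
print(dir_deriv(x_bar, d), exact)
```

Because h is positively homogeneous at 0_2, the quotient is exact for every step size, so the two printed numbers agree up to rounding.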

From (11) and [13, Theorem 9.3] one gets that X is compact if and only if

cone( ⋃_{(t,y)∈T×R^n} ∂g_t(y) ) = R^n.

This condition (expressed in terms of the data) guarantees the compactness of f(X). More information on X can be obtained from the linearization (11) of X under the constraint qualifications introduced in Section 3.

Two convex cones involving the data of the vector SIO problem P in (1) are basic in our approach: the convex conical hull of the subdifferentials at x̄ ∈ X of the components of f,

G(x̄) := cone( ⋃_{i=1}^p ∂f_i(x̄) ),

which we shall call the subdifferential cone at x̄, and the active cone at x̄ ∈ X,

A(x̄) := cone( ⋃_{t∈T(x̄)} ∂g_t(x̄) ),

where T(x̄) := { t ∈ T : g_t(x̄) = 0 } is the set of active indices at x̄. We are interested in the negative polars of both cones, G(x̄)^− and A(x̄)^−, and in their corresponding interiors, int G(x̄)^− and int A(x̄)^−.

Lemma 1 Given x̄ ∈ X, the following statements hold:
(i) 0_n ∉ conv( ⋃_{i=1}^p ∂f_i(x̄) ) if and only if G(x̄) is pointed and 0_n ∉ ⋃_{i=1}^p ∂f_i(x̄).
(ii) G(x̄)^− = { d ∈ R^n : f_i'(x̄; d) ≤ 0, i = 1,...,p }.
(iii) If 0_n ∉ ⋃_{i=1}^p ∂f_i(x̄), then

int G(x̄)^− = { d ∈ R^n : ξ^T d < 0 ∀ξ ∈ ⋃_{i=1}^p ∂f_i(x̄) } = { d ∈ R^n : f_i'(x̄; d) < 0, i = 1,...,p }.   (12)

Proof: Statement (i) follows from (3) applied to Z = ⋃_{i=1}^p ∂f_i(x̄), and statement (ii) follows from (8). Concerning (iii), given d, one has that ξ^T d < 0 for all ξ ∈ ∂f_i(x̄) if and only if f_i'(x̄; d) < 0, i = 1,...,p. Assuming that 0_n ∉ ⋃_{i=1}^p ∂f_i(x̄), from (5) and (6) one gets (12). The additional condition that 0_n ∉ conv( ⋃_{i=1}^p ∂f_i(x̄) ) guarantees, by (i), that G(x̄) is pointed, which in turn implies that int G(x̄)^− ≠ ∅.

The KKT conditions will be obtained by analyzing the relationships between the negative polar of the active cone, A(x̄)^−, and four "tangent" cones at x̄ defined as follows. The cone of feasible directions at x̄ is

D(X; x̄) = { d ∈ R^n : ∃λ > 0 such that x̄ + λd ∈ X }.

It is known that D(X; x̄) ⊆ A(x̄)^− ([13, Lemma 7.7]). The attainable cone at x̄, denoted by A(X; x̄), is formed by those d ∈ R^n such that there exist δ > 0 and a vector function h ∈ C^1([0, δ[, R^n) with h(0) = x̄, h'(0) = d, and h(s) ∈ X for all s ∈ [0, δ[. The Bouligand tangent cone at x̄, denoted by T(X; x̄), is formed by those d ∈ R^n for which there exist sequences (s_k)_{k∈N} and (d_k)_{k∈N} such that s_k ↓ 0, d_k → d as k → ∞, and x̄ + s_k d_k ∈ X for all k ∈ N; in that case, since d_k ∈ D(X; x̄) for all k ∈ N, d ∈ cl D(X; x̄). The interior tangent cone at x̄, denoted by T^i(X; x̄), is formed by those d ∈ R^n such that there exist δ > 0 and a neighborhood N of d such that x̄ + sN ⊆ X for all s ∈ ]0, δ[.

Lemma 2 ([2], [16], [20], [25], [27], [34]) Given x̄ ∈ X, the cones T^i(X; x̄), D(X; x̄), A(X; x̄), and T(X; x̄) are all convex and satisfy

T^i(X; x̄) = int D(X; x̄) ⊆ D(X; x̄) ⊆ A(X; x̄) ⊆ T(X; x̄) = cl D(X; x̄).   (14)

Example 3 Consider the closed convex set

X = { (x_1, x_2) ∈ R^2 : t(x_1 − 2)^2 − t − x_2 ≤ 0 ∀t ∈ [0,1] }

and the point x̄ = (1, 0) ∈ X. Since

φ(x) = −x_2, if 1 ≤ x_1 ≤ 3;  φ(x) = x_1^2 − 4x_1 + 3 − x_2, otherwise,

one has

X = { x ∈ R^2 : φ(x) ≤ 0 } = { x ∈ R^2 : x_2 ≥ max{ x_1^2 − 4x_1 + 3, 0 } },

so that

D(X; x̄) = { d ∈ R^2 : 2d_1 + d_2 > 0, d_2 ≥ 0 } ∪ {0_2},

A(X; x̄) = T(X; x̄) = { d ∈ R^2 : 2d_1 + d_2 ≥ 0, d_2 ≥ 0 } = cl D(X; x̄),

and

T^i(X; x̄) = { d ∈ R^2 : 2d_1 + d_2 > 0, d_2 > 0 } = int D(X; x̄).

Hence, the inclusions in (14) are strict. Observe that A(x̄) = cone{ (−2t, −1) : t ∈ [0,1] } = cone{ (0,−1), (−2,−1) } is the negative polar of any of the cones considered in Lemma 2.

Lemma 4 Given x̄ ∈ X,

A(x̄)^− = { d ∈ R^n : φ'(x̄; d) ≤ 0 }

and

int A(x̄)^− = { d ∈ R^n : φ'(x̄; d) < 0 }.

Proof: One has

φ'(x̄; d) = max_{t∈T(x̄)} max_{ξ∈∂g_t(x̄)} ξ^T d = max{ ξ^T d : ξ ∈ ⋃_{t∈T(x̄)} ∂g_t(x̄) }.   (15)

From (15), we have φ'(x̄; d) ≤ 0 if and only if ξ^T d ≤ 0 for all ξ ∈ ⋃_{t∈T(x̄)} ∂g_t(x̄), if and only if d ∈ A(x̄)^−. Similarly, by the compactness of ⋃_{t∈T(x̄)} ∂g_t(x̄), φ'(x̄; d) < 0 if and only if ξ^T d < 0 for all ξ ∈ ⋃_{t∈T(x̄)} ∂g_t(x̄), if and only if d ∈ int A(x̄)^−.
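Lemma 4 can be illustrated numerically on the data of Example 3. The sketch below (a grid over T, with a forward difference approximating φ'(x̄; d)) compares the sign of the directional derivative at x̄ = (1, 0) with membership in the interior of A(x̄)^− = {d : 2d_1 + d_2 ≥ 0, d_2 ≥ 0}:

```python
import numpy as np

T = np.linspace(0.0, 1.0, 201)

def phi(x):
    # Marginal function of Example 3: phi(x) = max_t [t*(x1-2)^2 - t - x2].
    return max(t * (x[0] - 2.0) ** 2 - t - x[1] for t in T)

def phi_dd(x, d, eps=1e-7):
    # Forward difference quotient approximating phi'(x; d).
    return (phi(x + eps * np.asarray(d)) - phi(x)) / eps

x_bar = np.array([1.0, 0.0])
for d in ([1.0, 1.0], [1.0, 0.0], [-1.0, 0.5]):
    interior = 2 * d[0] + d[1] > 0 and d[1] > 0
    print(d, phi_dd(x_bar, d), interior)
```

The first direction lies in int A(x̄)^− and gives a negative quotient, the second sits on the boundary (quotient approximately zero), and the third lies outside A(x̄)^− (positive quotient), matching Lemma 4.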

3 Constraint qualifications

Next we introduce four constraint qualifications which are frequently encountered in the SIO literature or are inspired by classical constraint qualifications of non-linear optimization. When the constraint functions are affine, these constraint qualifications collapse to those introduced, under similar names, in [12] (for linear vector SIO). Even more, the CQs below hold for the convex system { g_t(x) ≤ 0, t ∈ T } if and only if the corresponding linear versions hold for the linear system in (11).

Definition 5 We say that P satisfies the Slater constraint qualification (SCQ) if there is a Slater point x^0, i.e., g_t(x^0) < 0 for all t ∈ T.

In other words, SCQ holds if and only if the marginal function φ takes a negative value at some point (observe that the marginal function of the linear system in (11) is also φ). By the continuity of φ, SCQ implies that int X ≠ ∅, but the converse is not true. It is worth noting that, in contrast with the other three CQs to be introduced next, SCQ is not associated with a given feasible solution.
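Numerically, SCQ amounts to finding a point where the marginal function is negative. A sketch on the data of Example 3 (where, e.g., x^0 = (2, 1) gives g_t(x^0) = −t − 1 < 0 for all t, while the boundary point (1, 0) is active for every t):

```python
import numpy as np

T = np.linspace(0.0, 1.0, 201)

def g(t, x):
    # Constraint family of Example 3.
    return t * (x[0] - 2.0) ** 2 - t - x[1]

def is_slater_point(x0):
    # x0 is a Slater point iff the marginal function is negative at x0.
    return max(g(t, x0) for t in T) < 0

print(is_slater_point(np.array([2.0, 1.0])))  # True: g_t(x0) = -t - 1
print(is_slater_point(np.array([1.0, 0.0])))  # False: g_t = 0 for all t
```

A grid over T only approximates the supremum in general; here the maximum over t is attained at an endpoint of [0, 1], so the grid check is exact.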

Definition 6 We say that the locally Farkas-Minkowski constraint qualification (LFMCQ) holds at x̄ ∈ X if A(x̄) = D(X; x̄)^−.

Obviously, if LFMCQ holds at x̄ ∈ X, then A(x̄) is closed. If x̄ ∈ int X, then A(x̄) = D(X; x̄)^− = {0_n}. Therefore LFMCQ should be investigated only at boundary feasible points. Moreover, if this property holds and x̄ ∈ bd X, we have A(x̄) ≠ {0_n}, i.e., there are binding constraints at x̄.

Lemma 7 ([13, Theorem 7.9]) SCQ implies LFMCQ at any feasible solution.

Consequently, if P satisfies SCQ, then bd X = { x ∈ X : A(x) ≠ {0_n} }. Moreover, geometric information on X in terms of the data can be obtained by combining (11) and [13, Theorem 5.9].

Definition 8 We say that P satisfies the local Slater constraint qualification (LSCQ) at x̄ ∈ X when either T(x̄) = ∅ or there exists a vector d ∈ R^n satisfying

d^T ξ < 0 for all ξ ∈ ⋃_{t∈T(x̄)} ∂g_t(x̄).   (16)

Proposition 9 LSCQ holds at x̄ ∈ X if and only if

T(x̄) ≠ ∅  ⇒  0_n ∉ conv( ⋃_{t∈T(x̄)} ∂g_t(x̄) ).

Proof: We can assume that T(x̄) ≠ ∅. The direct statement is obvious, while the converse statement is a consequence of the assumptions on P. In fact, the continuity of t ↦ g_t(x̄) on the compact set T entails that T(x̄) is a compact set, as well as ⋃_{t∈T(x̄)} ∂g_t(x̄) (see, e.g., [18, Theorem 4.4.2]). Due to the compactness of conv( ⋃_{t∈T(x̄)} ∂g_t(x̄) ) and the separation theorem, the condition 0_n ∉ conv( ⋃_{t∈T(x̄)} ∂g_t(x̄) ) guarantees the fulfilment of LSCQ at x̄.

Corollary 10 If P satisfies LSCQ at x̄ ∈ X, then A(x̄) is a pointed closed cone.

Proof: We can assume T(x̄) ≠ ∅ (otherwise A(x̄) = {0_n} is closed). By Proposition 9, since conv( ⋃_{t∈T(x̄)} ∂g_t(x̄) ) is a compact convex set which does not contain the origin, A(x̄) = cone( ⋃_{t∈T(x̄)} ∂g_t(x̄) ) is a pointed closed cone.

The following example shows that the converse statement of Corollary 10 does not hold.

Example 11 Let n = 2 and g_t(x) = (1−t)|x_1 − 1| + |x_2| − 1 + t for all t ∈ T = [0,1]. Then it is easy to see that

X = { x ∈ R^2 : g_t(x) ≤ 0, t = 0, 1 } = [0, 2] × {0}.

We have T(0_2) = [0,1], with

∂g_t(0_2) = conv{ (t−1, −1), (t−1, 1) } = {t−1} × [−1, 1], t ∈ [0,1].

Thus, ⋃_{t∈T(0_2)} ∂g_t(0_2) = [−1, 0] × [−1, 1] and A(0_2) = cone( ⋃_{t∈T(0_2)} ∂g_t(0_2) ) = R_− × R is closed. Finally, as (0, ±1) ∈ ∂g_1(0_2), so that 0_2 ∈ conv( ⋃_{t∈T(0_2)} ∂g_t(0_2) ), LSCQ fails at 0_2.
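A grid search illustrating the failure of LSCQ in Example 11 at 0_2: no direction d makes ξ^T d negative for every ξ in ⋃_t ∂g_t(0_2) = [−1, 0] × [−1, 1] (it suffices to test the extreme points of that box):

```python
import numpy as np

# Extreme points (t-1, ±1), t in {0, 1}, of the union of subdifferentials.
xis = np.array([[-1.0, -1.0], [-1.0, 1.0], [0.0, -1.0], [0.0, 1.0]])

# Search over a grid of unit directions for the LSCQ vector d of (16).
angles = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
best = min((xis @ d).max() for d in dirs)
print(best)  # (0,1) and (0,-1) force d2 < 0 and d2 > 0 at once, so best >= 0
```

The minimum is attained at d = (1, 0), where the worst inner product is exactly zero; since it never becomes negative, (16) has no solution, in agreement with Proposition 9.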

Definition 12 We say that P satisfies the extended Kuhn-Tucker CQ (EKTCQ) at x̄ ∈ X when

{ d ∈ R^n : φ'(x̄; d) ≤ 0 } ⊆ A(X; x̄).   (17)

As a consequence of Ioffe and Tikhomirov's theorem on the subdifferential of the supremum function (see, e.g., [34, Theorem 2.4.18] and [16, Proposition 6.3]), when x̄ ∈ bd X it holds that

φ'(x̄; d) = max_{t∈T(x̄)} g_t'(x̄; d).   (18)

The next lemma provides useful approximations of the tangent cones to X at x̄ ∈ bd X in terms of the directional derivative function φ'(x̄; ·).

Lemma 13 Let x̄ ∈ bd X. Then,

{ d ∈ R^n : φ'(x̄; d) < 0 } ⊆ T^i(X; x̄) ⊆ T(X; x̄) ⊆ { d ∈ R^n : φ'(x̄; d) ≤ 0 }.

Proof: We first show that { d ∈ R^n : φ'(x̄; d) < 0 } ⊆ T^i(X; x̄). Let d ∈ R^n be such that φ'(x̄; d) < 0. Then there exists δ > 0 such that φ(x̄ + sd) < 0 for all s ∈ ]0, δ[. By the continuity of φ, there exist t_0 > 0 and an open neighborhood N of d such that φ(x̄ + td̄) < 0 for all t ∈ ]0, t_0[ and all d̄ ∈ N. That is,

d ∈ T^i({ x ∈ R^n : φ(x) < 0 }, x̄) ⊆ T^i({ x ∈ R^n : φ(x) ≤ 0 }, x̄) = T^i(X; x̄).

The inclusion T^i(X; x̄) ⊆ T(X; x̄) is well known (see, e.g., [28]).

Finally, consider d ∈ T(X; x̄). Then there exist sequences (s_k)_{k∈N} and (d_k)_{k∈N} such that s_k ↓ 0, d_k → d as k → ∞, and φ(x̄ + s_k d_k) ≤ 0 for all k ∈ N. Since φ(x̄) = 0, one has

φ(x̄ + s_k d_k)/s_k = ( φ(x̄ + s_k d_k) − φ(x̄) )/s_k ≤ 0  ∀k ∈ N.

Now, taking limits as k → ∞, we conclude that

φ(x̄ + s_k d_k)/s_k → φ'(x̄; d) ≤ 0.

Theorem 14 The following statements are true:
(i) SCQ implies LSCQ at any x̄ ∈ X.
(ii) If LSCQ holds at x̄ ∈ X and T(x̄) is a set of isolated points of T, then SCQ holds.
(iii) If LSCQ holds at x̄ ∈ X, then LFMCQ holds at x̄.
(iv) If LSCQ holds at x̄ ∈ X, then EKTCQ holds at x̄.

Proof: (i) Let x^0 be a Slater point and x̄ ∈ X, and let d := x^0 − x̄. For any t ∈ T(x̄) and ξ ∈ ∂g_t(x̄), since

0 > g_t(x^0) ≥ g_t(x̄) + ξ^T (x^0 − x̄) = ξ^T d,

d satisfies (16).

(ii) We shall prove that, under the assumption (equivalent to asserting that T(x̄) is finite and T \ T(x̄) is compact), there exists a Slater point on the half-line emanating from x̄ in some direction d satisfying (16). In fact, given t ∈ T(x̄), as g_t'(x̄; d) = max_{ξ∈∂g_t(x̄)} ξ^T d < 0, there exists ε_t > 0 with g_t(x̄ + μd) < 0 for any μ ∈ ]0, ε_t[. On the other hand, by the continuity of the function max_{t∈T\T(x̄)} g_t, there exists a neighborhood N of x̄ where max_{t∈T\T(x̄)} g_t is negative. Taking a sufficiently small μ_0 > 0, we get g_t(x̄ + μ_0 d) < 0 for all t ∈ T(x̄) and x̄ + μ_0 d ∈ N. So, x̄ + μ_0 d is a Slater point.

(iii) By a well-known result (see, e.g., [24, Proposition 5]), if φ'(x̄; d) < 0, then d ∈ D(X; x̄). This, combined with Lemma 4, yields

int A(x̄)^− ⊆ D(X; x̄),   (19)

where int A(x̄)^− ≠ ∅ by Corollary 10, as the negative polar of a pointed closed convex cone has non-empty interior. Taking negative polars in both members of (19) one gets, by the Farkas lemma for cones,

D(X; x̄)^− ⊆ ( int A(x̄)^− )^− = A(x̄)^{−−} = cl A(x̄) = A(x̄).

We now prove the reverse inclusion by contradiction. Suppose that there exists ξ ∈ A(x̄) \ D(X; x̄)^−. Then there exists d ∈ D(X; x̄) such that ξ^T d > 0, with ξ = Σ_{i=1}^m λ_i ξ_i, λ_i ≥ 0, ξ_i ∈ ∂g_{t_i}(x̄), t_i ∈ T(x̄), i = 1,...,m. Let i_0 ∈ {1,...,m} be such that ξ_{i_0}^T d > 0. This means that, for any ε > 0, we have

g_{t_{i_0}}(x̄ + εd) = g_{t_{i_0}}(x̄ + εd) − g_{t_{i_0}}(x̄) ≥ ε ξ_{i_0}^T d > 0,

so that d ∉ D(X; x̄) (contradiction). Thus, A(x̄) = D(X; x̄)^−.

(iv) If x̄ ∈ int X, then A(X; x̄) = R^n and (17) holds trivially. Thus we can assume without loss of generality (w.l.o.g. in short) that x̄ ∈ bd X. Let d̄ ∈ R^n satisfy (16). By (18) and (15), we have

φ'(x̄; d̄) = max{ ξ^T d̄ : ξ ∈ ⋃_{t∈T(x̄)} ∂g_t(x̄) } < 0.

Since { d ∈ R^n : φ'(x̄; d) < 0 } ≠ ∅ and φ'(x̄; ·) is a finite-valued convex function ([27, Theorem 23.4]), we get

cl{ d ∈ R^n : φ'(x̄; d) < 0 } = { d ∈ R^n : φ'(x̄; d) ≤ 0 }.

Then, by Lemma 13 and Lemma 2,

{ d ∈ R^n : φ'(x̄; d) ≤ 0 } = T(X; x̄) = A(X; x̄),

and so EKTCQ holds at x̄.

Observe that the assumptions on T(x̄) and T \ T(x̄) in Theorem 14(ii) are not superfluous (see [12, Example 4]) and imply the non-connectedness of T. In the particular case where T is finite, SCQ and LSCQ are equivalent. Notice also that Lemma 7 follows straightforwardly from statements (i) and (iii) of Theorem 14.

The next example shows that LFMCQ does not imply LSCQ (and consequently does not imply SCQ).

and so EKTCQ holds at x: Observe that the assumptions on T (x) and T T (x) in Theorem 14(ii) are not super‡uous (see [12, Example 4]) and imply the non-connectedness of T: In the particular case that T is …nite, SCQ and LSCQ are equivalent. Notice also that Lemma 7 follows straighforwardly from statements (i) and (iii) of Theorem 14. The next example shows that LFMCQ does not imply LSCQ (consequently does not imply SCQ). Example 15 Let n = 2 and gt (x) = kxk t for all t 2 T = [0; 1] : It follows that X = f02 g and T (02 ) = f0g : We have D(X; 02 )+ = R2 , @g0 (02 ) = cl B (02 ; 1) ; A (x) = cone @g0 (02 ) = R2 = D(X; 02 ) but (16) fails. So, LFMCQ holds at 02 while LSCQ fails. Example 16 ([12, Example 25]) Consider the linear vector SIO problem P : " min " f (x) = (x1 q s.t. tx1 + 1

x2 ; x2 ) (t

2

1)

x2

0; t 2 [0; 2];

whose feasible set is X = R2 : It follows that EKTCQ holds at 02 (see [12, Example 25]) and A (02 ) = (R++ R+ ) [ f02 g is non-closed. Thus, EKTCQ does not imply LFMCQ.


The following example shows that the converse statement of Theorem 14(iv) does not hold.

Example 17 Consider Example 11, for which LSCQ is not fulfilled at 0_2. We have

φ'(0_2; d) ≤ 0 ⇔ g_t'(0_2; d) ≤ 0 ∀t ∈ [0,1]
⇔ (t−1)d_1 + d_2 ≤ 0 and (t−1)d_1 − d_2 ≤ 0 ∀t ∈ [0,1]
⇔ d_1 ≥ 0, d_2 = 0.

Let d ∈ R^2, d ≠ 0_2, be such that φ'(0_2; d) ≤ 0. Then the vector function h(s) := (sd_1, sd_2) = (sd_1, 0) lies in X for all s ∈ [0, 2/d_1] and satisfies h(0) = 0_2 and h'(0) = d (while d = 0_2 belongs to A(X; 0_2) trivially). That is, d ∈ A(X; 0_2). So, EKTCQ holds at 0_2.

Lemma 18 Let x̄ ∈ bd X. Then P satisfies EKTCQ at x̄ if and only if

T(X; x̄) = { d ∈ R^n : φ'(x̄; d) ≤ 0 }.   (20)

Proof: Assume that P satisfies EKTCQ at x̄, i.e.,

{ d ∈ R^n : φ'(x̄; d) ≤ 0 } ⊆ A(X; x̄) ⊆ T(X; x̄).   (21)

By (21) and Lemma 13, (20) holds. The converse statement is trivial.

The four constraint qualifications introduced above fail in the next example.

Example 19 Consider the following set:

X = { x ∈ R^2 : g_t(x_1, x_2) = t x_1^2 − t(1−t) + (1−t)(x_2^2 + x_2) ≤ 0, t ∈ [0,1] }.

Since

g_0(x_1, x_2) = x_2^2 + x_2 ≤ 0 ⇒ x_2 ∈ [−1, 0]

and

g_1(x_1, x_2) = x_1^2 ≤ 0 ⇒ x_1 = 0,

one gets X = {0} × [−1, 0]. As T(0_2) = {0, 1},

0_2 ∈ ⋃_{t∈T(0_2)} ∂g_t(0_2) = { (0,0), (0,1) },

and so LSCQ fails at 0_2 (SCQ fails as well, since int X = ∅). We also have A(0_2) = cone{(0,1)} and D(X; 0_2) = cone{(0,−1)}, so that A(0_2) ⫋ D(X; 0_2)^−, which implies the failure of LFMCQ. Since g_0'(0_2; d) = d_2 and g_1'(0_2; d) = 0, { d ∈ R^2 : φ'(0_2; d) ≤ 0 } = R × R_−. On the other hand, A(X; 0_2) = {0} × R_−. So, EKTCQ fails to hold at 0_2.
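The failure of EKTCQ in Example 19 can be seen numerically (a sketch over a grid of T, assuming the constraint family g_t(x) = t x_1^2 − t(1−t) + (1−t)(x_2^2 + x_2) as reconstructed above): the direction d = (1, 0) passes the directional-derivative test of (17), yet no point of the ray 0_2 + sd is feasible.

```python
import numpy as np

T = np.linspace(0.0, 1.0, 201)

def g(t, x):
    return t * x[0] ** 2 - t * (1.0 - t) + (1.0 - t) * (x[1] ** 2 + x[1])

def phi(x):
    # Marginal function; phi(0_2) = max_t [-t(1-t)] = 0.
    return max(g(t, x) for t in T)

eps = 1e-7
d = np.array([1.0, 0.0])
quot = (phi(eps * d) - phi(np.zeros(2))) / eps  # approximates phi'(0_2; d) <= 0
infeasible = all(phi(s * d) > 0 for s in np.linspace(1e-3, 1.0, 50))
print(quot, infeasible)  # tiny quotient, yet every point on the ray is infeasible
```

Along the ray, φ(s, 0) = s^2 > 0 for every s > 0, so d cannot belong to A(X; 0_2) although φ'(0_2; d) = 0.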

4 Cone conditions

It is well known (see, e.g., [9, Theorem 3.21 and Corollary 3.23]) that, if T is finite, f_i, i = 1,...,p, and g_t, t ∈ T, are convex differentiable functions, x̄ ∈ X, and 0_n ∉ conv{ ∇f_1(x̄), ..., ∇f_p(x̄) }, then x̄ is a weakly efficient solution of the (ordinary) convex vector optimization problem P if and only if there exist t_j ∈ T(x̄), j = 1,...,q, as well as non-negative scalars λ_1,...,λ_p, μ_1,...,μ_q, satisfying

Σ_{i=1}^p λ_i ∇f_i(x̄) = − Σ_{j=1}^q μ_j ∇g_{t_j}(x̄) ≠ 0_n.   (22)
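A minimal numerical instance of the KKT condition (22), with assumed data not taken from the paper (p = 2, a single affine constraint g(x) = x_1 ≤ 0, and the boundary point x̄ = 0_2, where λ = (1, 0) and μ = 2 work):

```python
import numpy as np

# Assumed data: f1(x) = (x1-1)^2 + x2^2, f2(x) = (x1-2)^2 + x2^2, g(x) = x1.
x_bar = np.zeros(2)
grad_f1 = np.array([2 * (x_bar[0] - 1.0), 2 * x_bar[1]])  # (-2, 0)
grad_f2 = np.array([2 * (x_bar[0] - 2.0), 2 * x_bar[1]])  # (-4, 0)
grad_g = np.array([1.0, 0.0])                             # active at x_bar

lam = np.array([1.0, 0.0])  # non-negative multipliers for the objectives
mu = 2.0                    # non-negative multiplier for the active constraint

lhs = lam[0] * grad_f1 + lam[1] * grad_f2
rhs = -mu * grad_g
print(lhs, rhs)  # (22): sum_i lam_i grad f_i = -sum_j mu_j grad g_tj != 0_n
```

Here 0_2 ∉ conv{∇f_1(x̄), ∇f_2(x̄)} = conv{(−2,0), (−4,0)}, so the theorem certifies x̄ = 0_2 as weakly efficient for this toy problem.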

In geometric terms, the KKT condition (22) asserts that G(x̄) ∩ (−A(x̄)) ≠ {0_n} (it is sufficient to take q ≤ n by Carathéodory's theorem applied to the convex cone A(x̄)). In this section we give similar conditions for convex vector SIO problems and different types of efficiency. The characterizations of efficient and weakly efficient solutions in this section extend similar results on linear vector SIO in [12] to convex vector SIO. We start with two sufficient conditions for x̄ ∈ X to be an efficient or a weakly efficient solution, independently of the constraints, under assumptions which already appeared in Lemma 1.

Proposition 20 If x̄ ∈ X, then the following statements are true:
(i) If 0_n ∈ conv( ⋃_{i=1}^p ∂f_i(x̄) ), then x̄ ∈ X_wE.
(ii) If 0_n ∈ ⋃_{i=1}^p ∂f_i(x̄) and the components of f are strictly convex, then x̄ ∈ X_E.

Proof: (i) Let 0_n = Σ_{i=1}^p λ_i ξ_i, with ξ_i ∈ ∂f_i(x̄), λ_i ≥ 0, i = 1,...,p, and Σ_{i=1}^p λ_i = 1. Let λ_j > 0, j ∈ {1,...,p}. Then

λ_j ξ_j = − Σ_{i≠j} λ_i ξ_i.

We can assume w.l.o.g. the existence of i ≠ j such that λ_i > 0 (otherwise, ξ_j = 0_n and x̄ ∈ X_wE because it is a minimizer of f_j on R^n, and so on X). If we suppose that there exists x̂ ∈ X such that f(x̂) < f(x̄), then, for every i = 1,...,p and ξ_i ∈ ∂f_i(x̄),

0 > f_i(x̂) − f_i(x̄) ≥ ξ_i^T (x̂ − x̄).

Thus,

0 > ξ_j^T (x̂ − x̄) = − Σ_{i≠j} (λ_i/λ_j) ξ_i^T (x̂ − x̄) > 0,

which is a contradiction, whereby x̄ ∈ X_wE.

(ii) If 0_n ∈ ⋃_{i=1}^p ∂f_i(x̄) and all the objective functions are strictly convex, then x̄ is the unique minimizer of at least one of the objective functions on R^n (and so on X). Then there is no x̂ ∈ X such that f(x̂) ≤ f(x̄), i.e., x̄ ∈ X_E.

We have stated and proved here Proposition 20(i) in order to motivate the assumption of Proposition 26. A stronger result can be shown easily via scalarization (see the forthcoming Theorem 25(ii)).

Theorem 21 Given x̄ ∈ X, the following statements hold:
(i) If D(X; x̄) ∩ G(x̄)^− ⊆ { d ∈ R^n : f_i'(x̄; d) = 0, i = 1,...,p }, then x̄ ∈ X_E.
(ii) If 0_n ∉ ⋃_{i=1}^p ∂f_i(x̄), then x̄ ∈ X_wE if and only if D(X; x̄) ∩ int G(x̄)^− = ∅.


Proof: (i) Denote F(x̄) := { d ∈ R^n : f_i'(x̄; d) = 0, i = 1,...,p }. Suppose that x̄ ∉ X_E. Then there exist d ∈ D(X; x̄), i_0 ∈ {1,...,p}, and δ > 0 such that

f_i(x̄ + εd) ≤ f_i(x̄), i = 1,...,p,   (23)

with

f_{i_0}(x̄ + εd) < f_{i_0}(x̄)   (24)

for all ε ∈ ]0, δ[. From (23) it follows that f_i'(x̄; d) ≤ 0, i = 1,...,p. Thus, by Lemma 1(ii), d ∈ D(X; x̄) ∩ G(x̄)^−. But from (24) we get ε ξ_{i_0}^T d ≤ f_{i_0}(x̄ + εd) − f_{i_0}(x̄) < 0 for all ξ_{i_0} ∈ ∂f_{i_0}(x̄). That is, f_{i_0}'(x̄; d) = max_{ξ_{i_0}∈∂f_{i_0}(x̄)} ξ_{i_0}^T d < 0, so that d ∉ F(x̄).

(ii) If 0_n ∈ conv( ⋃_{i=1}^p ∂f_i(x̄) ), then x̄ ∈ X_wE by Proposition 20. So we can assume that 0_n ∉ conv( ⋃_{i=1}^p ∂f_i(x̄) ), in which case 0_n ∉ ⋃_{i=1}^p ∂f_i(x̄). Let d ∈ D(X; x̄) ∩ int G(x̄)^−. Then, by Lemma 1(iii), f_i'(x̄; d) < 0, i = 1,...,p. That is, d is a feasible descent direction for each f_i at x̄. So, x̄ ∉ X_wE. Now, suppose that x̄ ∉ X_wE, i.e., there exists x̂ ∈ X such that f_i(x̂) < f_i(x̄), i = 1,...,p. Let d = x̂ − x̄ ∈ D(X; x̄). Since d is a feasible descent direction for each f_i at x̄, we have f_i'(x̄; d) = max_{ξ_i∈∂f_i(x̄)} ξ_i^T d < 0. Thus, d ∈ int G(x̄)^− by Lemma 1(iii), and so D(X; x̄) ∩ int G(x̄)^− ≠ ∅, which completes the proof.

The following example shows that x̄ can be an efficient solution while there exists d ∈ D(X; x̄) ∩ G(x̄)^− such that d ∉ F(x̄).

Example 22 Let n = 2, p = 2, f_1(x) = (x_1 − 1)^2 + (x_2 + 1)^2, f_2(x) = x_1^2 − 2x_1 + x_2^2, and g_t(x) = t^2 |x_1| − x_2 for all t ∈ [0,1]. We have that

X = { x ∈ R^2 : g_t(x) ≤ 0, t ∈ [0,1] } = { x ∈ R^2 : |x_1| − x_2 ≤ 0 },

0_2 ∈ X_E, d = (1,1) ∈ D(X; 0_2), f_1'(0_2; d) = ∇f_1(0_2)^T d = 0 and f_2'(0_2; d) = ∇f_2(0_2)^T d = −2. So, d ∈ D(X; 0_2) ∩ G(0_2)^−, but d ∉ F(0_2).

The following example shows that the assumption 0_n ∉ ⋃_{i=1}^p ∂f_i(x̄) in Theorem 21(ii) is not superfluous.

Example 23 Let n = 2; p = 2; f1 (x) = x21 + x22 ; f2 (x) = x21 gt (x) = tx21 x2 + t 1 for all t 2 [0; 1] : As gt grows with t;

2x1 + x22 ; and

X = x 2 R2 : gt (x)

0; t 2 [0; 1] = x 2 R2 : x2 x21 : S Since f1 and f2 are strictly convex and 02 = rf1 (02 ) 2 @fi (02 ); we have i=1;2 >

02 2 XwE : Since rf1 (02 ) = 02 ; it follows that rf1 (02 ) d = 0 for all d 2 16

n > = R2 \ d 2 R2 : rf2 (02 ) d

R2 : Thus G(02 )

>

D(X; 02 ): We have f20 (02 ; d) = rf2 (02 ) d = ;:

o 0 : Now, take d = (1; 1) 2

2: Then D(X; 02 )\int G(02 ) 6=
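The claims in Example 22 can be spot-checked numerically. The sketch below uses the example's data; efficiency of $0_2$ is tested only on random feasible samples, so it illustrates the example rather than proving it:

```python
import numpy as np

# Data of Example 22: f1, f2 and the feasible set X = {x : |x1| <= x2}.
def f1(x): return (x[0] - 1)**2 + (x[1] + 1)**2
def f2(x): return x[0]**2 - 2*x[0] + x[1]**2
def feasible(x): return abs(x[0]) <= x[1]

xbar = np.zeros(2)
d = np.array([1.0, 1.0])
grad_f1 = np.array([-2.0, 2.0])   # gradient of f1 at 0_2
grad_f2 = np.array([-2.0, 0.0])   # gradient of f2 at 0_2

assert grad_f1 @ d == 0.0         # f1'(0_2; d) = 0
assert grad_f2 @ d == -2.0        # f2'(0_2; d) = -2, so d is not in F(0_2)

# No sampled feasible point should dominate (f1, f2)(0_2) = (2, 0).
rng = np.random.default_rng(1)
for x in rng.uniform(-2.0, 2.0, size=(20000, 2)):
    if feasible(x):
        better = f1(x) <= f1(xbar) and f2(x) <= f2(xbar)
        strictly = f1(x) < f1(xbar) or f2(x) < f2(xbar)
        assert not (better and strictly)
```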

Corollary 24 Given $\bar{x}\in X$, $\bar{x}\in X_{wE}$ if and only if
$$\{d\in\mathbb{R}^{n}: f_i'(\bar{x};d)<0,\ i=1,\dots,p\}\cap D(X,\bar{x})=\emptyset.\qquad(25)$$

Proof: If $0_n\in\bigcup_{i=1}^{p}\partial f_i(\bar{x})$, then $\bar{x}\in X_{wE}$ by Proposition 20, and there exists $i\in\{1,\dots,p\}$ such that $f_i'(\bar{x};d)\geq 0$ for all $d\in\mathbb{R}^{n}$, so that (25) holds too. Otherwise, both statements are equivalent by Theorem 21(ii) and Lemma 1(iii). $\square$

The next lemma reformulates well-known characterizations of $X_{pE}$ and $X_{wE}$ in terms of scalarizations of $P$ and the cone of feasible directions. For the sake of brevity, given $\lambda=(\lambda_1,\dots,\lambda_p)\in\mathbb{R}^{p}$, we denote $\sum_{i=1}^{p}\lambda_i f_i$ in matrix form as $\lambda^{\top}f$.

Lemma 25 Given $\bar{x}\in X$, the following statements hold:
(i) $\bar{x}\in X_{pE}$ if and only if $\partial(\lambda^{\top}f)(\bar{x})\cap D(X,\bar{x})^{+}\neq\emptyset$ for some $\lambda>0_p$.
(ii) $\bar{x}\in X_{wE}$ if and only if $\partial(\lambda^{\top}f)(\bar{x})\cap D(X,\bar{x})^{+}\neq\emptyset$ for some $\lambda\geq 0_p$, $\lambda\neq 0_p$.

Proof: We associate with $P$ the parameterized (weighted) problem
$$P(\lambda):\ \min\ \lambda^{\top}f(x)=\sum_{i=1}^{p}\lambda_i f_i(x)\ \text{ s.t. } x\in X,\qquad(26)$$
where $\lambda\geq 0_p$, $\lambda\neq 0_p$, is the weight vector and $P(\lambda)$ is a convex SIO problem for each $\lambda$ (we could normalize so that $\sum_{i=1}^{p}\lambda_i=1$). By Theorem 27.4 of [27] it follows that $\bar{x}$ is an optimal solution of $P(\lambda)$ if and only if there exists $\xi\in\partial(\lambda^{\top}f)(\bar{x})\cap D(X,\bar{x})^{+}$.
(i) According to the Geoffrion Theorem ([11], see also [9, Theorem 3.15]), $\bar{x}$ is an optimal solution of $P(\lambda)$ for some $\lambda>0_p$ if and only if $\bar{x}\in X_{pE}$.
(ii) Similarly, by [9, Proposition 3.10], $\bar{x}$ is an optimal solution of $P(\lambda)$ for some $\lambda\geq 0_p$, $\lambda\neq 0_p$, if and only if $\bar{x}\in X_{wE}$. $\square$

Observe that, given $\bar{x}\in X$, $0_n\in\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$ entails that $\bar{x}\in X_{wE}$. We consider now the case where $0_n\notin\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$.

Proposition 26 Let $\bar{x}\in X$ be such that $0_n\notin\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$. Then $\bar{x}\in X_{wE}$ if and only if $G(\bar{x})\cap D(X,\bar{x})^{+}\neq\{0_n\}$, in which case $\bar{x}\in\operatorname{bd}X$.

Proof: Let $0_n\notin\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$. Then, $G(\bar{x})\cap D(X,\bar{x})^{+}\neq\{0_n\}$ if and only if there exist $\lambda\geq 0_p$, $\lambda\neq 0_p$, and $\xi_i\in\partial f_i(\bar{x})$, $i=1,\dots,p$, such that $\sum_{i=1}^{p}\lambda_i\xi_i\in D(X,\bar{x})^{+}$, if and only if
$$\left(\sum_{i=1}^{p}\lambda_i\,\partial f_i(\bar{x})\right)\cap D(X,\bar{x})^{+}=\partial(\lambda^{\top}f)(\bar{x})\cap D(X,\bar{x})^{+}\neq\emptyset$$
for some $\lambda\geq 0_p$, $\lambda\neq 0_p$. The first statement follows from Lemma 25(ii).
Finally, if $\bar{x}\in\operatorname{int}X$, then $D(X,\bar{x})^{+}=\{0_n\}$, so that $G(\bar{x})\cap D(X,\bar{x})^{+}=\{0_n\}$. Hence $\bar{x}\notin X_{wE}$ by the first statement. $\square$
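The scalarization behind Lemma 25 can be illustrated numerically. The sketch below uses the data of Example 23, with a crude grid search standing in for a convex SIO solver, so the weak-efficiency conclusion is checked only on the grid:

```python
import numpy as np

# Data of Example 23: f1 = x1^2 + x2^2, f2 = x1^2 - 2*x1 + x2^2,
# feasible set X = {x : x2 >= x1^2}.
def f(x):
    return np.array([x[0]**2 + x[1]**2, x[0]**2 - 2*x[0] + x[1]**2])

lam = np.array([1.0, 1.0])        # weight vector: lam >= 0, lam != 0

# Feasible grid points of X.
grid = [np.array([a, b])
        for a in np.linspace(-2, 2, 201)
        for b in np.linspace(0, 4, 201) if b >= a * a]

# Minimizer of the weighted problem P(lam) over the grid ...
xstar = min(grid, key=lambda x: lam @ f(x))

# ... is weakly efficient there: no grid point strictly improves
# both objectives at once (Lemma 25(ii), "if" direction).
fstar = f(xstar)
for x in grid:
    assert not np.all(f(x) < fstar)
```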

5 KKT conditions

We are in a position to obtain KKT optimality conditions.

Theorem 27 Given $\bar{x}\in X$, the following statements hold:
(i) If there exists $\lambda\geq 0_p$, $\lambda\neq 0_p$ ($\lambda>0_p$) satisfying
$$\mathrm{(KKT)}\qquad\partial(\lambda^{\top}f)(\bar{x})\cap\left(-A(\bar{x})\right)\neq\emptyset,$$
then $\bar{x}\in X_{wE}$ ($\bar{x}\in X_{pE}$, respectively).
(ii) If $\bar{x}\in X_{wE}$ ($\bar{x}\in X_{pE}$) satisfies LFMCQ, then there exists $\lambda\geq 0_p$, $\lambda\neq 0_p$ ($\lambda>0_p$, respectively) such that (KKT) holds. If, additionally, $0_n\notin\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$, then the following stronger condition holds:
$$\partial(\lambda^{\top}f)(\bar{x})\cap\left(-A(\bar{x})\right)\neq\{0_n\}.$$

Proof: (i) Recall that $D(X,\bar{x})\subseteq A(\bar{x})^{-}$. Taking positive polars we get
$$D(X,\bar{x})^{+}\supseteq\left(A(\bar{x})^{-}\right)^{+}=-\operatorname{cl}A(\bar{x})\supseteq -A(\bar{x}),$$
so that (KKT) implies that $\partial(\lambda^{\top}f)(\bar{x})\cap D(X,\bar{x})^{+}\neq\emptyset$. The conclusion follows from Lemma 25.
(ii) We are assuming that $A(\bar{x})=D(X,\bar{x})^{-}$, i.e., $-A(\bar{x})=D(X,\bar{x})^{+}$. The first part is a straightforward consequence of Lemma 25, while the second one follows from the argument of Proposition 26. $\square$

Lemma 28 If $\bar{x}\in X_{wE}$, then
$$\{d\in\mathbb{R}^{n}: f_i'(\bar{x};d)<0,\ i=1,\dots,p\}\cap T(X,\bar{x})=\emptyset.\qquad(27)$$

Proof: Assume the contrary, that is, there exists $d\in T(X,\bar{x})$ satisfying
$$f_i'(\bar{x};d)<0,\quad i=1,\dots,p.\qquad(28)$$
By $d\in T(X,\bar{x})$, there exist sequences $\{s_k\}_{k\in\mathbb{N}}$ and $\{d^{k}\}_{k\in\mathbb{N}}$ such that $s_k\downarrow 0$, $d^{k}\to d$, and $\bar{x}+s_k d^{k}\in X$ for all $k\in\mathbb{N}$. Since $\bar{x}\in X_{wE}$, there exists (perhaps after passing to a subsequence) an index $i_0\in\{1,\dots,p\}$ such that
$$f_{i_0}(\bar{x}+s_k d^{k})\geq f_{i_0}(\bar{x}),\quad k\in\mathbb{N}.$$
Since $f_{i_0}$ is directionally differentiable at $\bar{x}$ in the Hadamard sense (see e.g. [1, Proposition 2.126(v)(c)]), the latter inequalities provide
$$f_{i_0}'(\bar{x};d)=\lim_{k\to\infty}\frac{f_{i_0}(\bar{x}+s_k d^{k})-f_{i_0}(\bar{x})}{s_k}\geq 0,$$
which contradicts (28). $\square$

Observe that, since $D(X,\bar{x})\subseteq T(X,\bar{x})$, the direct part of Corollary 24 is an immediate consequence of Lemma 28.

We have shown in Theorem 27 that (KKT) is a necessary condition for weak efficiency under LFMCQ (and, by Lemma 7, also under SCQ). Finally, we prove that this necessary condition still holds under the remaining two CQs introduced in Section 3, namely LSCQ and EKTCQ, together with the closedness of the active cone (recall that LSCQ entails the latter property according to Corollary 10).

Theorem 29 Let $0_n\notin\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$ and $\bar{x}\in X_{wE}$ satisfy one of the following conditions: (i) LSCQ; (ii) EKTCQ and $A(\bar{x})$ is closed. Then, there exists $\lambda\geq 0_p$, $\lambda\neq 0_p$, satisfying (KKT).

Proof: Since $\bar{x}\in X_{wE}$ and $0_n\notin\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$, we have $\bar{x}\in\operatorname{bd}X$ and
$$\left\{d\in\mathbb{R}^{n}:\sum_{i=1}^{p}\lambda_i f_i'(\bar{x};d)<0\ \text{for all}\ \lambda\geq 0_p,\ \lambda\neq 0_p\right\}\cap T(X,\bar{x})=\emptyset\qquad(29)$$

by Proposition 26 and Lemma 28, respectively. Combining the formulas (18) and (29) with Theorem 14(iv) and Lemma 18, we conclude that there is no $d\in\mathbb{R}^{n}$ such that
$$\sum_{i=1}^{p}\lambda_i f_i'(\bar{x};d)<0\ \text{ for all }\lambda\geq 0_p,\ \lambda\neq 0_p,\qquad(30)$$
and
$$g_t'(\bar{x};d)\leq 0\ \text{ for all } t\in T(\bar{x}).\qquad(31)$$
(i) Assume that LSCQ is satisfied at $\bar{x}$. Now, (30)-(31) is equivalent to: there is no $d\in\mathbb{R}^{n}$ such that
$$\xi^{\top}d<0\ \text{ for all }\xi\in\partial(\lambda^{\top}f)(\bar{x}),\ \text{for all }\lambda\geq 0_p,\ \lambda\neq 0_p,\qquad(32)$$
and
$$\eta^{\top}d\leq 0\ \text{ for all }\eta\in\partial g_t(\bar{x}),\ \text{for all } t\in T(\bar{x}).\qquad(33)$$
Since the homogeneous linear system formed by (32)-(33) is inconsistent, $\partial(\lambda^{\top}f)(\bar{x})$ is a compact convex set, and $A(\bar{x}):=\operatorname{cone}\left(\bigcup_{t\in T(\bar{x})}\partial g_t(\bar{x})\right)$ is closed (by Corollary 10), so that the Minkowski sum of both sets is closed, we can apply Motzkin's Theorem [13, Theorem 3.5] to conclude that
$$0_n\in\partial(\lambda^{\top}f)(\bar{x})+A(\bar{x}),$$
that is, (KKT) holds.

(ii) The proof is the same, taking into account that now $A(\bar{x})$ is closed by assumption. $\square$

Example 16 shows that the closedness assumption in Theorem 29(ii) is not superfluous. Indeed, the unique solution of the system formed by the nonlinear equations of (KKT) for the data of Example 16, with indices $t_1,t_2\in[0,2]$, and the inequalities
$$\lambda_i\geq 0,\quad\mu_i\geq 0,\quad i=1,2,\qquad(34)$$
is $\lambda_1=\lambda_2=\mu_1=\mu_2=0$. Thus, (KKT) fails.
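Conversely, (KKT) can be verified by hand on concrete data; the following computation, on Example 22 at $\bar{x}=0_2$, is a sketch of ours, not taken from the paper. Every index is active there ($g_t(0_2)=0$ for all $t\in[0,1]$) and $\partial g_t(0_2)=t^2[-1,1]\times\{-1\}$, so

```latex
A(0_2) = \operatorname{cone}\Big(\bigcup_{t\in[0,1]}\partial g_t(0_2)\Big)
       = \{(a,b)\in\mathbb{R}^2 : b \le -\lvert a\rvert\},
\qquad
-A(0_2) = \{(a,b)\in\mathbb{R}^2 : b \ge \lvert a\rvert\}.
% Taking \lambda = (1,0):
\partial(\lambda^{\top}f)(0_2) = \{\nabla f_1(0_2)\} = \{(-2,2)\},
\qquad (-2,2)\in -A(0_2)\ \text{since}\ 2 \ge \lvert -2\rvert.
```

Hence (KKT) holds with $\lambda=(1,0)$, certifying $0_2\in X_{wE}$ by Theorem 27(i). No $\lambda>0_2$ works here: the second component of $\lambda_1(-2,2)+\lambda_2(-2,0)$ is $2\lambda_1$, and $2\lambda_1\geq|{-2\lambda_1-2\lambda_2}|$ forces $\lambda_2\leq 0$.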

Example 30 Consider the robust counterpart problem in (2) with $U_i=B(\bar{c}_i,\varepsilon)\subset\mathbb{R}^{n}$, $i=1,\dots,p$. Then, $f_i(x)=\max_{c_i\in U_i}c_i^{\top}x=\bar{c}_i^{\top}x+\varepsilon\|x\|$, with
$$\partial f_i(x)=\begin{cases}\left\{\bar{c}_i+\varepsilon\dfrac{x}{\|x\|}\right\}, & \text{if } x\neq 0_n,\\[1ex] B(\bar{c}_i,\varepsilon), & \text{if } x=0_n.\end{cases}$$
Let $X=\{x\in\mathbb{R}^{n}: p_k^{\top}x\leq q_k,\ k\in K\}$ be the feasible set of (2) and let $\bar{x}\in X$. According to Theorem 14, Proposition 20, and Theorems 27 and 29, the following statements hold:
(i) If $0_n\in\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$, then $\bar{x}$ is a minmax robust weakly efficient solution.
(ii) Assume that $0_n\notin\operatorname{conv}\bigcup_{i=1}^{p}\partial f_i(\bar{x})$ and either LSCQ or EKTCQ holds at $\bar{x}\neq 0_n$. Then, $\bar{x}$ is a weakly efficient solution of (2) if and only if there exists $\lambda\geq 0_p$, $\lambda\neq 0_p$, such that $\sum_{i=1}^{p}\lambda_i\left(\bar{c}_i+\varepsilon\dfrac{\bar{x}}{\|\bar{x}\|}\right)\in -A(\bar{x})$.
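The support-function formula for $f_i$ in Example 30 can be sanity-checked numerically; the data ($c$, $\varepsilon$, $x$) in the sketch below are illustrative, not taken from the paper:

```python
import numpy as np

# For U = B(c, eps), the objective f(x) = max_{u in U} u^T x equals
# c^T x + eps*||x||, with unique subgradient c + eps*x/||x|| at x != 0.
rng = np.random.default_rng(2)
c = np.array([1.0, -2.0, 0.5])
eps = 0.3
x = np.array([0.7, 0.2, -1.1])

f_exact = c @ x + eps * np.linalg.norm(x)

# Maximize u^T x over u = c + eps*v, ||v|| = 1, by sampling directions;
# the sampled maximum approaches the closed form from below.
v = rng.normal(size=(50000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
f_sampled = ((c + eps * v) @ x).max()
assert -1e-9 <= f_exact - f_sampled < 1e-2

# The subgradient at x != 0 matches a forward-difference gradient.
grad = c + eps * x / np.linalg.norm(x)
h = 1e-6
num_grad = np.array([(c @ (x + h*e) + eps*np.linalg.norm(x + h*e) - f_exact) / h
                     for e in np.eye(3)])
assert np.allclose(grad, num_grad, atol=1e-3)
```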


Acknowledgments

The authors are grateful to the referees and the editor for their constructive comments and helpful suggestions, which have contributed to the final preparation of the paper.

References

[1] J.F. Bonnans, A. Shapiro, Perturbation analysis of optimization problems, Springer, Berlin, 2000.
[2] R.I. Boţ, S.-M. Grad, G. Wanka, Duality in vector optimization, Springer, Berlin, 2009.
[3] G. Caristi, M. Ferrara, A. Stefanescu, Semi-infinite multiobjective programming with generalized invexity, Math Reports 12 (2010) 217-233.
[4] T.D. Chuong, N.Q. Huy, J.C. Yao, Stability of semi-infinite vector optimization problems under functional perturbations, J Glob Optim 45 (2009) 583-595.
[5] T.D. Chuong, N.Q. Huy, J.C. Yao, Pseudo-Lipschitz property of linear semi-infinite vector optimization problems, Eur J Oper Res 200 (2010) 639-644.
[6] T.D. Chuong, N.Q. Huy, J.C. Yao, Stability of semi-infinite vector optimization problems under functional perturbations, J Glob Optim 45 (2010) 583-595.
[7] T.D. Chuong, D.S. Kim, Nonsmooth semi-infinite multiobjective optimization problems, J Optim Theory Appl 160 (2014) 748-762.
[8] N. Dinh, M.A. Goberna, M.A. López, T.Q. Son, New Farkas-type constraint qualifications in convex infinite programming, ESAIM Control Optim Calc Var 13 (2007) 580-597.
[9] M. Ehrgott, Multicriteria optimization (2nd ed.), Springer, Berlin, 2005.
[10] X. Fan, C. Cheng, H. Wang, Density of stable semi-infinite vector optimization problems under functional perturbations, J Glob Optim, to appear.
[11] A. Geoffrion, Proper efficiency and the theory of vector maximization, J Math Anal Appl 22 (1968) 618-630.
[12] M.A. Goberna, F. Guerra-Vazquez, M.I. Todorov, Constraint qualifications in linear vector semi-infinite optimization, Eur J Oper Res 227 (2013) 12-21.
[13] M.A. Goberna, M.A. López, Linear semi-infinite optimization, Wiley, Chichester, 1998.


[14] F. Guerra-Vazquez, J.-J. Rückmann, On proper efficiency in multiobjective semi-infinite optimization, in: H. Xu, K.L. Teo, Y. Zhang (Eds.), Optimization and control techniques and applications, Springer, Berlin, 2014, pp. 115-135.
[15] M. Ehrgott, J. Ide, A. Schöbel, Minmax robustness for multi-objective optimization problems, Eur J Oper Res 239 (2014) 17-31.
[16] A. Hantoute, M.A. López, A complete characterization of the subdifferential set of the supremum of an arbitrary family of convex functions, J Convex Anal 15 (2008) 831-858.
[17] R. Hartley, On cone-efficiency, cone-convexity and cone-compactness, SIAM J Appl Math 34 (1978) 211-222.
[18] J.B. Hiriart-Urruty, C. Lemaréchal, Convex analysis and minimization algorithms I, Springer, N.Y., 1993.
[19] H. Kuhn, A. Tucker, Nonlinear programming, in: J. Neyman (Ed.), Proceedings of the second Berkeley symposium on mathematical statistics and probability, University of California Press, Berkeley, 1951, pp. 481-492.
[20] P.J. Laurent, Approximation et optimisation (French), Hermann, Paris, 1972.
[21] C. Li, K.F. Ng, On constraint qualification for an infinite system of convex inequalities in a Banach space, SIAM J Optim 15 (2005) 488-512.
[22] C. Li, K.F. Ng, T.K. Pong, Constraint qualifications for convex inequality systems with applications in constrained optimization, SIAM J Optim 19 (2008) 163-187.
[23] C. Li, X. Zhao, Y. Hu, Quasi-Slater and Farkas-Minkowski qualifications for semi-infinite programming with applications, SIAM J Optim 23 (2013) 2208-2230.
[24] J.E. Martínez-Legaz, M.I. Todorov, C.A. Zetina, Active constraints in convex semi-infinite programming, Numer Funct Anal Optim 35 (2014) 1078-1094.
[25] D.W. Peterson, A review of constraint qualifications in finite-dimensional spaces, SIAM Review 15 (1973) 639-654.
[26] R. Puente, V.N. Vera de Serio, Locally Farkas-Minkowski linear inequality systems, Top 7 (1999) 103-121.
[27] R.T. Rockafellar, Convex analysis, Princeton U.P., Princeton, 1970.
[28] O. Stein, First-order optimality conditions for degenerate index sets in generalized semi-infinite optimization, Math Oper Res 26 (2001) 565-582.

[29] X.K. Sun, Regularity conditions characterizing Fenchel-Lagrange duality and Farkas-type results in DC infinite programming, J Math Anal Appl 414 (2014) 590-611.
[30] X.K. Sun, S.J. Li, D. Zhao, Duality and Farkas-type results for DC infinite programming with inequality constraints, Taiwanese J Math 17 (2013) 1227-1244.
[31] R.A. Tapia, M.W. Trosset, An extension of the Karush-Kuhn-Tucker necessity conditions to infinite programming, SIAM Review 36 (1994) 1-17.
[32] M.I. Todorov, Well-posedness in the linear vector semi-infinite optimization, in: G.H. Tzeng (Ed.), Multiple criteria decision making, Springer, N.Y., 1994, pp. 141-150.
[33] M.I. Todorov, Kuratowski convergence of the efficient sets in the parametric linear vector semi-infinite optimization, Eur J Oper Res 94 (1996) 610-617.
[34] C. Zălinescu, Convex analysis in general vector spaces, World Scientific, New Jersey, 2002.
