J Optim Theory Appl (2013) 158:65–84. DOI 10.1007/s10957-012-0243-y

Optimality Conditions and Characterizations of the Solution Sets in Generalized Convex Problems and Variational Inequalities Vsevolod I. Ivanov


The final publication is available at http://link.springer.com

Abstract We derive necessary and sufficient conditions for optimality of a problem with a pseudoconvex objective function, provided that a finite number of solutions are known. In particular, we obtain that the gradient of the objective function at every minimizer is a product of some positive function and the gradient of the objective function at another fixed minimizer. We apply this condition to provide several complete characterizations of the solution sets of set-constrained and inequality-constrained nonlinear programming problems with pseudoconvex and second-order pseudoconvex objective functions in terms of a known solution. Additionally, we characterize the solution sets of the Stampacchia and Minty variational inequalities with a pseudomonotone-star map, provided that some solution is known.

Keywords Global optimization · Pseudoconvex function · Variational inequality · Pseudomonotone-star map

Mathematics Subject Classification (2000) 90C46 · 90C26 · 26B25 · 47H05

Communicated by Vaithilingam Jeyakumar

Department of Mathematics, Technical University of Varna, 9010 Varna, Bulgaria. E-mail: [email protected]

1 Introduction

Characterizations of the solution sets of nonlinear programming problems with multiple solutions, provided that a fixed minimizer is known, play an important role in optimization. Such results were first obtained for convex problems by Mangasarian [1]. Further investigations were carried out by Burke and Ferris [2], where the objective function is extended real-valued, proper and convex, by Jeyakumar [3] (infinite-dimensional convex programs), and by Jeyakumar and Yang [4] (convex composite multiobjective problems). In 1995, Jeyakumar and Yang [5] gave new characterizations for pseudolinear problems, which were extended to the nondifferentiable case by Lalitha and Mehta [6] and Smietanski [7]. Some results of


this type, in the case where the function is pseudoconvex, were derived independently by Ivanov [8] and Wu [9]. Bianchi and Schaible [10] extended the results of Jeyakumar and Yang [5] to obtain characterizations of the solution set of a PPM variational inequality problem. Similar characterizations for the nondifferentiable case were given by Lalitha and Mehta [6]. Results of Jeyakumar and Yang's type involving higher-order Dini directional derivatives were obtained by Ivanov [11] for higher-order pseudoconvex problems. In some further papers, characterizations of Mangasarian's type were derived for several kinds of convex problems (see Wu and Wu [12], Jeyakumar, Lee and Dinh [13]), but to the best of our knowledge nobody has extended Mangasarian's results to nonconvex functions. On the other hand, some similar results concerning variational inequality problems appeared in the papers of Bianchi and Schaible [10] and Wu and Wu [12]. A result similar to those of Burke and Ferris, in terms of the inward subdifferential, was obtained by Penot [14]. Lagrange multiplier conditions, characterizing the solution sets of minimization problems with inequality constraints, were derived by Jeyakumar, Lee and Dinh [13] (cone-constrained convex problems), Dinh, Jeyakumar and Lee [15] (pseudolinear problems), and Lalitha and Mehta [16] (convex objective function and pseudolinear constraints).

Mangasarian [1] has shown that the gradient of the objective function is constant over the solution set of a convex set-constrained program with multiple minimizers. This property no longer holds when the objective function is not convex. On the other hand, we prove that, if the objective function is pseudoconvex, then its gradient at any solution remains collinear to the gradient of the objective function at some fixed minimizer, and both gradients have the same direction, provided that the latter is nonzero.
In Section 2, we apply this condition to extend Mangasarian's characterizations to set-constrained problems with a pseudoconvex objective function. We obtain necessary and sufficient conditions for optimality in a set-constrained problem over a convex set. They are preferable to the known optimality conditions. In Section 3, we derive necessary and sufficient conditions for optimality of a given feasible point in terms of a finite number of known solutions. A more general condition appears in them, in contrast to the case when exactly one minimizer is known. Additionally, we obtain second-order characterizations in Section 4, and Lagrange multiplier conditions characterizing inequality-constrained problems in Sections 3 and 6. Characterizations of the solution sets of the Stampacchia and Minty variational inequalities are derived in Section 5.

2 Characterizations of the Solution Set of a Pseudoconvex Set-Constrained Program

Throughout this paper, R^n is the real n-dimensional Euclidean vector space, Γ ⊆ R^n is an open set, S ⊆ Γ is an arbitrary subset, and f is a Fréchet differentiable function defined on Γ. The main purpose of this section is to obtain characterizations of the solution set of the nonlinear programming problem:

(P)    min f(x)  s.t.  x ∈ S.

Denote by ⟨a, b⟩ the scalar product of the vectors a ∈ R^n and b ∈ R^n, and by S̄ := arg min {f(x) : x ∈ S} the solution set of (P); let S̄ be nonempty. We suppose that x̄ is a known point from S̄. Recall the following well-known definitions and results, which we use in the paper (see Ref. [17]):


Definition 2.1 A Fréchet differentiable function f : Γ → R, defined on some open set Γ, is called pseudoconvex on the convex subset S iff the following implication holds for all x, y ∈ S:

f(y) < f(x)  ⇒  ⟨∇f(x), y − x⟩ < 0.

Definition 2.2 A function f : S → R, defined on a convex set S ⊆ R^n, is called quasiconvex on S iff the following condition holds for all x, y ∈ S, t ∈ [0, 1]:

f[x + t(y − x)] ≤ max{f(x), f(y)}.

Gordan's Theorem of the Alternative ([18,17]) For each given matrix A, either the system Ax > 0 has a solution x, or the system A^T y = 0, y ≥ 0, y ≠ 0 has a solution y, but never both.

Lemma 2.1 ([17]) A function f : S → R is quasiconvex on the convex set S ⊆ R^n if and only if its level sets L(f; α) := {x ∈ S : f(x) ≤ α} are convex for all real numbers α.

Lemma 2.2 ([17]) Let f be a Fréchet differentiable function, which is pseudoconvex on a convex set S ⊆ R^n. Then f is quasiconvex on S.

Consider the following notations of sets, in terms of x̄:

S0 := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ = 0, ∇f(x) = ∇f(x̄)};
S10 := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ ≤ 0, ∇f(x) = ∇f(x̄)}.

The following result is due to Mangasarian [1]:

Proposition 2.1 Let f be a twice continuously differentiable convex function, which is defined on some open convex set Γ containing the convex set S. Let x̄ be any known point from S̄. Then

S̄ = S0 = S10.

The following example shows that Proposition 2.1 does not hold when the function is not convex:

Example 2.1 Consider the function of two variables f(x1, x2) = x2/x1, which is often used in nonlinear programming. It is pseudoconvex on the convex set

S = {(x1, x2) ∈ R^2 : 1 ≤ x1 ≤ 2, 0 ≤ x2 ≤ 1},

but not convex. The set of minimizers of f over S is the segment

S̄ = {(x1, x2) ∈ R^2 : 1 ≤ x1 ≤ 2, x2 = 0}.

We can immediately see that ∇f(x) is not constant on S̄.
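As a numerical illustration of Example 2.1 (our own sketch, not part of the paper), the following snippet evaluates the gradient of f(x1, x2) = x2/x1 along the solution set: the gradient is not constant there, but every value is a positive multiple p(x) = 1/x1 of the gradient at the fixed minimizer x̄ = (1, 0).

```python
import numpy as np

def grad_f(x):
    """Gradient of f(x1, x2) = x2 / x1."""
    x1, x2 = x
    return np.array([-x2 / x1**2, 1.0 / x1])

x_bar = np.array([1.0, 0.0])                 # fixed minimizer of f over S
solutions = [np.array([t, 0.0]) for t in (1.0, 1.25, 1.5, 2.0)]  # points of S-bar

grads = [grad_f(x) for x in solutions]

# Mangasarian's property fails: the gradient is not constant over S-bar ...
not_constant = any(not np.allclose(g, grads[0]) for g in grads)

# ... but grad f(x1, 0) = (0, 1/x1) = p(x) * grad f(x_bar) with p(x) = 1/x1 > 0.
ps = [g[1] / grad_f(x_bar)[1] for g in grads]
collinear = all(np.allclose(g, p * grad_f(x_bar)) and p > 0
                for g, p in zip(grads, ps))
```

This is exactly the collinearity property that replaces gradient constancy in the pseudoconvex case.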


On the other hand, when the problem is pseudoconvex, there exists a positive function p such that ∇f(x) = p(x)∇f(x̄) for all points x ∈ S̄. We apply this condition to obtain complete characterizations of the solution set S̄ of (P). Let x̄ ∈ S̄ be a given point. Denote

Ŝ := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ = 0, ∃p(x) > 0 : ∇f(x) = p(x)∇f(x̄)};
Ŝ1 := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ ≤ 0, ∃p(x) > 0 : ∇f(x) = p(x)∇f(x̄)}.

In several lemmas, we use the following known notion [19, p. 253]:

Definition 2.3 The vectors a1, a2, ..., as, s ≥ 1 in R^n are called positively linearly independent iff the following implication holds:

∑_{i=1}^{s} μi ai = 0, μi ≥ 0, i = 1, 2, ..., s  ⇒  μ1 = μ2 = ... = μs = 0.

It is easy to see that a positively linearly independent system does not contain vectors equal to zero. Each non-zero vector is a positively linearly independent system. Every subset of a positively linearly independent system is also positively linearly independent. Every linearly independent system is positively linearly independent. The following example of a system, which shows that the converse is not true, is important for this paper.

Example 2.2 Let a ∈ R^n be an arbitrary non-zero vector and ai = pi a, i = 1, 2, ..., s, where pi are arbitrary positive numbers. Then the system a1, a2, ..., as is positively linearly independent, but it is not linearly independent.

Lemma 2.3 Let a1, a2, ..., as be non-zero positively linearly independent vectors in the space R^n. Consider another non-zero vector b ∈ R^n, which satisfies the following implication:

⟨ai, d⟩ < 0, i = 1, 2, ..., s, d ∈ R^n  ⇒  ⟨b, d⟩ ≤ 0.    (1)

Then there exists a vector p = (p1, p2, ..., ps) such that pi ≥ 0 for all indices i, p ≠ 0, and b = ∑_{i=1}^{s} pi ai.

Proof Implication (1) is equivalent to the claim that the system

⟨b, d⟩ > 0,  −⟨ai, d⟩ > 0, i = 1, 2, ..., s

has no solution d. It follows from Gordan's Theorem that there exist real numbers λi, i = 0, 1, ..., s, not all zero, such that λi ≥ 0 and

λ0 b − λ1 a1 − λ2 a2 − ... − λs as = 0.

It follows from positive linear independence and λ = (λ0, λ1, ..., λs) ≠ 0 that λ0 ≠ 0. Then the numbers pi = λi/λ0 satisfy the condition b = ∑_{i=1}^{s} pi ai. At least one of the numbers pi is strictly positive, because b ≠ 0. □

Lemma 2.4 Let Γ ⊆ R^n be an open convex set, S ⊆ Γ be an arbitrary convex set, and f be a differentiable quasiconvex function defined on Γ. Suppose that x, xi ∈ S̄, i = 1, 2, ..., s with ∇f(x) ≠ 0, ∇f(xi) ≠ 0, i = 1, 2, ..., s. Then the following implication is satisfied for arbitrary d ∈ R^n:

⟨∇f(xi), d⟩ < 0, i = 1, 2, ..., s  ⇒  ⟨∇f(x), d⟩ ≤ 0.


Proof Assume, to the contrary, that there exists d ∈ R^n with

⟨∇f(xi), d⟩ < 0, i = 1, 2, ..., s  and  ⟨∇f(x), −d⟩ < 0.

It follows from here that there exists τ > 0 with

f(ui) < f(xi), i = 1, 2, ..., s,  f(u) < f(x),    (2)

where ui = xi + τd ∈ Γ and u = x − sτd ∈ Γ. Let z = (u + u1 + u2 + ... + us)/(s + 1). Therefore z = (x + x1 + x2 + ... + xs)/(s + 1). We prove by induction that

f(z) ≤ max{f(u), f(u1), f(u2), ..., f(us)}.    (3)

(3) follows directly from quasiconvexity if s = 1. We suppose that it is satisfied for some positive integer s − 1. We prove that (3) holds for s:

f(z) = f( (s/(s+1)) (u + u1 + ... + u_{s−1})/s + (1/(s+1)) us ) ≤ max{f(u), f(u1), f(u2), ..., f(us)}.

Then we conclude from (2) that

f(z) < max{f(x), f(x1), f(x2), ..., f(xs)}.

On the other hand, according to the quasiconvexity of f, we obtain that S̄ is convex by Lemma 2.1, and f(z) = f(x) = f(xi) for all indices i, which is a contradiction. □

Lemma 2.5 Let Γ ⊆ R^n be an open convex set, S ⊆ Γ be an arbitrary convex subset, and f be a differentiable pseudoconvex function defined on Γ. Consider the points x, xi ∈ S̄, i = 1, 2, ..., s such that the vectors of the system ∇f(xi), i = 1, 2, ..., s are non-zero and positively linearly independent, or at least one of these vectors is zero. Then there exists a vector p = (p1, p2, ..., ps), which depends on x, such that pi ≥ 0 for all indices i, p ≠ 0, and

∇f(x) = ∑_{i=1}^{s} pi(x) ∇f(xi).    (4)

Proof We consider several different cases. Let ∇f(x) = 0. Then x is a global minimizer of f over Γ, because f is pseudoconvex. It follows from xi ∈ S̄ that f(x) = f(xi) for every index i. Therefore, xi is also a global minimizer of f over Γ. We conclude from here that ∇f(xi) = 0, and condition (4) is satisfied for arbitrary p ≠ 0 with non-negative components.

Suppose that there exists at least one point xi with ∇f(xi) = 0. Using similar arguments, we conclude from here that ∇f(x) = 0 and ∇f(xi) = 0 for all i = 1, 2, ..., s. Therefore, (4) is again satisfied for arbitrary p ≠ 0 with non-negative components.

Consider the last possible case, when ∇f(x) ≠ 0 and there is no xi such that ∇f(xi) = 0. Then the system ∇f(xi), i = 1, 2, ..., s contains only positively linearly independent vectors. By Lemma 2.2, f is quasiconvex. Then, by Lemmas 2.3 and 2.4, there exists a vector p which satisfies condition (4). □

Corollary 2.1 Let Γ ⊆ R^n be an open convex set and S ⊆ Γ be an arbitrary convex subset. Consider a function f, which is differentiable and pseudoconvex on Γ. Let x̄ ∈ S̄ be a known solution of (P). Then, for every point x ∈ S̄, there exists a positive number p, which depends on x, such that

∇f(x) = p(x)∇f(x̄).    (5)
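Lemma 2.3 asserts the existence of nonnegative coefficients representing b over the ai, and for the collinear system of Example 2.2 such coefficients can also be recovered numerically. In this toy sketch (our own data, not from the paper) we take a1 = 2a, a2 = 5a and b = 3a; the minimum-norm least-squares solution of b = p1 a1 + p2 a2 happens to be componentwise positive.

```python
import numpy as np

# Example 2.2: a_i = c_i * a with c_i > 0 gives a positively linearly
# independent system that is not linearly independent.
a = np.array([1.0, 2.0, -1.0])
A = np.column_stack([2.0 * a, 5.0 * a])   # columns a1 = 2a, a2 = 5a
b = 3.0 * a                                # b lies in the cone spanned by a1, a2

# Minimum-norm least-squares solution of A p = b; for this rank-one system
# it equals p = (3/29) * (2, 5), which is componentwise positive.
p, *_ = np.linalg.lstsq(A, b, rcond=None)
in_cone = np.allclose(A @ p, b)            # b is exactly represented
```

In general the least-squares solution need not be nonnegative; Lemma 2.3 only guarantees that some nonnegative representation exists under implication (1).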


The following lemma is known. We prove it for completeness.

Lemma 2.6 Let Γ ⊆ R^n be an open convex set, S ⊆ Γ be an arbitrary convex set, and f be a differentiable pseudoconvex function defined on Γ. Suppose that x, x̄ ∈ S̄. Then

⟨∇f(x̄), x − x̄⟩ = 0.

Proof It follows from Lemma 2.2 that f is quasiconvex on Γ. Then, by Lemma 2.1, S̄ is convex and we have x̄ + t(x − x̄) ∈ S̄ for all t ∈ [0, 1]. Therefore

f[x̄ + t(x − x̄)] = f(x̄),  ∀t ∈ [0, 1],

because the solution set S̄ is convex. It follows from here that ⟨∇f(x̄), x − x̄⟩ = 0. □

Theorem 2.1 Let Γ ⊆ R^n be an open convex set and S ⊆ Γ be an arbitrary convex subset. Assume that the function f is differentiable and pseudoconvex on Γ. If x̄ ∈ S̄ is a known solution of (P), then

S̄ = Ŝ = Ŝ1.

Proof We prove that S̄ ⊆ Ŝ. Suppose that x ∈ S̄. It follows from Lemma 2.6 that ⟨∇f(x̄), x − x̄⟩ = 0. According to Corollary 2.1, there exists p(x) > 0 which satisfies (5). Hence x ∈ Ŝ.

It is trivial that Ŝ ⊆ Ŝ1. We prove that Ŝ1 ⊆ S̄. Let x ∈ Ŝ1. Therefore ⟨∇f(x̄), x − x̄⟩ ≤ 0. According to (5), we have that ⟨∇f(x), x̄ − x⟩ ≥ 0. Then, by the pseudoconvexity of f, we obtain that f(x̄) ≥ f(x). Hence x ∈ S̄, because x ∈ S. □

Remark 2.1 Consider the problem (P), where f is a differentiable function defined on some open set containing the convex set S ⊆ R^n. It is well known that if f is pseudoconvex on S, then x̄ ∈ S is a global minimizer of f over S if and only if

⟨∇f(x̄), x − x̄⟩ ≥ 0,  ∀x ∈ S.    (6)

This inequality means that x̄ is a solution of a nonlinear system with an infinite number of inequalities, one inequality for each x ∈ S. Therefore, this sufficient condition cannot be applied directly to find the minimizers of (P). Sometimes, when the problem (P) has multiple solutions, we are interested in more than one solution. In the case when f is pseudoconvex on some convex set S and some minimizer x̄ is known, we derive from Theorem 2.1 simpler necessary and sufficient optimality conditions, which hold if the assumptions of Theorem 2.1 are satisfied:

(i) A point x ∈ S is a minimizer if and only if there exists a number p > 0 such that (x, p) is a solution of the system of n + 1 equations in n + 1 unknowns:

⟨∇f(x̄), x − x̄⟩ = 0,  ∇f(x) − p∇f(x̄) = 0;    (7)

(ii) The point x̄ is the only solution of (P) if and only if the system (7) has no solution (x, p) such that x is different from x̄.

The following characterization of S̄ in terms of a known solution x̄ appeared in several papers under various generalized convexity and nonsmoothness assumptions on the function f [6–9]:

S̄ = {x ∈ S : ⟨∇f(x), x̄ − x⟩ = 0}.


It is equivalent to the following optimality condition for x, provided that x̄ is known: a point x ∈ S is a solution of (P) if and only if

⟨∇f(x), x̄ − x⟩ = 0.    (8)

Conditions (7) are not so concise as (8), but (7) is simpler than (8), because ∇f(x) and x − x̄ are separated into different equations in (7). Therefore, (8) is not preferable to (7).

Example 2.3 Consider the function f : R^2 → R defined by f(x1, x2) = −x1 x2. It is pseudoconvex on the set Γ = {x = (x1, x2) ∈ R^2 : x1 > 0, x2 > 0}. Let

S = {x = (x1, x2) ∈ R^2 : x1 ≥ 1/2, x2 ≥ 1/2, x1 x2 ≤ 1}.

The function f attains its minimal value −1 on S over the part of the curve x1 x2 = 1 between the points (1/2, 2) and (2, 1/2). Let x̄ = (1, 1). Then Ŝ = {(1, 1)} ≠ S̄. It is easy to see that only the hypothesis of Theorem 2.1 that S is convex does not hold. Therefore, this assumption is essential.

Let S̄ be nonempty. Then the following theorem contains necessary and sufficient conditions for optimality in the problem (P), which are generalizations of condition (6):

Theorem 2.2 Let S be convex and f be a differentiable pseudoconvex function defined on some open set containing S. Then the following claims are satisfied:

(i) ⟨∇f(x), z − y⟩ ≥ 0, ∀x, y ∈ S̄, ∀z ∈ S.
(ii) If A is a subset of S and

⟨∇f(x), z − y⟩ ≥ 0,  ∀x, y ∈ A, ∀z ∈ S,    (9)

then A ⊂ S̄.

Proof We prove claim (i). Since S is convex, we have that ⟨∇f(y), z − y⟩ ≥ 0 for all y ∈ S̄ and for all z ∈ S. Then it follows from Corollary 2.1, by the pseudoconvexity of f, that there exists p(x, y) > 0 with ∇f(y) = p(x, y)∇f(x). Therefore,

⟨∇f(x), z − y⟩ ≥ 0,  ∀x, y ∈ S̄, ∀z ∈ S.

We prove claim (ii). Let x be an arbitrary point from A. We prove that x ∈ S̄. We conclude from (9) that ⟨∇f(x), z − x⟩ ≥ 0 for all z ∈ S. According to the pseudoconvexity, we obtain that f(z) ≥ f(x) for all z ∈ S. Therefore x ∈ S̄. Since x ∈ A is arbitrary, we have A ⊂ S̄. □

Remark 2.2 Claims (i) and (ii) are respectively necessary and sufficient conditions for optimality, more general than (6). In the case when A consists of the single point {x̄}, claim (ii) is the sufficient condition expressed by (6).
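Example 2.3 can be verified numerically as well (our own sketch, not part of the paper): both (1, 1) and (2, 1/2) minimize f(x1, x2) = −x1 x2 over S, yet their gradients are not positively collinear, so the characterization S̄ = Ŝ of Theorem 2.1 indeed fails when S is not convex.

```python
import numpy as np

f = lambda z: -z[0] * z[1]                    # objective of Example 2.3
grad_f = lambda z: np.array([-z[1], -z[0]])

x_bar = np.array([1.0, 1.0])                  # known minimizer, f(x_bar) = -1
x = np.array([2.0, 0.5])                      # another minimizer on x1*x2 = 1

both_optimal = np.isclose(f(x_bar), -1.0) and np.isclose(f(x), -1.0)

# grad f(x) = (-0.5, -2) versus grad f(x_bar) = (-1, -1): the componentwise
# ratios (0.5, 2) differ, so no p(x) > 0 gives grad f(x) = p(x) grad f(x_bar).
ratios = grad_f(x) / grad_f(x_bar)
collinear = np.isclose(ratios[0], ratios[1])
```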

3 Optimality Conditions for Pseudoconvex Set-Constrained Programs in Terms of a Finite Number of Known Solutions

There are a lot of optimality conditions, which concern various kinds of problems. Some problems have more than one solution, and we are interested in the whole solution set. In this section, we obtain optimality conditions for the problem (P), provided that several solutions x1, x2, ..., xs are known.


Lemma 3.1 Let Γ ⊆ R^n be an open convex set, S ⊆ Γ be an arbitrary convex subset, and f be a differentiable pseudoconvex function defined on Γ. Let xi, i = 1, 2, ..., s, s ≥ 2 be global solutions of the problem (P). If x is a feasible point, and there exists a vector p = (p1, p2, ..., ps) with pi ≥ 0 for all indices i, p ≠ 0, such that (x, p) is a solution of the system of equations

⟨∇f(xi), x − xi⟩ = 0,  i = 1, 2, ..., s,    (10)

∇f(x) = ∑_{i=1}^{s} pi ∇f(xi),    (11)

then x is also a global solution of (P).

Proof We prove that ⟨∇f(x1), xs − x⟩ = 0. By Lemma 2.2, f is quasiconvex on Γ. According to (10), we have

⟨∇f(x1), x1 − x⟩ = 0.    (12)

Let i be an arbitrary index such that 1 ≤ i ≤ s − 1. By Lemma 2.6,

⟨∇f(x1), xi − x1⟩ = 0  and  ⟨∇f(x1), xi+1 − x1⟩ = 0.

It follows from here that

⟨∇f(x1), xi+1 − xi⟩ = 0,  i = 1, 2, ..., s − 1.    (13)

Adding all equations (12) and (13), we conclude that ⟨∇f(x1), xs − x⟩ = 0. Using similar arguments, we can prove that

⟨∇f(xi), xs − x⟩ = 0,  i = 1, 2, ..., s.    (14)

Multiplying each equation from (14) by pi and adding all obtained equations, we conclude from (11) that

⟨∇f(x), xs − x⟩ = 0.

It follows from here, by the pseudoconvexity of f, that f(xs) ≥ f(x). The last inequality implies, by x ∈ S, that x ∈ S̄. □

Theorem 3.1 Let Γ ⊆ R^n be an open convex set, S ⊆ Γ be an arbitrary convex subset, and f be a differentiable pseudoconvex function defined on Γ. Assume that xi ∈ S̄, i = 1, 2, ..., s, s ≥ 1 are known solutions of the problem (P). Then, a feasible point x ∈ S is a solution of (P) if and only if there exists a vector p = (p1, p2, ..., ps) with pi ≥ 0 for all indices i, p ≠ 0, such that (x, p) is a solution of the system of equations (10), (11).

Proof Suppose that x ∈ S̄. We prove that (10) and (11) are satisfied. It follows from Lemma 2.2 that f is quasiconvex on Γ. Choose any point xi, i ∈ {1, 2, ..., s}. Then, by Lemma 2.6, we obtain that ⟨∇f(xi), x − xi⟩ = 0. It follows from Corollary 2.1 and Example 2.2 that the vectors of the system ∇f(xi), i = 1, 2, ..., s are non-zero and positively linearly independent, or at least one of these vectors is zero. Then, according to Lemma 2.5, there exists a vector p(x), which depends on x and satisfies (11).

Conversely, let (x, p) be a solution of the system (10), (11). Then, by Lemma 3.1, we have x ∈ S̄. □
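A small numerical check of Theorem 3.1 (our own sketch), again with the pseudoconvex function f(x1, x2) = x2/x1 of Example 2.1: given the known solutions x1 = (1, 0) and x2 = (2, 0), the candidate point x = (1.5, 0) satisfies system (10), (11) with p = (2/3, 0), and it is indeed a minimizer.

```python
import numpy as np

grad_f = lambda x: np.array([-x[1] / x[0]**2, 1.0 / x[0]])

known = [np.array([1.0, 0.0]), np.array([2.0, 0.0])]   # known solutions
x = np.array([1.5, 0.0])                               # candidate feasible point

# System (10): <grad f(xi), x - xi> = 0 for every known solution xi.
eq10 = all(np.isclose(grad_f(xi) @ (x - xi), 0.0) for xi in known)

# System (11): grad f(x) = p1 grad f(x1) + p2 grad f(x2) with p >= 0, p != 0.
# Here grad f(x) = (0, 2/3), grad f(x1) = (0, 1), grad f(x2) = (0, 1/2),
# so for instance p = (2/3, 0) works.
p = np.array([2.0 / 3.0, 0.0])
eq11 = np.allclose(grad_f(x), p[0] * grad_f(known[0]) + p[1] * grad_f(known[1]))
```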


In the next result, we obtain optimality criteria for the problem with inequality constraints in terms of several known solutions. Let Γ ⊆ R^n be an open set, and let f : Γ → R and gi : Γ → R, i = 1, 2, ..., m be given functions. Consider the problem with inequality constraints:

(PI)    min f(x)  s.t.  gi(x) ≤ 0,  i = 1, 2, ..., m.

Let S be the constraint set, that is, S := {x ∈ Γ : g(x) ≤ 0}, and let S̄ be the solution set of (PI). Suppose that x̄ ∈ S̄ is a given solution. Denote by R^m_+ the orthant of R^m whose vectors have non-negative components. Denote by I(x) := {j ∈ {1, 2, ..., m} : gj(x) = 0} the set of active constraints at x ∈ S. The following result is known as Fritz John's optimality conditions (see [20]):

Fritz John's Theorem Let x̄ be a local minimizer of the problem (PI). Let f, gi, i ∈ I(x̄) be differentiable at x̄, and gi, i ∉ I(x̄) be continuous at x̄. Then, there exists a Lagrange multiplier λ ∈ R^{1+m}_+, λ = (λ0, λ1, λ2, ..., λm) ≠ 0 such that

λ0 ∇f(x̄) + ∑_{i∈I(x̄)} λi ∇gi(x̄) = 0    (15)

and λi gi(x̄) = 0 for all i = 1, 2, ..., m.

If the set Γ is convex, f is differentiable and pseudoconvex, and gi, i = 1, 2, ..., m are differentiable and quasiconvex, then (15) with λ0 = 1 is sufficient for x̄ to be a global minimizer.

Theorem 3.2 Let the functions f and g be Fréchet differentiable on some open convex set Γ, f be pseudoconvex, and gi, i = 1, 2, ..., m be quasiconvex on Γ. Consider the points x1, x2, ..., xs, which are known global solutions of the problem (PI). Then, a point x ∈ S is a global solution of the problem (PI) if and only if there exist a vector p = (p1, p2, ..., ps) ∈ R^s_+, p ≠ 0, and vectors λ^i = (λ_0^i, λ_1^i, ..., λ_m^i) ∈ R^{1+m}_+, λ^i ≠ 0, i = 1, 2, ..., s such that (x, p, λ^1, λ^2, ..., λ^s) is a solution of the system of equations (10), (11), and

λ_j^i ⟨∇gj(xi), x − xi⟩ = 0,  j ∈ I(xi), i = 1, 2, ..., s.    (16)

Proof Let x ∈ S̄. By the quasiconvexity of the constraint functions and Lemma 2.1, it follows that the constraint set S is convex. Then, according to Theorem 3.1, (10) and (11) are satisfied. We prove that (16) holds. We obtain from the quasiconvexity of the constraints that

⟨∇gj(xi), x − xi⟩ ≤ 0,  j ∈ I(xi),    (17)

because gj(x) − gj(xi) = gj(x) ≤ 0 and gj[xi + t(x − xi)] ≤ gj(xi) = 0 for all t ∈ [0, 1], j ∈ I(xi). We conclude from Fritz John's conditions that there exist vectors λ^i ∈ R^{1+m}_+, λ^i ≠ 0 such that

λ_0^i ⟨∇f(xi), x − xi⟩ + ∑_{j∈I(xi)} λ_j^i ⟨∇gj(xi), x − xi⟩ = 0,  i = 1, 2, ..., s.

Then, by (10) and (17), it follows from here that λ_j^i ⟨∇gj(xi), x − xi⟩ = 0 for all j ∈ I(xi). Thus (16) holds.

The converse claim follows from Theorem 3.1. □


4 Characterizations of the Solution Set of a Second-Order Pseudoconvex Set-Constrained Program

Let Γ ⊆ R^n be an open set. Recall that the following directional derivative of a Fréchet differentiable function f : Γ → R at a point x ∈ Γ in direction d ∈ R^n,

f''_−(x, d) := liminf_{t→0+} 2 t^{−2} [f(x + td) − f(x) − t ⟨∇f(x), d⟩],

is usually called the second-order lower Dini directional derivative (or Peano derivative). Every Fréchet differentiable function has a second-order lower Dini derivative, possibly infinite. We denote by

f''(x, d) := lim_{t→0+} 2 t^{−2} [f(x + td) − f(x) − t ⟨∇f(x), d⟩]

the standard second-order directional derivative. The following notion was introduced by Ginchev and Ivanov [21] in more general settings:

Definition 4.1 A Fréchet differentiable function f : Γ → R is called second-order pseudoconvex iff, for all x, y ∈ Γ, the following implications hold:

f(y) < f(x)  ⇒  ⟨∇f(x), y − x⟩ ≤ 0;

f(y) < f(x), ⟨∇f(x), y − x⟩ = 0  ⇒  f''_−(x, y − x) < 0.

The following result is due to Ginchev and Ivanov [21]:

Lemma 4.1 Every second-order pseudoconvex differentiable function, defined on a convex set, is quasiconvex.

This class of functions strictly includes all differentiable pseudoconvex functions (see the examples given in [21]). The second-order Karush-Kuhn-Tucker necessary optimality conditions for problems with inequality constraints become sufficient when the objective function is second-order pseudoconvex and the constraints are quasiconvex [22, Theorem 1].

In this section, we derive second-order characterizations for set-constrained problems with a second-order pseudoconvex differentiable objective function, applying the condition ∇f(x) = p(x)∇f(x̄). Thus we obtain characterizations of the solution set of problems with a more general objective function than the usual pseudoconvex Fréchet differentiable one. Consider the following sets:

Ã := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ = 0, ∃p(x) > 0 : ∇f(x) = p(x)∇f(x̄), ∃f''(x, x̄ − x), f''(x, x̄ − x) = 0};

Ã1 := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ ≤ 0, ∃p(x) > 0 : ∇f(x) = p(x)∇f(x̄), ∃f''(x, x̄ − x), f''(x, x̄ − x) ≥ 0};

A* := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ = 0, ∃p(x) > 0 : ∇f(x) = p(x)∇f(x̄), ∃f''(x, x̄ − x), f''(x, x̄ − x) = f''_−(x̄, x − x̄)};


A*1 := {x ∈ S : ⟨∇f(x̄), x − x̄⟩ ≤ 0, ∃p(x) > 0 : ∇f(x) = p(x)∇f(x̄), ∃f''(x, x̄ − x), f''(x, x̄ − x) ≥ f''_−(x̄, x − x̄)};

A# := {x ∈ S : f''_−(x̄, x − x̄) = 0}.

We do not suppose that the existence of the second-order directional derivatives is guaranteed. This case is more general than the case which includes the assumption that f is second-order differentiable in every direction. For example, if f''(x, x̄ − x) does not exist, then x ∉ Ã.

Theorem 4.1 Let x̄ ∈ S̄ be a known solution of (P), Γ ⊆ R^n be an open convex set, and S ⊆ Γ be an arbitrary convex subset. If the function f is differentiable and second-order pseudoconvex on Γ, then

S̄ = Ã = Ã1 = Ã ∩ A# = A* = A*1.

Proof It is obvious that Ã ∩ A# ⊆ Ã ⊆ Ã1 and Ã ∩ A# ⊆ A* ⊆ A*1.

We prove that S̄ ⊆ Ã ∩ A#. The function f is quasiconvex on Γ by Lemma 4.1. Suppose that x ∈ S̄. It follows from Lemma 2.6 and Corollary 2.1 that ⟨∇f(x̄), x − x̄⟩ = 0 and there exists p(x) > 0 such that (5) is satisfied. Since S̄ is convex, interchanging x̄ and x, we obtain that

⟨∇f(x), x̄ − x⟩ = 0  and  f[x + t(x̄ − x)] = f[x̄ + t(x − x̄)] = f(x) = f(x̄) for all t ∈ [0, 1].

Therefore f''(x̄, x − x̄) = f''_−(x̄, x − x̄) = 0, and f''(x, x̄ − x) exists with f''(x, x̄ − x) = 0.

We prove that A*1 ⊆ Ã1. Let x ∈ A*1. Since S is convex and x̄, x ∈ S, then x̄ + t(x − x̄) ∈ S for all t ∈ [0, 1]. By x̄ ∈ S̄ we have f[x̄ + t(x − x̄)] ≥ f(x̄), ∀t ∈ [0, 1]. It follows from here and ⟨∇f(x̄), x − x̄⟩ ≤ 0 that ⟨∇f(x̄), x − x̄⟩ = 0. Therefore f''_−(x̄, x − x̄) ≥ 0, which implies by x ∈ A*1 that f''(x, x̄ − x) ≥ 0.

At last, we prove that Ã1 ⊆ S̄. Let x ∈ Ã1. Assume, to the contrary, that x ∉ S̄. Hence f(x̄) < f(x). According to the second-order pseudoconvexity, ⟨∇f(x), x̄ − x⟩ ≤ 0. By (5), we obtain that ⟨∇f(x̄), x − x̄⟩ ≥ 0. Taking into account that x ∈ Ã1, we have ⟨∇f(x̄), x − x̄⟩ = 0. By (5), we conclude that ⟨∇f(x), x̄ − x⟩ = 0. Then, by the second-order pseudoconvexity, it follows from here that f''(x, x̄ − x) = f''_−(x, x̄ − x) < 0, which contradicts the assumption x ∈ Ã1. □
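For a twice continuously differentiable function, f''(x, d) is the quadratic form of the Hessian, which gives a cheap numerical sanity check of the difference quotient defining these derivatives (our own sketch, using f(x1, x2) = x2/x1 from Example 2.1):

```python
import numpy as np

f = lambda x: x[1] / x[0]
grad_f = lambda x: np.array([-x[1] / x[0]**2, 1.0 / x[0]])

def dini_quotient(x, d, t):
    """The quotient 2 t^-2 [f(x + t d) - f(x) - t <grad f(x), d>]."""
    return 2.0 / t**2 * (f(x + t * d) - f(x) - t * (grad_f(x) @ d))

x, d = np.array([1.0, 1.0]), np.array([1.0, 0.0])
approx = dini_quotient(x, d, 1e-4)          # quotient at a small t > 0

# Hessian of f at x = (1, 1) is [[2, -1], [-1, 0]], so f''(x, d) = <H d, d> = 2.
H = np.array([[2.0, -1.0], [-1.0, 0.0]])
exact = d @ H @ d
```

For merely Fréchet differentiable functions the limit may fail to exist, which is exactly why the paper works with the lower limit f''_−.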

5 Characterizations of the Solution Sets of Variational Inequalities

In this section, we derive some characterizations of the solution sets of the Stampacchia and the Minty variational inequality problems. Some previous results were obtained in [10] (concerning G-maps) and [12]. Suppose that S ⊆ R^n. Let F : S → R^n be a given single-valued map. Consider the Stampacchia variational inequality problem: find x̄ ∈ S such that

⟨F(x̄), x − x̄⟩ ≥ 0,  ∀x ∈ S.

We denote this problem by SVI(F, S). We also consider the Minty variational inequality problem: find x̄ ∈ S such that

⟨F(x), x̄ − x⟩ ≤ 0,  ∀x ∈ S.

We denote this problem by MVI(F, S). The following definition was introduced by Karamardian [23]:

Definition 5.1 A single-valued map F : S → R^n is called pseudomonotone on S iff the following implication holds:

x ∈ S, y ∈ S, ⟨F(x), y − x⟩ ≥ 0  ⇒  ⟨F(y), x − y⟩ ≤ 0.

The following result is known (see [24, Proposition 3.1] or [25]):

Proposition 5.1 Let S be a nonempty, closed and convex subset of R^n and let F be a continuous, pseudomonotone mapping from S to R^n. Then, x̄ solves the problem SVI(F, S) if and only if x̄ is a solution of MVI(F, S).

In Ref. [25], an example is given which shows that the pseudomonotonicity of F in Proposition 5.1 cannot be replaced by quasimonotonicity. In the next result, we prove that, in some sense, the pseudomonotone maps are the largest class of maps satisfying this proposition. We denote by [x, y] the line segment in R^n with endpoints x and y.

Proposition 5.2 Let S be a nonempty, closed and convex subset of R^n and let F be a continuous mapping from S to R^n. Then the following statements are equivalent:

(i) F is pseudomonotone on S;
(ii) the solution sets of SVI(F, X) and MVI(F, X) coincide for all closed convex subsets X ⊆ S;
(iii) the solution sets of SVI(F, [x, y]) and MVI(F, [x, y]) coincide for all x, y ∈ S.

Proof (i) ⇒ (ii). This claim follows from Proposition 5.1. The implication (ii) ⇒ (iii) is trivial.

(iii) ⇒ (i). We prove that F is pseudomonotone. Let x, y be any points from S such that ⟨F(x), y − x⟩ ≥ 0. Then, we obtain from here that ⟨F(x), z − x⟩ ≥ 0 for all z ∈ [x, y]. Therefore, x is a solution of the problem SVI(F, [x, y]). It follows from (iii) that x is a solution of MVI(F, [x, y]). Hence, ⟨F(z), x − z⟩ ≤ 0 for all z ∈ [x, y]. In particular, ⟨F(y), x − y⟩ ≤ 0, which implies that F is pseudomonotone. □

The next definition was introduced by Crouzeix, Marcotte and Zhu [26].

Definition 5.2 A single-valued map F : S → R^n, S ⊆ R^n, is said to be pseudomonotone-star (or pseudomonotone*) on S iff it is pseudomonotone on S and the following implication holds for all x ∈ S, y ∈ S:

⟨F(x), y − x⟩ = 0, ⟨F(y), x − y⟩ = 0  ⇒  ∃p(x, y) > 0 : F(y) = p(x, y) F(x).

Denote the solution set of SVI(F, S) by V̄. We suppose that x̄ is a given solution of this problem.
Consider the sets: Vˆ := {x ∈ S : hF(x), ¯ x − xi ¯ = 0, ∃ p(x) > 0 : F(x) = p(x) F(x)}, ¯

13

Vˆ1 := {x ∈ S : hF(x), ¯ x − xi ¯ ≤ 0, ∃ p(x) > 0 : F(x) = p(x) F(x)}, ¯ V˜ := {x ∈ S : hF(x), x¯ − xi = 0}, V˜1 := {x ∈ S : hF(x), x¯ − xi ≥ 0}, V ∗ := {x ∈ S : hF(x), x¯ − xi = hF(x), ¯ x − xi}, ¯ V1∗ := {x ∈ S : hF(x), x¯ − xi ≥ hF(x), ¯ x − xi}, ¯ V # := {x ∈ S : hF(x), ¯ x − xi ¯ = 0}, Theorem 5.1 Let the map F : S → Rn be pseudomonotone∗ on S, and x¯ ∈ V¯ be a given point. Then V¯ = Vˆ = Vˆ1 = V˜ = V˜1 = V ∗ = V1∗ = V˜ ∩V # . Proof It is obvious that V˜ ∩V # ⊆ V˜ ⊆ V˜1 ,

V˜ ∩V # ⊆ V ∗ ⊆ V1∗ ,

Vˆ ⊆ Vˆ1 .

We prove that $\tilde V_1 \subseteq \tilde V \cap V^{\#}$. Let $x \in \tilde V_1$; then $\langle F(x), \bar x - x\rangle \ge 0$. By pseudomonotonicity, we obtain from here that $\langle F(\bar x), x - \bar x\rangle \le 0$. It follows from $\bar x \in \bar V$ that $\langle F(\bar x), y - \bar x\rangle \ge 0$ for all $y \in S$; in particular, $\langle F(\bar x), x - \bar x\rangle \ge 0$. By the pseudomonotonicity of $F$, we have that $\langle F(x), \bar x - x\rangle \le 0$. We conclude from all these inequalities that
$$\langle F(\bar x), x - \bar x\rangle = 0, \qquad \langle F(x), \bar x - x\rangle = 0, \tag{18}$$
which implies that $x \in \tilde V \cap V^{\#}$.

We prove that $V^*_1 \subseteq \tilde V_1$. Let $x \in V^*_1$; then $\langle F(x), \bar x - x\rangle \ge \langle F(\bar x), x - \bar x\rangle$. We obtain from $\bar x \in \bar V$ that $\langle F(\bar x), x - \bar x\rangle \ge 0$. Hence $\langle F(x), \bar x - x\rangle \ge 0$, which implies that $x \in \tilde V_1$. Therefore $\tilde V \cap V^{\#} = \tilde V = \tilde V_1 = V^* = V^*_1$.

We prove that $\bar V \subseteq \tilde V_1$. Let $x \in \bar V$; then $\langle F(x), y - x\rangle \ge 0$ for all $y \in S$. In particular, $\langle F(x), \bar x - x\rangle \ge 0$, which implies that $x \in \tilde V_1$.

We prove that $\tilde V \cap V^{\#} \subseteq \hat V$. Let $x \in \tilde V \cap V^{\#}$. By pseudomonotonicity$^*$ and (18), there exists $p(x) > 0$ such that $F(x) = p(x)\, F(\bar x)$. It follows from here that $x \in \hat V$.

Finally, we prove that $\hat V_1 \subseteq \bar V$. Let $x \in \hat V_1$, and let $y$ be an arbitrary point from $S$. Therefore
$$\langle F(x), y - x\rangle = p\,\langle F(\bar x), y - x\rangle = p\,\langle F(\bar x), y - \bar x\rangle + p\,\langle F(\bar x), \bar x - x\rangle. \tag{19}$$
We conclude from $x \in \hat V_1$ that $\langle F(\bar x), \bar x - x\rangle \ge 0$. On the other hand, it follows from $\bar x \in \bar V$ that $\langle F(\bar x), y - \bar x\rangle \ge 0$. Therefore, by (19), we have $\langle F(x), y - x\rangle \ge 0$ for all $y \in S$. Hence $x \in \bar V$. $\square$
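The sets in Theorem 5.1 that do not involve the multiplier $p(x)$ are cheap to test numerically. The following sketch (the function and key names are assumptions of this illustration, not notation from the paper) evaluates, for a given $F$, known solution $\bar x$, and candidate $x$, membership in $\tilde V$, $\tilde V_1$, $V^*$, $V^*_1$ and $V^{\#}$ up to a tolerance.

```python
def dot(u, v):
    """Euclidean inner product of two same-length sequences."""
    return sum(a * b for a, b in zip(u, v))

def residual_sets(F, xbar, x, tol=1e-9):
    """Membership of x in the sets of Theorem 5.1 defined by the two
    residuals s = <F(x), xbar - x> and t = <F(xbar), x - xbar>.
    (The sets V-hat and V-hat-1 additionally need the positive-multiple
    test of Definition 5.2 and are omitted from this sketch.)"""
    d = [b - a for a, b in zip(x, xbar)]        # d = xbar - x
    s = dot(F(x), d)                            # <F(x), xbar - x>
    t = -dot(F(xbar), d)                        # <F(xbar), x - xbar>
    return {
        "V~":  abs(s) <= tol,                   # <F(x), xbar - x> = 0
        "V~1": s >= -tol,                       # <F(x), xbar - x> >= 0
        "V*":  abs(s - t) <= tol,               # s = t
        "V*1": s >= t - tol,                    # s >= t
        "V#":  abs(t) <= tol,                   # <F(xbar), x - xbar> = 0
    }
```

For a pseudomonotone$^*$ map these memberships all describe the solution set $\bar V$; Example 5.1 below shows they can be strictly larger when the map is only pseudomonotone.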

Remark 5.1 According to Proposition 5.1, the results of Theorem 5.1 are also characterizations of the solution set of MVI$(F, S)$ in the case when $S$ is closed and convex, $F$ is continuous and pseudomonotone$^*$, and $\bar x$ is a known solution of MVI$(F, S)$.

Under the assumptions of Theorem 5.1, the following claims are consequences of the characterization $\bar V = \hat V$:


(i) A point $x \in S$ is a solution of the variational inequality if and only if there exists a number $p > 0$ such that $(x, p)$ is a solution of the system of equations
$$\langle F(\bar x), x - \bar x\rangle = 0, \qquad F(x) - p\, F(\bar x) = 0; \tag{20}$$
(ii) $\bar x$ is the only solution of the variational inequality if and only if the system (20) has no solutions other than $\bar x$.

The claims $\bar V = \hat V = \tilde V \cap V^{\#}$ are proved in the paper [12] (see Theorem 4.4), and $\bar V = \tilde V = \tilde V_1$ follows from Remark 4.1 in the same reference. We present some different aspects in Theorem 5.1, and our proof is based on a different, quite compact approach.

The following example shows that pseudomonotonicity$^*$ in Theorem 5.1 cannot be replaced by the assumption that $F$ is only pseudomonotone.

Example 5.1 Consider the map $F : \mathbb{R}^2 \to \mathbb{R}^2$ defined by
$$F(x) = (x_1,\ 2x_1 + x_2), \quad \text{where } x = (x_1, x_2),$$
and $S = \{x \in \mathbb{R}^2 : x_1 + x_2 \ge 0\}$. The map is pseudomonotone on $\mathbb{R}^2$, because the system
$$\langle F(x), y - x\rangle \ge 0, \qquad \langle F(y), x - y\rangle > 0$$
has no solution $(x, y)$, but it is not pseudomonotone$^*$. Indeed, $x = (-1, 1)$ and $y = (1, -1)$ satisfy the equations $\langle F(x), y - x\rangle = 0$ and $\langle F(y), x - y\rangle = 0$, but $F(y) = (1, 1) = -F(x)$. Choose $\bar x = (0, 0)$. We can immediately see that
$$\hat V = \hat V_1 = \{\bar x\} \subsetneq \bar V = \{x = (x_1, x_2) \in \mathbb{R}^2 : x_1 + x_2 = 0,\ x_1 \ge 0\} \subsetneq \tilde V = \tilde V_1 = V^* = V^*_1 = \tilde V \cap V^{\#} = \{x \in \mathbb{R}^2 : x_1 + x_2 = 0\}.$$
We prove that $\bar V = \{x \in \mathbb{R}^2 : x_1 + x_2 = 0,\ x_1 \ge 0\}$. First, we write the inequality
$$\langle F(x), y - x\rangle \ge 0, \quad \forall y \in S$$
in the form
$$x_1 (y_1 + y_2) + y_2 (x_1 + x_2) \ge (x_1 + x_2)^2, \quad \forall y \in S.$$
Taking $y_1 = 0$, $y_2 = 0$ gives $(x_1 + x_2)^2 \le 0$, hence $x_1 + x_2 = 0$. It follows from here that
$$x_1 (y_1 + y_2) \ge 0, \quad \forall y \in S.$$
Since $y_1 + y_2$ ranges over $[0, \infty)$ on $S$, we conclude that $x_1 \ge 0$. It is easy to see that $\hat V = \hat V_1 = \{\bar x\}$ and $\tilde V = \tilde V_1 = V^* = V^*_1 = \tilde V \cap V^{\#} = \{x \in \mathbb{R}^2 : x_1 + x_2 = 0\}$.
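The computations in Example 5.1 are easy to confirm mechanically. The short check below (plain Python; variable names are illustrative) verifies that the pair $x = (-1, 1)$, $y = (1, -1)$ satisfies both orthogonality equations while $F(y) = -F(x)$, so no positive multiplier $p$ can exist.

```python
# Numerical confirmation of Example 5.1.
F = lambda x: (x[0], 2 * x[0] + x[1])

def dot(u, v):
    """Euclidean inner product of two same-length sequences."""
    return sum(a * b for a, b in zip(u, v))

x, y = (-1.0, 1.0), (1.0, -1.0)
d = [b - a for a, b in zip(x, y)]              # d = y - x

lhs_x = dot(F(x), d)                            # <F(x), y - x>
lhs_y = dot(F(y), [-c for c in d])              # <F(y), x - y>
opposite = all(b == -a for a, b in zip(F(x), F(y)))  # F(y) = -F(x)?

print(lhs_x, lhs_y, opposite)                   # both inner products vanish
```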


6 Second-Order Characterizations of the Solution Set of a Problem with Inequality Constraints

In this section, we derive second-order characterizations of the solution set of the nonlinear programming problem with inequality constraints (PI), provided that some solution $\bar x$ is known.

Definition 6.1 Let the functions $f$ and $g_i$, $i = 1, 2, \ldots, m$, be Fréchet differentiable. A direction $d$ is called critical at the point $x \in S$ iff
$$\langle \nabla f(x), d\rangle \le 0 \quad \text{and} \quad \langle \nabla g_i(x), d\rangle \le 0 \ \text{ for all } i \in I(x).$$
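Definition 6.1 translates directly into a small test. The sketch below (the names `grad_f`, `grads_g`, `active` are assumptions of this illustration, not notation from the paper) decides whether a direction $d$ is critical at $x$, given the gradients of $f$ and of the active constraints.

```python
def is_critical(grad_f, grads_g, active, x, d, tol=1e-9):
    """d is critical at x iff <grad f(x), d> <= 0 and
    <grad g_i(x), d> <= 0 for every active index i in I(x).
    grad_f and each grads_g[i] are callables returning gradient
    tuples; all names are this sketch's assumptions."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    if dot(grad_f(x), d) > tol:
        return False
    return all(dot(grads_g[i](x), d) <= tol for i in active)
```

For $f(x) = x_1^2 + x_2^2$ with the single constraint $g_1(x) = x_1 + x_2 \le 0$, active at the origin, the direction $(-1, 0)$ is critical there while $(1, 1)$ is not.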

The next result, due to Ginchev and Ivanov [22], contains second-order necessary and sufficient conditions for optimality:

Proposition 6.1 Let $\Gamma$ be an open set in the space $\mathbb{R}^n$, and let the functions $f$, $g_i$ ($i = 1, 2, \ldots, m$) be defined on $\Gamma$. We also suppose that $\bar x$ is a local minimizer of the problem (PI), the functions $g_i$, $i \notin I(\bar x)$, are continuous at $\bar x$, the functions $f$, $g_i$, $i \in I(\bar x)$, are continuously differentiable, and for every direction $d \in \mathbb{R}^n$ such that $\langle \nabla f(\bar x), d\rangle = 0$ and $\langle \nabla g_i(\bar x), d\rangle = 0$, $i \in I(\bar x)$, there exist the second-order directional derivatives $f''(\bar x, d)$ and $g_i''(\bar x, d)$, $i \in I(\bar x)$. Then, corresponding to any critical direction $d$, there exist nonnegative multipliers $\lambda_0, \lambda_1, \ldots, \lambda_m$, not all equal to zero, such that the Lagrange function $L = \lambda_0 f + \sum_{i=1}^{m} \lambda_i g_i$ satisfies the following conditions:
$$\nabla L(\bar x) = 0, \quad \lambda_i g_i(\bar x) = 0,\ i = 1, 2, \ldots, m, \quad \lambda_0 \langle \nabla f(\bar x), d\rangle = 0, \quad \lambda_i \langle \nabla g_i(\bar x), d\rangle = 0,\ i \in I(\bar x), \tag{21}$$
$$L''(\bar x, d) = \lambda_0 f''(\bar x, d) + \sum_{i \in I(\bar x)} \lambda_i g_i''(\bar x, d) \ge 0. \tag{22}$$
If $\Gamma$ is a convex set, $f$ is second-order pseudoconvex, and $g_i$, $i = 1, 2, \ldots, m$, are quasiconvex on $\Gamma$, then conditions (21), (22) with $\lambda_0 = 1$ become sufficient for $\bar x$ to be a global minimizer.

Consider the following sets, where $\bar x \in \bar S$ and $\lambda \in \mathbb{R}^m_+$ are known:
$$\tilde B := \{x \in S : \langle \nabla f(\bar x), x - \bar x\rangle = 0,\ \exists p > 0 : \nabla f(x) = p\, \nabla f(\bar x),\ \exists f''(x, \bar x - x),\ f''(x, \bar x - x) = 0,\ \exists g_i''(\bar x, x - \bar x),\ \lambda_i \langle \nabla g_i(\bar x), x - \bar x\rangle = \lambda_i g_i''(\bar x, x - \bar x) = 0,\ i \in I(\bar x)\};$$
$$\tilde B_1 := \{x \in S : \langle \nabla f(\bar x), x - \bar x\rangle \le 0,\ \exists p > 0 : \nabla f(x) = p\, \nabla f(\bar x),\ \exists f''(x, \bar x - x),\ f''(x, \bar x - x) \ge 0,\ \exists g_i''(\bar x, x - \bar x),\ \lambda_i \langle \nabla g_i(\bar x), x - \bar x\rangle = \lambda_i g_i''(\bar x, x - \bar x) = 0,\ i \in I(\bar x)\};$$
$$B^* := \{x \in S : \langle \nabla f(\bar x), x - \bar x\rangle = 0,\ \exists p > 0 : \nabla f(x) = p\, \nabla f(\bar x),\ \exists f''(x, \bar x - x),\ f''(x, \bar x - x) = f''_-(\bar x, x - \bar x),\ \exists g_i''(\bar x, x - \bar x),\ \lambda_i \langle \nabla g_i(\bar x), x - \bar x\rangle = \lambda_i g_i''(\bar x, x - \bar x) = 0,\ i \in I(\bar x)\};$$
$$B^*_1 := \{x \in S : \langle \nabla f(\bar x), x - \bar x\rangle \le 0,\ \exists p > 0 : \nabla f(x) = p\, \nabla f(\bar x),\ \exists f''(x, \bar x - x),\ f''(x, \bar x - x) \ge f''_-(\bar x, x - \bar x),\ \exists g_i''(\bar x, x - \bar x),\ \lambda_i \langle \nabla g_i(\bar x), x - \bar x\rangle = \lambda_i g_i''(\bar x, x - \bar x) = 0,\ i \in I(\bar x)\};$$
$$B^{\#} := \{x \in S : f''_-(\bar x, x - \bar x) = 0\}.$$


Theorem 6.1 Let $\bar x \in \bar S$ be a known solution of (PI). We also suppose that $\Gamma \subseteq \mathbb{R}^n$ is an open convex set, the functions $g_i$, $i \notin I(\bar x)$, are continuous at $\bar x$, the functions $f$, $g_i$, $i \in I(\bar x)$, are continuously differentiable, for every direction $d \in \mathbb{R}^n$ such that $\langle \nabla f(\bar x), d\rangle = 0$ and $\langle \nabla g_i(\bar x), d\rangle = 0$, $i \in I(\bar x)$, there exist the second-order directional derivatives $f''(\bar x, d)$ and $g_i''(\bar x, d)$, $i \in I(\bar x)$, $f$ is second-order pseudoconvex on $\Gamma$, and the constraints $g_i$, $i = 1, 2, \ldots, m$, are quasiconvex on $\Gamma$. If the second-order optimality conditions of Fritz John type are satisfied, and for every critical direction $d$ the multiplier $\lambda \in \mathbb{R}^m_+$ is known, then
$$\bar S = \tilde B = \tilde B_1 = \tilde B \cap B^{\#} = B^* = B^*_1.$$

Proof It is obvious that $\tilde B \cap B^{\#} \subseteq \tilde B \subseteq \tilde B_1$ and $\tilde B \cap B^{\#} \subseteq B^* \subseteq B^*_1$.

We prove that $\bar S \subseteq \tilde B \cap B^{\#}$. By the quasiconvexity of the constraint functions and Lemma 2.1, the constraint set $S$ is convex. Let $x$ be an arbitrary point from the solution set $\bar S$. By Lemma 4.1, the objective function is quasiconvex. Then the direction $x - \bar x$ is critical, because the constraint functions are also quasiconvex and $x, \bar x \in \bar S$. According to Lemma 2.6, Corollary 2.1 and the quasiconvexity of $f$, we have that $\langle \nabla f(\bar x), x - \bar x\rangle = \langle \nabla f(x), \bar x - x\rangle = 0$, there exists $p(x) > 0$ such that (5) holds, there exists $f''(x, \bar x - x)$, and $f''(x, \bar x - x) = f''(\bar x, x - \bar x) = 0$. It follows from the quasiconvexity of $g_i$ and the convexity of $S$ that $\langle \nabla g_i(\bar x), x - \bar x\rangle \le 0$, $i \in I(\bar x)$. Then, by (21), we obtain that $\lambda_i \langle \nabla g_i(\bar x), x - \bar x\rangle = 0$, $i \in I(\bar x)$. It follows from here, and again from the quasiconvexity of the constraints, that
$$\lambda_i (g_i)''_-(\bar x, x - \bar x) \le 0, \quad i \in I(\bar x).$$
Since the direction $x - \bar x$ is critical, by the hypothesis of the theorem there exists the limit $g_i''(\bar x, x - \bar x)$, and $\lambda_i g_i''(\bar x, x - \bar x) = \lambda_i (g_i)''_-(\bar x, x - \bar x) \le 0$. Then we conclude from (22), $f''(\bar x, x - \bar x) = 0$, and $\lambda_i g_i''(\bar x, x - \bar x) \le 0$, $i \in I(\bar x)$, that
$$\lambda_i g_i''(\bar x, x - \bar x) = 0 \quad \text{when} \quad i \in I(\bar x).$$
Hence $x \in \tilde B \cap B^{\#}$. The inclusions $B^*_1 \subseteq \tilde B_1$ and $\tilde B_1 \subseteq \bar S$ are consequences of Theorem 4.1, because $\tilde B_1 \subset \tilde A_1$. $\square$

7 Concluding Remarks

In this paper, we derive optimality conditions for set-constrained and inequality-constrained minimization problems, provided that several solutions are known. We also provide several complete characterizations of the solution sets of nonlinear programming problems in terms of a given minimizer. Our study extends a known result due to Mangasarian [1] for convex programs to the case when the objective function is pseudoconvex; one of the conditions is replaced by another, more general one. Additionally, we derive second-order characterizations of the solution set of a program with a second-order pseudoconvex objective function. Thus our results can be applied to more general classes of problems, which strictly include all Fréchet differentiable pseudoconvex ones. Furthermore, we characterize the solution set of the Stampacchia variational inequality with a pseudomonotone-star map, and of the inequality-constrained nonlinear programming problem using Lagrange multipliers. Such results are useful when the task is to find all solutions, provided that one or more of them are known, for instance, by some numerical method.

Acknowledgements The author is thankful to the Editor-in-Chief Professor Franco Giannessi for his help concerning the presentation and to the Publishing House Springer for publishing this paper. This research is partially supported by the TU Varna Grant No 18/2012.

References

1. Mangasarian, O.L.: A simple characterization of solution sets of convex programs. Oper. Res. Lett. 7, 21–26 (1988)
2. Burke, J.V., Ferris, M.C.: Characterization of the solution sets of convex programs. Oper. Res. Lett. 10, 57–60 (1991)
3. Jeyakumar, V.: Infinite dimensional convex programming with applications to constrained approximation. J. Optim. Theory Appl. 75, 469–586 (1992)
4. Jeyakumar, V., Yang, X.Q.: Convex composite multiobjective nonsmooth programming. Math. Program. 59, 325–343 (1993)
5. Jeyakumar, V., Yang, X.Q.: On characterizing the solution sets of pseudolinear programs. J. Optim. Theory Appl. 87, 747–755 (1995)
6. Lalitha, C.S., Mehta, M.: Characterization of the solution sets of pseudolinear programs and pseudoaffine variational inequality problems. J. Nonlinear Convex Anal. 8, 87–98 (2007)
7. Smietanski, M.: A note on characterization of solution sets to pseudolinear programming problems. Appl. Anal. doi:10.1080/00038811.2011.584184
8. Ivanov, V.I.: Characterizations of the solution sets of generalized convex minimization problems. Serdica Math. J. 29, 1–10 (2003)
9. Wu, Z.: The convexity of the solution set of a pseudoconvex inequality. Nonlinear Anal. 69, 1666–1674 (2008)
10. Bianchi, M., Schaible, S.: An extension of pseudolinear functions and variational inequality problems. J. Optim. Theory Appl. 104, 59–71 (2000)
11. Ivanov, V.I.: Optimization and variational inequalities with pseudoconvex functions. J. Optim. Theory Appl. 146, 602–616 (2010)
12. Wu, Z.L., Wu, S.Y.: Characterizations of the solution sets of convex programs and variational inequality problems. J. Optim. Theory Appl. 130, 339–358 (2006)
13. Jeyakumar, V., Lee, G.M., Dinh, N.: Lagrange multiplier conditions characterizing the optimal solution set of cone-constrained convex programs. J. Optim. Theory Appl. 123, 83–103 (2004)
14. Penot, J.-P.: Characterization of solution sets of quasiconvex programs. J. Optim. Theory Appl. 117, 627–636 (2003)
15. Dinh, N., Jeyakumar, V., Lee, G.M.: Lagrange multiplier characterizations of solution sets of constrained pseudolinear optimization problems. Optimization 55, 241–250 (2006)
16. Lalitha, C.S., Mehta, M.: Characterizations of solution sets of mathematical programs in terms of Lagrange multipliers. Optimization 58, 995–1007 (2009)
17. Mangasarian, O.L.: Nonlinear Programming. Classics in Applied Mathematics, vol. 10. SIAM, Philadelphia (1994)
18. Gordan, P.: Über die Auflösungen linearer Gleichungen mit reelen Coefficienten. Math. Ann. 6, 23–28 (1873)
19. Giorgi, G., Guerraggio, A., Thierfelder, J.: Mathematics of Optimization. Elsevier Science & Technology Books (2004)
20. Bazaraa, M.S., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms. John Wiley and Sons, New York (1979)
21. Ginchev, I., Ivanov, V.I.: Higher-order pseudoconvex functions. In: Konnov, I.V., Luc, D.T., Rubinov, A.M. (eds.) Proc. 8th Int. Symp. on Generalized Convexity/Monotonicity. Lecture Notes in Econom. and Math. Systems, vol. 583, pp. 247–264. Springer, Berlin (2007)
22. Ginchev, I., Ivanov, V.I.: Second-order optimality conditions for problems with $C^1$ data. J. Math. Anal. Appl. 340, 646–657 (2008)
23. Karamardian, S.: Complementarity problems over cones with monotone and pseudomonotone maps. J. Optim. Theory Appl. 18, 445–454 (1976)
24. Harker, P.T., Pang, J.S.: Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Math. Program. 48, 161–220 (1990)
25. Marcotte, P., Zhu, D.: Weak sharp solutions of variational inequalities. SIAM J. Optim. 9, 179–189 (1998)
26. Crouzeix, J.-P., Marcotte, P., Zhu, D.: Conditions ensuring the applicability of cutting plane methods for solving variational inequalities. Math. Program. Ser. A 88, 521–539 (2000)