Hindawi Publishing Corporation
Advances in Operations Research, Volume 2013, Article ID 708979, 9 pages
http://dx.doi.org/10.1155/2013/708979

Research Article

Optimality Conditions and Duality of Three Kinds of Nonlinear Fractional Programming Problems

Xiaomin Zhang and Zezhong Wu
Department of Mathematics, Chengdu University of Information Technology, Sichuan 610225, China
Correspondence should be addressed to Xiaomin Zhang; [email protected]

Received 5 April 2013; Accepted 24 October 2013
Academic Editor: Ching-Jong Liao

Copyright © 2013 X. Zhang and Z. Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Assumptions on the objective functions and constraint functions are given under convexity and generalized convexity conditions based on F-convexity, ρ-convexity, and (F, ρ)-convexity. The sufficiency of the Kuhn-Tucker optimality conditions and the corresponding duality results are proved for (F, ρ)-convex, (F, α, ρ, d)-convex, and generalized (F, α, ρ, d)-convex functions.

1. Introduction

Multiobjective optimization theory is a development of numerical optimization and is related to many subjects, such as nonsmooth analysis, convex analysis, nonlinear analysis, and set-valued analysis. It has a wide range of applications in industrial design, economics, engineering, the military, management science, financial investment, transport, and so forth, and it is now an interdisciplinary branch between applied mathematics and the decision sciences. Convexity plays an important role in optimization theory, and it has become an important theoretical basis and a useful tool for mathematical programming. Convex function theory can be traced back to the work of Hölder, Jensen, and Minkowski at the beginning of the twentieth century, but the work that really attracted attention was the research on game theory and mathematical programming by von Neumann and Morgenstern [1], Dantzig, and Kuhn and Tucker in the 1940s and 1950s; intensive research on convex functions followed in the 1950s and 1960s. Convex analysis emerged in the mid-1960s, the concept of a convex function was extended in a variety of ways, and the notion of generalized convexity was introduced. Fractional programming is important in optimization; for instance, to measure the productivity or efficiency of a system in engineering and economics, one minimizes a ratio of functions between a given period of time and a utilized resource.

Preda [2] established the concept of (F, ρ)-convexity based on F-convexity [3] and ρ-convexity [4] and obtained results that extend both F-convexity and ρ-convexity. Motivated by various concepts of convexity, Liang et al. [5] put forward a generalized convexity, called (F, α, ρ, d)-convexity, which extends (F, ρ)-convexity; Liang et al. [6], Weir and Mond [7], Weir [8], Jeyakumar and Mond [9], Egudo [10], Preda [2], and Gulati and Islam [3] obtained corresponding optimality conditions, applied them to define dual problems, and derived duality theorems for single-objective fractional problems and multiobjective problems. The definition of generalized (F, α, ρ, d)-convexity is then given under the (F, α, ρ, d)-convexity condition. In general, however, fractional programming problems are nonconvex, and the Kuhn-Tucker optimality conditions are only necessary. Under what conditions are the Kuhn-Tucker conditions sufficient for optimality? This question has attracted the interest of many researchers, and it is the question we probe here. Building on the former conclusions, by adding conditions on the objective and constraint functions and by modifying the Kuhn-Tucker (K-T) conditions [11], optimality conditions and duals are given under weaker convexity assumptions. The main results of this paper rest on convex and generalized convex functions and on the properties of sublinear functionals.


In this paper, we discuss sufficient optimality conditions and dual problems for three kinds of nonlinear fractional programming problems. The paper is organized as follows. In Sections 3.1 and 3.2, we present Kuhn-Tucker sufficient optimality conditions and duals for a nonlinear fractional programming problem and a multiobjective fractional programming problem based on generalized (F, α, ρ, d)-convexity. Section 3.3 contains optimality conditions and a dual for a multiobjective fractional programming problem under (F, ρ)-convexity. In these sections, we give assumptions on the objective and constraint functions under which the Kuhn-Tucker optimality conditions are sufficient, and we obtain the corresponding duality theorems.

2. Preliminaries

Let E^n be the n-dimensional real vector space, that is, n-dimensional Euclidean space. For y = (y₁, y₂, . . . , yₙ)ᵀ, z = (z₁, z₂, . . . , zₙ)ᵀ ∈ E^n, the following order relations are used (see [12]):

y = z ⟺ yᵢ = zᵢ, i = 1, 2, . . . , n;
y ≧ z ⟺ yᵢ ≧ zᵢ, i = 1, 2, . . . , n;
y > z ⟺ yᵢ > zᵢ, i = 1, 2, . . . , n.    (1)

Definition 1 (see [12]). A point x₀ ∈ X₀ is called an efficient solution of the multiobjective programming problem if there exists no x ∈ X₀ such that f(x) ≦ f(x₀) and f(x) ≠ f(x₀).

Definition 2 (see [12]). A point x₀ ∈ X₀ is called a weakly efficient solution of the multiobjective programming problem if there exists no x ∈ X₀ such that f(x) < f(x₀).

Definition 3 (see [5]). Given an open set X₀ ⊂ R^n, a functional F : X₀ × X₀ × R^n → R is called sublinear if, for any x, x₀ ∈ X₀,

F(x, x₀; a₁ + a₂) ≦ F(x, x₀; a₁) + F(x, x₀; a₂), ∀a₁, a₂ ∈ R^n,
F(x, x₀; αa) = αF(x, x₀; a), ∀α ∈ R, α ≧ 0, ∀a ∈ R^n.    (2)

It follows from the second equality that F(x, x₀; 0) = F(x, x₀; 0 × a) = 0 × F(x, x₀; a) = 0 for any a ∈ R^n.

Throughout, let F : X₀ × X₀ × R^n → R be a sublinear functional, let α : X₀ × X₀ → R₊ \ {0}, ρ = (ρ₁, ρ₂, . . . , ρ_m)ᵀ with ρᵢ ∈ R, and d : X₀ × X₀ → R, and let the function f = (f₁, f₂, . . . , f_m) : X₀ → R^m be differentiable at x₀ ∈ X₀.

Definition 4 (see [3]). Let φ(x) be a differentiable function defined on X₀ ⊂ E^n. The function φ(x) is said to be F-convex on X₀ with respect to F if

φ(x) − φ(y) ≧ F_{x,y}[∇φ(y)].    (3)

Definition 5 (see [4, 12]). Let f(x) be a real-valued function defined on the convex set X₀ ⊂ E^n. If there exists a real number ρ ∈ R such that

f(λx₁ + (1 − λ)x₂) ≦ λf(x₁) + (1 − λ)f(x₂) − ρλ(1 − λ)‖x₁ − x₂‖²    (4)

for any x₁, x₂ ∈ X₀ and any λ ∈ [0, 1], then the function f(x) is said to be ρ-convex on X₀. In particular, if ρ = 0, we obtain the definition of a convex function; if ρ > 0 (or ρ < 0) in the above definition, then f is strongly convex (or weakly convex).

Definition 6 (see [2]). The function fᵢ : X₀ → R is said to be (F, ρᵢ)-convex at x₀ ∈ X₀ if, for any x ∈ X₀, fᵢ(x) satisfies

fᵢ(x) − fᵢ(x₀) ≧ F(x, x₀; ∇fᵢ(x₀)) + ρᵢ d²(x, x₀).    (5)

Definition 7 (see [5]). The function fᵢ is said to be (F, α, ρᵢ, d)-convex at x₀ ∈ X₀ if

fᵢ(x) − fᵢ(x₀) ≧ F(x, x₀; α(x, x₀)∇fᵢ(x₀)) + ρᵢ d²(x, x₀), ∀x ∈ X₀.    (6)

The function f is said to be (F, α, ρ, d)-convex at x₀ if each component fᵢ of f is (F, α, ρᵢ, d)-convex at x₀. The function f is said to be (F, α, ρ, d)-convex on X₀ if it is (F, α, ρ, d)-convex at every point of X₀.

Definition 8. The function fᵢ is said to be (F, α, ρᵢ, d)-quasiconvex at x₀ if fᵢ(x) ≦ fᵢ(x₀) ⇒ F(x, x₀; α(x, x₀)∇fᵢ(x₀)) ≦ −ρᵢ d²(x, x₀). The function f is said to be (F, α, ρ, d)-quasiconvex at x₀ if each component fᵢ of f is (F, α, ρᵢ, d)-quasiconvex at x₀.

Definition 9. The function fᵢ is said to be (F, α, ρᵢ, d)-pseudoconvex at x₀ if, for all x ∈ X₀, fᵢ(x) < fᵢ(x₀) ⇒ F(x, x₀; α(x, x₀)∇fᵢ(x₀)) < −ρᵢ d²(x, x₀). The function f is said to be (F, α, ρ, d)-pseudoconvex at x₀ if each component fᵢ of f is (F, α, ρᵢ, d)-pseudoconvex at x₀.

Definition 10. The function f is said to be strictly (F, α, ρ, d)-pseudoconvex at x₀ ∈ X₀ if f(x) ≦ f(x₀) ⇒ F(x, x₀; α(x, x₀)∇f(x₀)) < −ρ d²(x, x₀), where F(x, x₀; α(x, x₀)∇f(x₀)) = (F(x, x₀; α(x, x₀)∇f₁(x₀)), . . . , F(x, x₀; α(x, x₀)∇f_m(x₀))). Further, f is said to be weakly strictly (F, α, ρ, d)-pseudoconvex at x₀ ∈ X₀ if f(x) ≦ f(x₀) ⇒ F(x, x₀; α(x, x₀)∇f(x₀)) < −ρ d²(x, x₀).

In order to prove our main results, we need the following lemma.


Lemma 11 (see [13]). Suppose that the differentiable real-valued functions hⱼ(x) (j = 1, 2, . . . , m) are (F, α, ρⱼ, d)-quasiconvex at x ∈ S; then Vᵀh(x) is (F, α, ∑_{j=1}^{m} vⱼρⱼ, d)-quasiconvex at x ∈ S, where ρⱼ ∈ R, V ≧ 0, and Vᵀ denotes the transpose of the m-dimensional column vector V; that is, Vᵀ = (v₁, v₂, . . . , v_m).
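To fix ideas, we note a simple illustration of these notions (the particular choices below are made only for concreteness and are not needed later). For any fixed map η : X₀ × X₀ → R^n, the functional F(x, x₀; a) = η(x, x₀)ᵀa is linear in a for each pair (x, x₀) and is therefore sublinear in the sense of Definition 3. Taking η(x, x₀) = x − x₀, α(x, x₀) ≡ 1, and d(x, x₀) = ‖x − x₀‖, the condition of Definition 7 reads fᵢ(x) − fᵢ(x₀) ≧ (x − x₀)ᵀ∇fᵢ(x₀) + ρᵢ‖x − x₀‖²; thus every differentiable convex function is (F, α, 0, d)-convex for this choice, every strongly convex function is (F, α, ρᵢ, d)-convex with some ρᵢ > 0, and, with ρᵢ = 0 and a general η, the condition becomes the usual invexity inequality.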

3. Optimality Conditions and Duality

3.1. Nonlinear Fractional Programming Problem Involving Inequality and Equality Constraints Based on Generalized (F, α, ρ, d)-Convexity. Consider the nonlinear fractional programming problem

(FP)    min   f(x)/g(x)
        s.t.  h(x) ≦ 0,  l(x) = 0,  x ∈ X₀,    (7)

where X₀ is an open set of R^n, f(x) and g(x) are real-valued functions defined on X₀, h(x) is an m-dimensional vector-valued function defined on X₀, and l(x) is a q-dimensional vector-valued function. Let

S = {x ∈ X₀ | h(x) ≦ 0, l(x) = 0}    (8)

denote the set of all feasible solutions of (FP), and assume that f(x), g(x), hⱼ(x) (j = 1, 2, . . . , m), and lᵢ(x) (i = 1, 2, . . . , q) are continuously differentiable over X₀ and that f(x) ≧ 0, g(x) > 0, for all x ∈ X₀. If x̄ ∈ X₀ is a solution of problem (FP) and a constraint qualification [14] holds, then the Kuhn-Tucker necessary conditions are as follows: there exist V₀ ∈ R^m and W₀ ∈ R^q such that

∇(f(x̄)/g(x̄)) + ∇h(x̄)V₀ + ∇l(x̄)W₀ = 0,
V₀ᵀh(x̄) = 0,  V₀ ≧ 0,
h(x̄) ≦ 0,  l(x̄) = 0.    (9)

Theorem 12. Suppose that x̄ is a feasible solution of (FP), that the Kuhn-Tucker conditions (9) hold at x̄, that f(x)/g(x) in problem (FP) is (F, α, ρ, d)-pseudoconvex on S, that hⱼ(x) (j = 1, 2, . . . , m) are (F, α, ρⱼ, d)-quasiconvex on S, and that lᵢ(x) (i = 1, 2, . . . , q) are (F, α, ρᵢ, d)-quasiconvex on S, where ρ, ρᵢ, ρⱼ ∈ R and ρ + V₀ᵀρ′ + W₀ᵀρ″ ≧ 0, V₀ᵀρ′ being the inner product of V₀ with ρ′ = (ρ₁, . . . , ρ_m)ᵀ and W₀ᵀρ″ the inner product of W₀ with ρ″ = (ρ₁, . . . , ρ_q)ᵀ. Then x̄ is an optimal solution of problem (FP).

Proof. Suppose that x̄ is not an optimal solution of (FP). Then there exists a feasible solution x ∈ S such that f(x)/g(x) < f(x̄)/g(x̄). By the (F, α, ρ, d)-pseudoconvexity assumption on f(x)/g(x), we have

F(x, x̄; α(x, x̄)∇(f(x̄)/g(x̄))) < −ρ d²(x, x̄).    (10)

For each j (j = 1, 2, . . . , m), by the (F, α, ρⱼ, d)-quasiconvexity assumption on hⱼ(x) and Lemma 11, V₀ᵀh(x) is (F, α, ∑_{j=1}^{m} vⱼρⱼ, d)-quasiconvex on S. Since h(x) ≦ 0, V₀ ≧ 0, and V₀ᵀh(x̄) = 0, we have V₀ᵀh(x) ≦ V₀ᵀh(x̄); therefore

F(x, x̄; α(x, x̄)∇h(x̄)V₀) ≦ −∑_{j=1}^{m} ρⱼvⱼ d²(x, x̄),    (11)

that is, F(x, x̄; α(x, x̄)∇h(x̄)V₀) ≦ −V₀ᵀρ′ d²(x, x̄). By the (F, α, ρᵢ, d)-quasiconvexity of lᵢ(x) (i = 1, 2, . . . , q) on S, W₀ᵀl(x) is (F, α, ∑_{i=1}^{q} wᵢρᵢ, d)-quasiconvex on S, and l(x) = l(x̄) = 0 gives W₀ᵀl(x) ≦ W₀ᵀl(x̄). Then we obtain F(x, x̄; α(x, x̄)∇l(x̄)W₀) ≦ −∑_{i=1}^{q} ρᵢwᵢ d²(x, x̄), that is,

F(x, x̄; α(x, x̄)∇l(x̄)W₀) ≦ −W₀ᵀρ″ d²(x, x̄).    (12)

By (10), (11), and (12) and the sublinearity of F, we have

F(x, x̄; α(x, x̄)(∇(f(x̄)/g(x̄)) + ∇h(x̄)V₀ + ∇l(x̄)W₀)) < −(ρ + V₀ᵀρ′ + W₀ᵀρ″) d²(x, x̄).    (13)

Considering that ρ + V₀ᵀρ′ + W₀ᵀρ″ ≧ 0, we get

F(x, x̄; α(x, x̄)(∇(f(x̄)/g(x̄)) + ∇h(x̄)V₀ + ∇l(x̄)W₀)) < 0.    (14)

By the K-T conditions, ∇(f(x̄)/g(x̄)) + ∇h(x̄)V₀ + ∇l(x̄)W₀ = 0. Hence, based on the sublinearity of F, we obtain

F(x, x̄; α(x, x̄)(∇(f(x̄)/g(x̄)) + ∇h(x̄)V₀ + ∇l(x̄)W₀)) = 0,    (15)

which contradicts (14). The proof is complete.
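As a simple illustration of Theorem 12 (the data below are chosen by us purely for concreteness), take n = 1, X₀ = R, f(x) = x² ≧ 0, g(x) = x² + 1 > 0, h(x) = 1 − x, and no equality constraint, so S = {x | x ≧ 1}. The point x̄ = 1 with V₀ = 1/2 satisfies the Kuhn-Tucker conditions (9): (f/g)′(x) = 2x/(x² + 1)², so (f/g)′(1) + V₀h′(1) = 1/2 − 1/2 = 0, V₀h(1) = 0, V₀ ≧ 0, and h(1) = 0. With F(x, x̄; a) = (x − x̄)a, α ≡ 1, d(x, x̄) = |x − x̄|, ρ = 0 for the objective, and ρ₁ = 0 for the constraint, the ratio f/g = x²/(x² + 1) is (F, α, ρ, d)-pseudoconvex at every point of S (if x²/(x² + 1) < x̄²/(x̄² + 1) with x̄ ∈ S, then |x| < x̄, so (x − x̄)(f/g)′(x̄) < 0), and the affine constraint h is (F, α, ρ₁, d)-quasiconvex (h(x) ≦ h(x̄) forces x ≧ x̄, so (x − x̄)h′(x̄) ≦ 0). Since ρ + V₀ρ₁ = 0 ≧ 0, Theorem 12 guarantees that x̄ = 1 is optimal, which agrees with the direct observation that x²/(x² + 1) is increasing on S with minimum value 1/2 at x = 1.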

Consider the dual problem of (FP):

(FD)    max   (f(y) + uᵀh(y) + vᵀl(y)) / g(y)
        s.t.  λᵀ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y) = 0,
              uᵀh(y) ≧ 0,
              vᵀl(y) = 0,
              u ≧ 0,  v ≧ 0.
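For the same illustrative one-dimensional instance introduced after the proof of Theorem 12 (and reading λ in (FD) as a scalar weight, which is our simplifying assumption), the point y = 1 with λ = 1, u = 1/2, and no equality multiplier is feasible for (FD): λ(f/g)′(1) + u h′(1) = 1/2 − 1/2 = 0 and u h(1) = 0 ≧ 0, u ≧ 0. Its dual objective value is (f(1) + u h(1))/g(1) = 1/2, equal to the optimal value of (FP); this is consistent with the weak duality relation of Theorem 13 below, since x²/(x² + 1) ≧ 1/2 for every x feasible for (FP).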

Theorem 13. Suppose that λᵀ(f(y)/g(y)) is (F, α, ρ₁, d)-pseudoconvex at y, that uᵀh(y) + vᵀl(y) is (F, α, ρ₂, d)-quasiconvex at y for problems (FP) and (FD), and that ρ₁ + ρ₂ ≧ 0. Then λᵀ(f(x)/g(x)) ≧ λᵀ(f(y)/g(y)) for any feasible solution x of (FP) and any feasible solution (y, λ, u, v) of (FD).


Proof. Assume that the conclusion is not true; that is, λᵀ(f(x)/g(x)) < λᵀ(f(y)/g(y)). By the (F, α, ρ₁, d)-pseudoconvexity of λᵀ(f(y)/g(y)) at y, we get

F(x, y; α(x, y)λᵀ∇(f(y)/g(y))) < −ρ₁ d²(x, y).    (16)

Using uᵀh(x) ≦ 0, vᵀl(x) = 0, uᵀh(y) ≧ 0, and vᵀl(y) = 0, we have

uᵀh(x) + vᵀl(x) ≦ uᵀh(y) + vᵀl(y).    (17)

By the (F, α, ρ₂, d)-quasiconvexity of uᵀh(y) + vᵀl(y), we get

F(x, y; α(x, y)(uᵀ∇h(y) + vᵀ∇l(y))) ≦ −ρ₂ d²(x, y).    (18)

By (16) and (18) and the sublinearity of F, we have

F(x, y; α(x, y)(λᵀ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y))) < −(ρ₁ + ρ₂) d²(x, y).    (19)

Since (y, λ, u, v) is a feasible solution of (FD),

λᵀ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y) = 0.    (20)

Hence,

0 = F(x, y; α(x, y)(λᵀ∇(f(y)/g(y)) + uᵀ∇h(y) + vᵀ∇l(y))) < −(ρ₁ + ρ₂) d²(x, y),    (21)

which contradicts the known condition ρ₁ + ρ₂ ≧ 0. The proof is complete.

3.2. Nonlinear Multiobjective Fractional Programming Problem Involving Inequality and Equality Constraints Based on (F, α, ρ, d)-Convexity and Generalized (F, α, ρ, d)-Convexity. Consider the nonlinear multiobjective fractional programming problem

(VFP)   min   M(x) = f(x)/g(x) = [f₁(x)/g(x), f₂(x)/g(x), . . . , fₚ(x)/g(x)]
        s.t.  h(x) ≦ 0,  l(x) = 0,  x ∈ X₀,    (22)

where X₀ is an open set of R^n; fᵢ(x) (i = 1, 2, . . . , p), g(x), and hⱼ(x) : X₀ → R (j = 1, 2, . . . , m) are real-valued functions defined on X₀; lₖ(x) : X₀ → R (k = 1, 2, . . . , q) are real-valued functions defined on X₀; and f(x) = (f₁(x), f₂(x), . . . , fₚ(x))ᵀ, h(x) = (h₁(x), h₂(x), . . . , h_m(x))ᵀ, l(x) = (l₁(x), l₂(x), . . . , l_q(x))ᵀ. Let

S = {x | x ∈ X₀, h(x) ≦ 0, l(x) = 0}    (23)

denote the set of all feasible solutions of (VFP), and assume that fᵢ(x) (i = 1, 2, . . . , p), g(x), hⱼ(x) (j = 1, 2, . . . , m), and lₖ(x) (k = 1, 2, . . . , q) are continuously differentiable over X₀ and that g(x) > 0 for all x ∈ X₀.

Theorem 14. Suppose that f(x)/g(x) is weakly strictly (F, α, ρ₁, d)-pseudoconvex at x̄ ∈ X₀, that hⱼ(x) (j = 1, 2, . . . , m) are (F, α, ρ₂, d)-quasiconvex with respect to x̄ ∈ X₀, that lₖ(x) (k = 1, 2, . . . , q) are (F, α, ρ₃, d)-convex with respect to x̄ ∈ X₀, and that there exist λ ∈ ∧⁺⁺ (or ∧⁺), u ∈ R₊^m, v ∈ R₊^q satisfying

∑_{i=1}^{p} λᵢ∇(fᵢ(x̄)/g(x̄)) + ∑_{j=1}^{m} uⱼ∇hⱼ(x̄) + ∑_{k=1}^{q} vₖ∇lₖ(x̄) = 0,
uⱼhⱼ(x̄) = 0,  hⱼ(x̄) ≦ 0,  j = 1, 2, . . . , m,
lₖ(x̄) = 0,  k = 1, 2, . . . , q,    (24)

and ρ = ∑_{i=1}^{p} λᵢρ₁ + ∑_{j=1}^{m} uⱼρ₂ + ∑_{k=1}^{q} vₖρ₃ ≧ 0. Then x̄ is an efficient solution of (VFP), where

∧⁺ = {λ = (λ₁, λ₂, . . . , λₚ) | λᵢ ≧ 0, i = 1, 2, . . . , p, ∑_{i=1}^{p} λᵢ = 1},
∧⁺⁺ = {λ = (λ₁, λ₂, . . . , λₚ) | λᵢ > 0, i = 1, 2, . . . , p, ∑_{i=1}^{p} λᵢ = 1}.    (25)

Proof. Suppose that x̄ is not an efficient solution of (VFP); then there exists a feasible solution x ∈ X₀ such that M(x) ≦ M(x̄), that is, f(x)/g(x) ≦ f(x̄)/g(x̄). By the weakly strict (F, α, ρ₁, d)-pseudoconvexity of f(x)/g(x) at x̄ ∈ X₀, we get

F(x, x̄; α(x, x̄)∇(f(x̄)/g(x̄))) < −ρ₁ d²(x, x̄).    (26)

Using λ ∈ ∧⁺⁺ (or ∧⁺), we then have

∑_{i=1}^{p} λᵢ F(x, x̄; α(x, x̄)∇(fᵢ(x̄)/g(x̄))) < −∑_{i=1}^{p} λᵢρ₁ d²(x, x̄).    (27)


Based on the sublinearity of F, we obtain

F(x, x̄; α(x, x̄) ∑_{i=1}^{p} λᵢ∇(fᵢ(x̄)/g(x̄))) < −∑_{i=1}^{p} λᵢρ₁ d²(x, x̄).    (28)

Since uᵀh(x̄) = 0, u ≧ 0, and hⱼ(x) ≦ 0, we have uᵀh(x) − uᵀh(x̄) ≦ 0, that is, uᵀh(x) ≦ uᵀh(x̄). Since hⱼ(x) (j = 1, 2, . . . , m) are (F, α, ρ₂, d)-quasiconvex at x̄ ∈ X₀, by Lemma 11, uᵀh is (F, α, ρ₂∑_{j=1}^{m} uⱼ, d)-quasiconvex at x̄ ∈ X₀. Hence we obtain the following inequality:

F(x, x̄; α(x, x̄) ∑_{j=1}^{m} uⱼ∇hⱼ(x̄)) ≦ −∑_{j=1}^{m} uⱼρ₂ d²(x, x̄).    (29)

By the (F, α, ρ₃, d)-convexity of lₖ(x) at x̄ ∈ X₀, we have

lₖ(x) − lₖ(x̄) ≧ F(x, x̄; α(x, x̄)∇lₖ(x̄)) + ρ₃ d²(x, x̄).    (30)

Since v ≧ 0, we have

0 = vᵀl(x) − vᵀl(x̄) ≧ ∑_{k=1}^{q} vₖ F(x, x̄; α(x, x̄)∇lₖ(x̄)) + ∑_{k=1}^{q} vₖρ₃ d²(x, x̄).    (31)

By the sublinearity of F, we obtain

F(x, x̄; α(x, x̄) ∑_{k=1}^{q} vₖ∇lₖ(x̄)) ≦ −∑_{k=1}^{q} vₖρ₃ d²(x, x̄).    (32)

By the known conditions (24), we have

F(x, x̄; α(x, x̄)(∑_{i=1}^{p} λᵢ∇(fᵢ(x̄)/g(x̄)) + ∑_{j=1}^{m} uⱼ∇hⱼ(x̄) + ∑_{k=1}^{q} vₖ∇lₖ(x̄))) = 0.    (33)

By (28), (29), and (32) and the sublinearity of F, we obtain

F(x, x̄; α(x, x̄)(∑_{i=1}^{p} λᵢ∇(fᵢ(x̄)/g(x̄)) + ∑_{j=1}^{m} uⱼ∇hⱼ(x̄) + ∑_{k=1}^{q} vₖ∇lₖ(x̄)))
< −(∑_{i=1}^{p} λᵢρ₁ + ∑_{j=1}^{m} uⱼρ₂ + ∑_{k=1}^{q} vₖρ₃) d²(x, x̄) = −ρ d²(x, x̄) ≦ 0,    (34)

which contradicts (33). Therefore, x̄ is an efficient solution of (VFP). The proof is complete.

Consider the dual problem of (VFP):

max   (f₁(y)/g(y) + uᵀh(y) + vᵀl(y), . . . , fₚ(y)/g(y) + uᵀh(y) + vᵀl(y))
s.t.  ∑_{i=1}^{p} λᵢ∇(fᵢ(y)/g(y)) + ∑_{j=1}^{m} uⱼ∇hⱼ(y) + ∑_{k=1}^{q} vₖ∇lₖ(y) = 0,
      λ ∈ ∧⁺⁺,  u ∈ R₊^m,  v ∈ R₊^q.    (35)

Theorem 15. Suppose that fᵢ/g (i = 1, 2, . . . , p) is (F, α, ρ₁, d)-convex at y, that hⱼ (j = 1, 2, . . . , m) is (F, α, ρ₂, d)-convex at y, that lₖ (k = 1, 2, . . . , q) is (F, α, ρ₃, d)-convex at y, and that ∑_{i=1}^{p} λᵢρ₁ + ∑_{j=1}^{m} uⱼρ₂ + ∑_{k=1}^{q} vₖρ₃ ≧ 0. Then, for any feasible solution x of (VFP) and any feasible solution (y, λ, u, v) of the dual problem (35),

∑_{i=1}^{p} λᵢ(fᵢ(x)/g(x)) ≧ ∑_{i=1}^{p} λᵢ(fᵢ(y)/g(y)) + uᵀh(y) + vᵀl(y).    (36)

Proof. By the (F, α, ρ₁, d)-convexity of fᵢ/g at y, we get

fᵢ(x)/g(x) − fᵢ(y)/g(y) ≧ F(x, y; α(x, y)∇(fᵢ(y)/g(y))) + ρ₁ d²(x, y),  i = 1, 2, . . . , p.    (37)

By the (F, α, ρ₂, d)-convexity of hⱼ at y, we get

hⱼ(x) − hⱼ(y) ≧ F(x, y; α(x, y)∇hⱼ(y)) + ρ₂ d²(x, y),  j = 1, 2, . . . , m.    (38)

By the (F, α, ρ₃, d)-convexity of lₖ at y, we get

lₖ(x) − lₖ(y) ≧ F(x, y; α(x, y)∇lₖ(y)) + ρ₃ d²(x, y),  k = 1, 2, . . . , q.    (39)


Since πœ† ∈ ∧++ , 𝑒 ∈ 𝑅+π‘š , V ∈ 𝑅+ , and by the previous three inequalities, we have that 𝑝

(βˆ‘πœ† 𝑖 ( 𝑖=1

𝑝 𝑖=1

𝑖=1

≧ βˆ‘πœ† 𝑖 𝐹 (π‘₯, 𝑦; 𝛼 (π‘₯, 𝑦) βˆ‡ ( 𝑖=1

𝑝 𝑓 (𝑦) 𝑓𝑖 (π‘₯) ) ≧ βˆ‘πœ† 𝑖 ( 𝑖 ) + 𝑒𝑇 β„Ž (𝑦) + V𝑇 𝑙 (𝑦) . 𝑔 (π‘₯) 𝑔 (𝑦) 𝑖=1 (44)

The proof is complete.

𝑓𝑖 (𝑦) ) + 𝑒𝑇 β„Ž (𝑦) + V𝑇 𝑙 (𝑦)) 𝑔 (𝑦)

𝑝

𝑝

βˆ‘πœ† 𝑖 (

𝑓𝑖 (π‘₯) ) + 𝑒𝑇 β„Ž (π‘₯) + V𝑇 𝑙 (π‘₯)) 𝑔 (π‘₯)

βˆ’ (βˆ‘πœ† 𝑖 (

Since 𝑒𝑇 β„Ž(π‘₯) ≦ 0, V𝑇 𝑙(π‘₯) = 0, we obtain

𝑓𝑖 (𝑦) )) 𝑔 (𝑦) (40)

π‘š

+ βˆ‘ 𝑒𝑗 𝐹 (π‘₯, 𝑦; 𝛼 (π‘₯, 𝑦) βˆ‡β„Žπ‘— (𝑦)) 𝑗=1 π‘ž

+ βˆ‘ Vπ‘˜ 𝐹 (π‘₯, 𝑦; 𝛼 (π‘₯, 𝑦) βˆ‡π‘™π‘˜ (𝑦))

3.3. Nonlinear Multiobjective Fractional Programming Problem Involved Inequality and Equality Constraints under (𝐹,𝜌)Convex. Consider the multiobjective fractional programming problem (MFP) (

s.t.

β„Žπ‘— (π‘₯) ≦ 0, 𝑗 = 1, 2, . . . , π‘š, π‘₯ ∈ 𝑋0 ,

π‘˜=1

π‘š

π‘ž

𝑖=1

𝑗=1

π‘˜=1

where 𝑋0 is an open set of 𝑅𝑛 , 𝑓𝑖 (π‘₯) (𝑖 = 1, 2, . . . , 𝑝) : 𝑋0 β†’ 𝑅, 𝑓𝑖 (π‘₯) ≧ 0, 𝑔𝑖 (π‘₯) (𝑖 = 1, 2, . . . , 𝑝) : 𝑋0 β†’ 𝑅, 𝑔𝑖 (π‘₯) > 0, and β„Žπ‘— (π‘₯) (𝑗 = 1, 2, . . . , π‘š) : 𝑋0 β†’ 𝑅, π‘™π‘˜ (π‘₯) (π‘˜ = 1, 2, . . . , π‘ž) : 𝑋0 β†’ 𝑅 are continuously differentiable over 𝑋0 . Denote by 𝐺 the set of all feasible solutions for (MFP); that is,

By the sublinearity of 𝐹, we obtain 𝑝 𝑖=1

𝑓𝑖 (π‘₯) ) + 𝑒𝑇 β„Ž (π‘₯) + V𝑇 𝑙 (π‘₯)) 𝑔 (π‘₯)

𝐺 = {π‘₯ ∈ 𝑋0 | β„Žπ‘— (π‘₯) ≦ 0, 𝑗 = 1, 2, . . . , π‘š;

𝑝

𝑓 (𝑦) βˆ’ (βˆ‘πœ† 𝑖 ( 𝑖 ) + 𝑒𝑇 β„Ž (𝑦) + V𝑇 𝑙 (𝑦)) 𝑔 (𝑦) 𝑖=1 𝑝

≧ 𝐹 (π‘₯, 𝑦; 𝛼 (π‘₯, 𝑦) (βˆ‘πœ† 𝑖 βˆ‡ ( 𝑖=1

π‘™π‘˜ (π‘₯) = 0, π‘˜ = 1, 2, . . . , π‘ž}

𝑓𝑖 (𝑦) ) + βˆ‘ 𝑒𝑗 βˆ‡β„Žπ‘— (𝑦) 𝑔 (𝑦) 𝑗=1

π‘ž

π‘ž

𝑖=1

𝑗=1

π‘˜=1

𝑝

β„Žπ‘— (π‘₯) ≦ 0, 𝑗 = 1, 2, . . . , π‘š, π‘™π‘˜ (π‘₯) = 0, π‘˜ = 1, 2, . . . , π‘ž, (41)

By the feasibility of (𝑦, πœ†, 𝑒, V), we have 𝑝

𝑝

(42)

π‘ž

Since βˆ‘π‘–=1 πœ† 𝑖 𝜌1 + βˆ‘π‘š 𝑗=1 𝑒𝑗 𝜌2 + βˆ‘π‘˜=1 Vπ‘˜ 𝜌3 ≧ 0 and by (41), we get 𝑝

( βˆ‘πœ† 𝑖 ( 𝑖=1

𝑝

𝑓𝑖 (π‘₯) ) + π‘’π‘‡β„Ž (π‘₯) + V𝑇 𝑙 (π‘₯)) 𝑔 (π‘₯)

𝑓 (𝑦) βˆ’ (βˆ‘πœ† 𝑖 ( 𝑖 ) + 𝑒𝑇 β„Ž (𝑦) + V𝑇 𝑙 (𝑦)) ≧ 0. 𝑔 (𝑦) 𝑖=1

π‘ž

πœ†π‘— β„Žπ‘— (π‘₯) = 0, 𝑗 = 1, 2, . . . , π‘š,

+ (βˆ‘πœ† 𝑖 𝜌1 + βˆ‘ 𝑒𝑗 𝜌2 + βˆ‘ Vπ‘˜ 𝜌3 ) 𝑑2 (π‘₯, 𝑦) .

π‘ž π‘š 𝑓 (𝑦) ) + βˆ‘ 𝑒𝑗 βˆ‡β„Žπ‘— (𝑦) + βˆ‘ Vπ‘˜ βˆ‡π‘™π‘˜ (𝑦) = 0. βˆ‘πœ† 𝑖 βˆ‡ ( 𝑖 𝑔 (𝑦) 𝑖=1 𝑗=1 π‘˜=1

Theorem 16. Assume that there exists (π‘₯, 𝛼, πœ†, V) and 𝛼 = 𝑝 (𝛼1 , 𝛼2 , . . . , 𝛼𝑝 ) ∈ 𝑅+ , πœ† = (πœ†1 , πœ†2 , . . . , πœ†π‘š ), V = (V1 , V2 , . . . , Vπ‘ž ) such that (i) βˆ‘π‘–=1 𝛼𝑖 𝑇𝑖 (π‘₯) + βˆ‘π‘š 𝑗=1 πœ†π‘— βˆ‡β„Žπ‘— (π‘₯) + βˆ‘π‘˜=1 Vπ‘˜ βˆ‡π‘™π‘˜ (π‘₯) = 0,

π‘˜=1

π‘š

(46)

and let πœ‘π‘– (π‘₯) = 𝑓𝑖 (π‘₯)/𝑔𝑖 (π‘₯), πœ‘(π‘₯) = (πœ‘1 (π‘₯), πœ‘2 (π‘₯), . . . , πœ‘π‘ (π‘₯)).

π‘š

+ βˆ‘ Vπ‘˜ βˆ‡π‘™π‘˜ (𝑦) )) 𝑝

(45)

π‘™π‘˜ (π‘₯) = 0, π‘˜ = 1, 2, . . . , π‘ž, π‘₯ ∈ 𝑋0 ,

𝑝

+ (βˆ‘πœ† 𝑖 𝜌1 + βˆ‘ 𝑒𝑗 𝜌2 + βˆ‘ Vπ‘˜ 𝜌3 ) 𝑑2 (π‘₯, 𝑦) .

(βˆ‘πœ† 𝑖 (

𝑓𝑝 (π‘₯) 𝑓1 (π‘₯) 𝑓2 (π‘₯) , ,..., ) 𝑔1 (π‘₯) 𝑔2 (π‘₯) 𝑔𝑝 (π‘₯)

min

(43)

where 𝑇𝑖 (π‘₯) = (1/𝑔𝑖 (π‘₯))[βˆ‡π‘“π‘– (π‘₯) βˆ’ πœ‘π‘– (π‘₯)βˆ‡π‘”π‘– (π‘₯)]; (ii) 𝑓𝑖 and βˆ’π‘”π‘– (𝑖 = 1, 2, . . . , 𝑝) are (𝐹, 𝜌)-convex at π‘₯, and 𝜌 > 0; (iii) β„Žπ‘— are (𝐹, 𝜌)-convex at π‘₯ for all 𝑗, 𝑗 = 1, 2, . . . , π‘š, and 𝜌 > 0; (iv) π‘™π‘˜ are (𝐹, 𝜌)-convex at π‘₯ for all π‘˜, π‘˜ = 1, 2, . . . , π‘ž, and 𝜌 > 0. Then π‘₯ is a Pareto optimality solution of (MFP). Proof. Suppose that π‘₯ is not a Pareto optimality solution of (MFP); then there exists a feasible solution π‘₯ ∈ 𝐺 such that 𝑓𝑖 (π‘₯)/𝑔𝑖 (π‘₯) ≦ 𝑓𝑖 (π‘₯)/𝑔𝑖 (π‘₯), 𝑖 = 1, 2, . . . , 𝑝, that is, 𝑓𝑖 (π‘₯) βˆ’ (𝑓𝑖 (π‘₯)𝑔𝑖 (π‘₯))/𝑔𝑖 (π‘₯) ≦ 0, that is, 𝑓𝑖 (π‘₯) βˆ’ (𝑓𝑖 (π‘₯)/𝑔𝑖 (π‘₯))𝑔𝑖 (π‘₯) ≦ 𝑓𝑖 (π‘₯) βˆ’ (𝑓𝑖 (π‘₯)/𝑔𝑖 (π‘₯))𝑔𝑖 (π‘₯); it follows that 𝑓𝑖 (π‘₯) βˆ’ πœ‘π‘– (π‘₯) 𝑔𝑖 (π‘₯) ≦ 𝑓𝑖 (π‘₯) βˆ’ πœ‘π‘– (π‘₯) 𝑔𝑖 (π‘₯) .

(47)


By the (F, ρ)-convexity of fᵢ and −gᵢ, i = 1, 2, . . . , p, we have

fᵢ(x) − fᵢ(x̄) ≧ F(x, x̄; ∇fᵢ(x̄)) + ρ d²(x, x̄),  ∀x ∈ X₀,
−gᵢ(x) + gᵢ(x̄) ≧ F(x, x̄; −∇gᵢ(x̄)) + ρ d²(x, x̄),  ∀x ∈ X₀.    (48)

Using the conditions fᵢ(x̄) ≧ 0 and gᵢ(x̄) > 0, we see that φᵢ(x̄) = fᵢ(x̄)/gᵢ(x̄) ≧ 0. By the properties of the sublinear functional F, we obtain

−φᵢ(x̄)gᵢ(x) + φᵢ(x̄)gᵢ(x̄) ≧ φᵢ(x̄)F(x, x̄; −∇gᵢ(x̄)) + φᵢ(x̄)ρ d²(x, x̄)
= F(x, x̄; −φᵢ(x̄)∇gᵢ(x̄)) + φᵢ(x̄)ρ d²(x, x̄).    (49)

By (48) and (49) and based on the sublinearity of F, we have

fᵢ(x) − φᵢ(x̄)gᵢ(x) − [fᵢ(x̄) − φᵢ(x̄)gᵢ(x̄)] ≧ F(x, x̄; ∇fᵢ(x̄) − φᵢ(x̄)∇gᵢ(x̄)) + [1 + φᵢ(x̄)]ρ d²(x, x̄).    (50)

By (47), we have

F(x, x̄; ∇fᵢ(x̄) − φᵢ(x̄)∇gᵢ(x̄)) + [1 + φᵢ(x̄)]ρ d²(x, x̄) ≦ 0.    (51)

If we sum up after multiplying the above inequality by αᵢ(1/gᵢ(x̄)) ≧ 0 (i = 1, 2, . . . , p) and use the sublinearity of F, we have

F(x, x̄; ∑_{i=1}^{p} αᵢ(1/gᵢ(x̄))[∇fᵢ(x̄) − φᵢ(x̄)∇gᵢ(x̄)]) + ∑_{i=1}^{p} αᵢ(1/gᵢ(x̄))[1 + φᵢ(x̄)]ρ d²(x, x̄) ≦ 0.    (52)

Since Tᵢ(x̄) = (1/gᵢ(x̄))[∇fᵢ(x̄) − φᵢ(x̄)∇gᵢ(x̄)], we get

F(x, x̄; ∑_{i=1}^{p} αᵢTᵢ(x̄)) + ∑_{i=1}^{p} αᵢ(1/gᵢ(x̄))[1 + φᵢ(x̄)]ρ d²(x, x̄) ≦ 0.    (53)

On the other hand, for j = 1, 2, . . . , m, by the (F, ρ)-convexity of hⱼ at x̄, we have

hⱼ(x) − hⱼ(x̄) ≧ F(x, x̄; ∇hⱼ(x̄)) + ρ d²(x, x̄),  ∀x ∈ X₀.    (54)

On multiplying the inequality (54) by λⱼ ≧ 0 and using the sublinearity of F, we have

λⱼhⱼ(x) − λⱼhⱼ(x̄) ≧ F(x, x̄; λⱼ∇hⱼ(x̄)) + λⱼρ d²(x, x̄),    (55)

which together with λⱼhⱼ(x) ≦ 0 and λⱼhⱼ(x̄) = 0 yields

F(x, x̄; λⱼ∇hⱼ(x̄)) + λⱼρ d²(x, x̄) ≦ 0.    (56)

By accumulating the inequality (56) over j, we have

∑_{j=1}^{m} F(x, x̄; λⱼ∇hⱼ(x̄)) + ∑_{j=1}^{m} λⱼρ d²(x, x̄) ≦ 0,    (57)

that is,

F(x, x̄; ∑_{j=1}^{m} λⱼ∇hⱼ(x̄)) + ∑_{j=1}^{m} λⱼρ d²(x, x̄) ≦ 0.    (58)

For k = 1, 2, . . . , q, by the (F, ρ)-convexity of lₖ at x̄, we have

lₖ(x) − lₖ(x̄) ≧ F(x, x̄; ∇lₖ(x̄)) + ρ d²(x, x̄),  ∀x ∈ X₀.    (59)

On multiplying the inequality (59) by vₖ ≧ 0, we get

vₖlₖ(x) − vₖlₖ(x̄) ≧ F(x, x̄; vₖ∇lₖ(x̄)) + vₖρ d²(x, x̄),    (60)

which together with vₖlₖ(x) = 0 and vₖlₖ(x̄) = 0 yields

F(x, x̄; vₖ∇lₖ(x̄)) + vₖρ d²(x, x̄) ≦ 0.    (61)

By accumulating the inequality (61) over k, we have

∑_{k=1}^{q} F(x, x̄; vₖ∇lₖ(x̄)) + ∑_{k=1}^{q} vₖρ d²(x, x̄) ≦ 0.    (62)

The inequality (62) along with the sublinearity of F implies

F(x, x̄; ∑_{k=1}^{q} vₖ∇lₖ(x̄)) + ∑_{k=1}^{q} vₖρ d²(x, x̄) ≦ 0.    (63)

The sublinearity of F together with (53), (58), and (63) yields

F(x, x̄; ∑_{i=1}^{p} αᵢTᵢ(x̄) + ∑_{j=1}^{m} λⱼ∇hⱼ(x̄) + ∑_{k=1}^{q} vₖ∇lₖ(x̄))
+ ∑_{i=1}^{p} αᵢ(1/gᵢ(x̄))[1 + φᵢ(x̄)]ρ d²(x, x̄) + ∑_{j=1}^{m} λⱼρ d²(x, x̄) + ∑_{k=1}^{q} vₖρ d²(x, x̄) ≦ 0.    (64)

According to the assumption (i) and the sublinearity of F, we obtain

F(x, x̄; ∑_{i=1}^{p} αᵢTᵢ(x̄) + ∑_{j=1}^{m} λⱼ∇hⱼ(x̄) + ∑_{k=1}^{q} vₖ∇lₖ(x̄))
+ ∑_{i=1}^{p} αᵢ(1/gᵢ(x̄))[1 + φᵢ(x̄)]ρ d²(x, x̄) + ∑_{j=1}^{m} λⱼρ d²(x, x̄) + ∑_{k=1}^{q} vₖρ d²(x, x̄) > 0,    (65)

which contradicts (64).


Therefore, x̄ is a Pareto optimal solution of (MFP). The proof is complete.
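As a small illustration of Theorem 16 (the data are ours and serve only to fix ideas), let X₀ = (−2, 2) ⊂ R, p = 2, f₁(x) = x², f₂(x) = (x − 1)², g₁(x) = g₂(x) = 5 − x² > 0 on X₀, h₁(x) = x² − 1 ≦ 0, and no equality constraints. With F(x, x̄; a) = (x − x̄)a and d(x, x̄) = |x − x̄|, each of f₁, f₂, −g₁ = −g₂ = x² − 5, and h₁ is (F, ρ)-convex at x̄ = 0 with ρ = 1 > 0, because for every function ψ in this list ψ(x) − ψ(0) = xψ′(0) + x², so the inequality of Definition 6 holds with equality. At x̄ = 0 we have φ₁(0) = 0, φ₂(0) = 1/5, T₁(0) = 0, and T₂(0) = (1/5)[−2 − (1/5)·0] = −2/5, so condition (i) holds with α = (1, 0) ∈ R₊² and λ₁ = 0 (indeed λ₁h₁(0) = 0 and h₁(0) = −1 ≦ 0). Theorem 16 then asserts that x̄ = 0 is Pareto optimal for this instance of (MFP), which can be checked directly: any feasible x with f₁(x)/g₁(x) ≦ f₁(0)/g₁(0) = 0 must satisfy x = 0, so no feasible point dominates x̄.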

Consider the dual problem of (MFP):

max   (f₁(y)/g₁(y) + uᵀh(y) + vᵀl(y), . . . , fₚ(y)/gₚ(y) + uᵀh(y) + vᵀl(y))
s.t.  ∑_{i=1}^{p} λᵢ∇(fᵢ(y)/gᵢ(y)) + ∑_{j=1}^{m} uⱼ∇hⱼ(y) + ∑_{k=1}^{q} vₖ∇lₖ(y) = 0,
      uᵀh(y) ≧ 0,  vᵀl(y) = 0,
      u ≧ 0,  v ≧ 0.    (66)

Theorem 17. Suppose that fᵢ(y)/gᵢ(y) (i = 1, 2, . . . , p) is (F, ρᵢ)-convex at y, that hⱼ(y) (j = 1, 2, . . . , m) is (F, ρⱼ)-convex at y, that lₖ(y) (k = 1, 2, . . . , q) is (F, ρₖ)-convex at y, and that λ ∈ ∧⁺⁺, u ∈ R₊^m, v ∈ R₊^q with ∑_{i=1}^{p} λᵢρᵢ + ∑_{j=1}^{m} uⱼρⱼ + ∑_{k=1}^{q} vₖρₖ ≧ 0. Then λᵀ(fᵢ(x)/gᵢ(x)) ≧ λᵀ(fᵢ(y)/gᵢ(y)) for any feasible solution x of (MFP) and any feasible solution (y, λ, u, v) of the dual problem (66).

Proof. By the (F, ρ)-convexity of fᵢ(y)/gᵢ(y), hⱼ(y), and lₖ(y), the sublinearity of F, and since λ ∈ ∧⁺⁺, u ∈ R₊^m, v ∈ R₊^q, we have

∑_{i=1}^{p} λᵢ(fᵢ(x)/gᵢ(x)) − ∑_{i=1}^{p} λᵢ(fᵢ(y)/gᵢ(y)) ≧ F(x, y; ∑_{i=1}^{p} λᵢ∇(fᵢ(y)/gᵢ(y))) + ∑_{i=1}^{p} λᵢρᵢ d²(x, y),
∑_{j=1}^{m} uⱼhⱼ(x) − ∑_{j=1}^{m} uⱼhⱼ(y) ≧ F(x, y; ∑_{j=1}^{m} uⱼ∇hⱼ(y)) + ∑_{j=1}^{m} uⱼρⱼ d²(x, y),
∑_{k=1}^{q} vₖlₖ(x) − ∑_{k=1}^{q} vₖlₖ(y) ≧ F(x, y; ∑_{k=1}^{q} vₖ∇lₖ(y)) + ∑_{k=1}^{q} vₖρₖ d²(x, y).    (67)

By (67) and based on the sublinearity of F, we have

∑_{i=1}^{p} λᵢ(fᵢ(x)/gᵢ(x)) − ∑_{i=1}^{p} λᵢ(fᵢ(y)/gᵢ(y)) + ∑_{j=1}^{m} uⱼhⱼ(x) − ∑_{j=1}^{m} uⱼhⱼ(y) + ∑_{k=1}^{q} vₖlₖ(x) − ∑_{k=1}^{q} vₖlₖ(y)
≧ F(x, y; (∑_{i=1}^{p} λᵢ∇(fᵢ(y)/gᵢ(y)) + ∑_{j=1}^{m} uⱼ∇hⱼ(y) + ∑_{k=1}^{q} vₖ∇lₖ(y)))
+ (∑_{i=1}^{p} λᵢρᵢ + ∑_{j=1}^{m} uⱼρⱼ + ∑_{k=1}^{q} vₖρₖ) d²(x, y).    (68)

Since ∑_{i=1}^{p} λᵢ∇(fᵢ(y)/gᵢ(y)) + ∑_{j=1}^{m} uⱼ∇hⱼ(y) + ∑_{k=1}^{q} vₖ∇lₖ(y) = 0, we have

∑_{i=1}^{p} λᵢ(fᵢ(x)/gᵢ(x)) − ∑_{i=1}^{p} λᵢ(fᵢ(y)/gᵢ(y))
≧ ∑_{j=1}^{m} uⱼhⱼ(y) − ∑_{j=1}^{m} uⱼhⱼ(x) + ∑_{k=1}^{q} vₖlₖ(y) − ∑_{k=1}^{q} vₖlₖ(x)
+ (∑_{i=1}^{p} λᵢρᵢ + ∑_{j=1}^{m} uⱼρⱼ + ∑_{k=1}^{q} vₖρₖ) d²(x, y).    (69)

Since hⱼ(x) ≦ 0 (j = 1, 2, . . . , m), lₖ(x) = 0 (k = 1, 2, . . . , q), uᵀh(y) ≧ 0, vᵀl(y) = 0, and ∑_{i=1}^{p} λᵢρᵢ + ∑_{j=1}^{m} uⱼρⱼ + ∑_{k=1}^{q} vₖρₖ ≧ 0, we obtain ∑_{i=1}^{p} λᵢ(fᵢ(x)/gᵢ(x)) ≧ ∑_{i=1}^{p} λᵢ(fᵢ(y)/gᵢ(y)); that is, λᵀ(fᵢ(x)/gᵢ(x)) ≧ λᵀ(fᵢ(y)/gᵢ(y)). The proof is complete.

Acknowledgments

This work has been supported by the young and middle-aged leader scientific research foundation of Chengdu University of Information Technology (no. J201218) and the talent introduction foundation of Chengdu University of Information Technology (no. KYTZ201203).

References

[1] J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, Princeton, NJ, USA, 1944.
[2] V. Preda, "On efficiency and duality for multiobjective programs," Journal of Mathematical Analysis and Applications, vol. 166, no. 2, pp. 365–377, 1992.
[3] T. R. Gulati and M. A. Islam, "Sufficiency and duality in multiobjective programming involving generalized F-convex functions," Journal of Mathematical Analysis and Applications, vol. 183, no. 1, pp. 181–195, 1994.
[4] J. P. Vial, "Strong and weak convexity of sets and functions," Mathematics of Operations Research, vol. 8, no. 2, pp. 231–259, 1983.
[5] Z. A. Liang, H. X. Huang, and P. M. Pardalos, "Optimality conditions and duality for a class of nonlinear fractional programming problems," Journal of Optimization Theory and Applications, vol. 110, no. 3, pp. 611–619, 2001.
[6] Z. A. Liang, H. X. Huang, and P. M. Pardalos, "Efficiency conditions and duality for a class of multiobjective fractional programming problems," Journal of Global Optimization, vol. 27, no. 4, pp. 447–471, 2003.
[7] T. Weir and B. Mond, "Generalised convexity and duality in multiple objective programming," Bulletin of the Australian Mathematical Society, vol. 39, no. 2, pp. 287–299, 1989.
[8] T. Weir, "A duality theorem for a multiple objective fractional optimization problem," Bulletin of the Australian Mathematical Society, vol. 34, no. 3, pp. 415–425, 1986.
[9] V. Jeyakumar and B. Mond, "On generalised convex mathematical programming," Australian Mathematical Society B, vol. 34, no. 1, pp. 43–53, 1992.
[10] R. R. Egudo, "Efficiency and generalized convex duality for multiobjective programs," Journal of Mathematical Analysis and Applications, vol. 138, no. 1, pp. 84–94, 1989.
[11] A. Cambini and L. Martein, Generalized Convexity and Optimization, Springer-Verlag, Berlin, Germany, 2008.
[12] L. Cuoyun and D. Jiali, Methods and Theories of Multi-Objective Optimization, Jilin Education Press, Changchun, China, 1992.
[13] W. Zezheng and Z. Fenghua, "Optimality and duality for a class of nonlinear fractional programming problems," Journal of Sichuan Normal University, vol. 30, no. 5, pp. 594–597, 2007.
[14] O. L. Mangasarian, Nonlinear Programming, McGraw-Hill, New York, NY, USA, 1969.
