Expert Systems with Applications 37 (2010) 6798–6808


PSOLVER: A new hybrid particle swarm optimization algorithm for solving continuous optimization problems

Ali Haydar Kayhan 1, Huseyin Ceylan *, M. Tamer Ayvaz 2, Gurhan Gurarslan 2

Department of Civil Engineering, Pamukkale University, TR-20070 Denizli, Turkey

Keywords: Particle swarm optimization; Hybridization; Spreadsheets; Solver; Optimization

Abstract

This study deals with a new hybrid global–local optimization algorithm named PSOLVER that combines particle swarm optimization (PSO) and a spreadsheet "Solver" to solve continuous optimization problems. In the hybrid PSOLVER algorithm, PSO and Solver are used as the global and local optimizers, respectively. Thus, PSO and Solver work mutually, feeding each other with initial and sub-initial solution points to produce fine initial solutions and to avoid local optima. A comparative study has been carried out to show the effectiveness of PSOLVER over the standard PSO algorithm. Then, six constrained and three engineering design problems have been solved, and the obtained results are compared with those of other heuristic and non-heuristic solution algorithms. The identified results demonstrate that the hybrid PSOLVER algorithm requires fewer iterations and gives more effective results than other heuristic and non-heuristic solution algorithms.
© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Optimization is the process of finding the best set of solutions to achieve an objective subject to given constraints. It is a challenging part of operations research and has a wide variety of applications in economics, engineering and management sciences (Zahara & Kao, 2009). Over the last decades, a huge number of solution algorithms have been proposed for solving optimization problems. These algorithms may be classified into two main categories: non-heuristic and heuristic algorithms. Non-heuristic algorithms are mostly gradient-based search methods and are very efficient in finding local optimum solutions in reasonable time. However, they usually require gradient information to find the search directions (Lee & Geem, 2005). Thus, they may be inefficient for solving problems where the objective function and the constraints are not differentiable. Therefore, there has been an increasing interest in using heuristic algorithms to solve optimization problems.

Heuristic optimization algorithms take their mathematical basis from natural phenomena. The most widely used heuristic optimization algorithms are genetic algorithms (GA) (Goldberg, 1989; Holland, 1975), tabu search (TS) (Glover, 1977), simulated annealing (SA) (Kirkpatrick, Gelatt, & Vecchi, 1983), ant colony optimization (ACO) (Dorigo & Di Caro, 1999), particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), and harmony search (HS) (Geem, Kim, & Loganathan, 2001). Although these algorithms are very effective at exploring the search space, they require relatively long times to precisely locate the local optimum (Ayvaz, Kayhan, Ceylan, & Gurarslan, 2009; Fesanghary, Mahdavi, Minary-Jolandan, & Alizadeh, 2008; Houck, Joines, & Kay, 1996; Houck, Joines, & Wilson, 1997; Michalewicz, 1992).

Recently, hybrid global–local optimization algorithms have become popular approaches for solving optimization problems. These algorithms integrate the global exploring feature of heuristic algorithms with the local fine-tuning feature of non-heuristic algorithms. Through this integration, optimization problems can be solved more effectively than with either global or local optimization algorithms alone (Shannon, 1998). In these algorithms, the global optimization process searches for the optimum with multiple solution vectors, and then the local optimization process refines the global results by taking them as initial solutions (Ayvaz et al., 2009). Their main drawback, however, is that programming the non-heuristic part may be difficult, since it requires mathematical operations such as taking partial derivatives, calculating Jacobian and/or Hessian matrices, and inverting matrices. Besides, extra effort may be required to handle the given constraint set in non-heuristic algorithms.

Recently, the popularity of spreadsheets for solving optimization problems has been increasing through their mathematical add-ins. Most available spreadsheet packages are coupled with a "Solver" add-in (Frontline System Inc., 1999) which can solve many nonlinear optimization problems without requiring much knowledge about non-heuristic algorithms, and so is extremely easy to use (Stokes & Plummer, 2004). "Solver" solves optimization problems through the generalized reduced gradient (GRG) algorithm (Lasdon, Waren, Jain, & Ratner, 1978) and can handle many linear and nonlinear optimization problems (Ayvaz et al., 2009).

The main objective of this study is to develop a new hybrid global–local optimization algorithm for solving constrained optimization problems. With this purpose, a new hybrid solution algorithm, PSOLVER, is proposed. In the PSOLVER algorithm, PSO is used as a global optimizer and integrated with a spreadsheet "Solver" to improve the PSO results. The performance of the PSOLVER algorithm is tested on several constrained optimization problems, and the results are compared with those of other solution methods in terms of solution accuracy and the number of function evaluations. The identified results show that the PSOLVER algorithm requires fewer function evaluations and gives more effective results than other solution algorithms.

The remainder of this study is organized as follows: first, the main structure of the PSO algorithm is described; second, the necessary steps of building the PSOLVER algorithm are presented; and finally, the performance of the proposed model is tested on different constrained optimization problems.

2. The particle swarm optimization algorithm

The PSO algorithm, first proposed by Kennedy and Eberhart (1995), was developed based on observations of the social behavior of animals, such as bird flocking or fish schooling. Like other evolutionary algorithms, PSO is a population-based optimization algorithm. In PSO, the population is called the swarm and each individual within the swarm is called a particle.

---
* Corresponding author. Tel.: +90 258 296 3386; fax: +90 258 296 3382. E-mail addresses: [email protected] (A.H. Kayhan), [email protected] (H. Ceylan), [email protected] (M.T. Ayvaz), [email protected] (G. Gurarslan).
1 Tel.: +90 258 296 3393; fax: +90 258 296 3382.
2 Tel.: +90 258 296 3384; fax: +90 258 296 3382.
0957-4174/$ - see front matter © 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2010.03.046
During the solution process, each particle in the swarm explores the search space through its current position and velocity. To solve an optimization problem using PSO, all positions and velocities are initially generated at random from the feasible search space. Then, the velocity of each particle is updated based on its own experience and the experiences of the other particles; this is done by using the best position visited so far by the particle itself and the overall best position visited by the swarm. Finally, the positions of the particles are updated through their new velocities, and this process is iterated until the given termination criterion is satisfied. This solution sequence ensures that each particle in the swarm can learn from its own experience (local search) as well as from the experience of the group (global search).

The mathematical statement of the PSO algorithm can be given as follows. Let f be the fitness function governing the problem, n the number of particles in the swarm, m the dimension of the problem (e.g. the number of decision variables), x_i = [x_{i1}, x_{i2}, \ldots, x_{im}]^T and v_i = [v_{i1}, v_{i2}, \ldots, v_{im}]^T the vectors that contain the current positions and velocities of particle i in each dimension, \hat{x}_i = [\hat{x}_{i1}, \hat{x}_{i2}, \ldots, \hat{x}_{im}]^T the vector that contains the current best position of particle i in each dimension, and \hat{g} = [g_1, g_2, \ldots, g_m]^T the vector that contains the global best position in each dimension (\forall i = 1, 2, \ldots, n and \forall j = 1, 2, \ldots, m), with T the transpose operator. The new velocities of the particles are calculated as follows:

v_i^{k+1} = \omega v_i^k + c_1 r_1 (\hat{x}_i - x_i^k) + c_2 r_2 (\hat{g} - x_i^k), \quad \forall i = 1, 2, \ldots, n   (1)

where k is the iteration index, \omega is the inertia weight, c_1 and c_2 are the acceleration coefficients, which determine how much the particle's personal best and the global best influence its movement, and r_1 and r_2 are uniform random numbers between 0 and 1. Note that the values of \omega, c_1 and c_2 control the impact of the previous history of particle velocities on the current one. A larger value of \omega leads to global exploration, whereas smaller values result in a finer search within the solution space. Therefore, a suitable selection of \omega, c_1 and c_2 provides a balance between the global and local search processes (Salman, Ahmad, & Al-Madani, 2002). The terms c_1 r_1 (\hat{x}_i - x_i^k) and c_2 r_2 (\hat{g} - x_i^k) in Eq. (1) are called the cognition and social terms, respectively. The cognition term takes into account only the particle's own experience, whereas the social term signifies the interaction between the particles. Particle velocities in a swarm are usually bounded by a maximum velocity v^{max} = [v_1^{max}, v_2^{max}, \ldots, v_m^{max}]^T, which is calculated as a fraction of the entire search space as follows (Shi & Eberhart, 1998):

v^{max} = c (x^{max} - x^{min})   (2)

where c is a fraction (0 \le c < 1), and x^{max} = [x_1^{max}, x_2^{max}, \ldots, x_m^{max}]^T and x^{min} = [x_1^{min}, x_2^{min}, \ldots, x_m^{min}]^T are the vectors that contain the upper and lower bounds of the search space for each dimension, respectively. After the velocity updating process is performed through Eqs. (1) and (2), the new positions of the particles are calculated as follows:

x_i^{k+1} = x_i^k + v_i^{k+1}, \quad \forall i = 1, 2, \ldots, n   (3)

After the calculation of Eq. (3), the corresponding fitness values are calculated based on the new positions of the particles. Then, the values of \hat{x}_i and \hat{g} (\forall i = 1, 2, \ldots, n) are updated. This solution procedure is repeated until the given termination criterion has been satisfied. Fig. 1 shows the step-by-step solution procedure of the PSO algorithm (Wikipedia, 2009). PSO has been applied to a wide variety of disciplines including neural network training (Eberhart & Hu, 1999; Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995, 1997; Salerno, 1997; Van Den Bergh & Engelbrecht, 2000), biochemistry (Cockshott & Hartman, 2001), manufacturing (Tandon, El-Mounayri, & Kishawy, 2002), electromagnetism (Baumgartner, Magele, & Renhart, 2004; Brandstätter & Baumgartner, 2002; Ciuprina, Loan, & Munteanu, 2002), electrical power (Abido, 2002; Yoshida, Fukuyama, Takayama, & Nakanishi, 1999), optics (Slade, Ressom, Musavi, & Miller, 2004), structural optimization (Fourie & Groenwold, 2002; Perez & Behdinan, 2007; Venter & Sobieszczanski-Sobieski, 2004), end milling (Tandon, 2000) and structural reliability (Elegbede, 2005). Generally, it can be said that PSO is applicable to most optimization problems.

3. Development of the hybrid PSOLVER algorithm

As indicated above, PSO is an efficient optimization algorithm and has been successfully applied to the solution of many optimization problems. However, like other heuristic optimization algorithms, PSO is an evolutionary computation technique and may require long computational times to precisely find an exact optimum. Therefore, hybridizing PSO with a local search method becomes a good idea: PSO locates the regions where the global optimum may exist, and the local search method employs a fine search to precisely find the global optimum.
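Before detailing the hybridization, the standard PSO of Section 2 can be sketched in code. The following Python sketch is ours, not the paper's (the paper's implementation is in VBA); the sphere objective in the usage line and the parameter values w = 0.7, c_1 = c_2 = 1.5 are illustrative choices (the paper itself uses c_1 = c_2 = 2 with a random inertia weight), while the velocity bound follows Eq. (2) with c = 0.1:

```python
import numpy as np

def pso(f, x_min, x_max, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO implementing Eqs. (1)-(3): velocity update, velocity
    clamping by v_max (Eq. (2) with c = 0.1), and position update."""
    rng = np.random.default_rng(seed)
    x_min = np.asarray(x_min, float); x_max = np.asarray(x_max, float)
    m = len(x_min)
    v_max = 0.1 * (x_max - x_min)
    x = rng.uniform(x_min, x_max, (n, m))        # random initial positions
    v = rng.uniform(-v_max, v_max, (n, m))       # random initial velocities
    p_best = x.copy()                            # personal bests (x_hat_i)
    p_val = np.array([f(xi) for xi in x])
    g_best = p_best[p_val.argmin()].copy()       # global best (g_hat)
    for _ in range(iters):
        r1, r2 = rng.random((n, m)), rng.random((n, m))
        v = w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x)    # Eq. (1)
        v = np.clip(v, -v_max, v_max)            # bound velocities by v_max
        x = np.clip(x + v, x_min, x_max)         # Eq. (3), kept inside the box
        val = np.array([f(xi) for xi in x])
        better = val < p_val                     # update personal bests
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()   # update global best
    return g_best, float(p_val.min())

# Usage: minimize the sphere function over [-5, 5]^3
best_x, best_f = pso(lambda z: float(np.sum(z**2)), [-5]*3, [5]*3)
```

The personal- and global-best bookkeeping in the loop body is exactly the update step described after Eq. (3).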
This kind of solution approach makes the convergence rate faster than pure global search and prevents the trapping in local optima from which pure local search suffers (Fan & Zahara, 2007). The current literature includes several studies in which the PSO algorithm is integrated with local search methods. Fan, Liang, and Zahara (2004) developed a hybrid optimization algorithm which integrates PSO and the Nelder–Mead (NM) simplex search method for the optimization of multimodal test functions. Their results showed that the NM–PSO algorithm is superior to other search methods. Victoire and Jeyakumar (2004) integrated the PSO algorithm with the sequential quadratic programming (SQP) technique for solving economic dispatch problems. In their PSO–SQP algorithm, PSO is used as the global optimizer and SQP as the local optimizer, which fine-tunes each solution of PSO. They tested their model's performance on three different economic dispatch problems. The results showed that their PSO–SQP algorithm provides better solutions than those of other solution methods. Kazuhiro, Shinji, and Masataka (2006) combined PSO and sequential linear programming (SLP) to solve structural optimization problems. Their results showed that the hybrid PSO–SLP finds very efficient results. Ghaffari-Miab, Farmahini-Farahani, Faraji-Dana, and Lucas (2007) developed a hybrid solution algorithm which integrates PSO and the gradient-based quasi-Newton method. They applied their hybrid model to the solution of complex time Green's functions of multilayer media. Their results indicated that the hybrid PSO algorithm is superior to other optimization techniques. Zahara and Hu (2008) developed a hybrid NM–PSO algorithm for solving constrained optimization problems. Their NM–PSO algorithm handles constraint sets by using both gradient repair and constraint fitness priority-based ranking operators. According to their results, NM–PSO with embedded constraint handling operators is extremely effective and efficient at locating optimal solutions. In a later study, Zahara and Kao (2009) applied the NM–PSO algorithm of Zahara and Hu (2008) to the solution of engineering design problems with great success.

As summarized above, hybridizing the PSO algorithm with local search methods is an effective and efficient way to solve optimization problems. However, programming these hybrid algorithms may be a difficult task for non-experts, since most of the local search methods require some complex mathematical calculations.

Fig. 1. Step by step solution procedure of PSO algorithm.
Therefore, in this study, PSO is hybridized with a spreadsheet Solver, since this requires little knowledge about the programming of local search methods. Solver is a powerful gradient-based optimization add-in, and most commercial spreadsheet products (Lotus 1-2-3®, Quattro Pro®, Microsoft Excel®) contain it. Solver solves linear and nonlinear optimization problems through the GRG algorithm (Lasdon et al., 1978). It works by first evaluating the functions and derivatives at a starting value of the decision vector, and then iteratively searches for a better solution using a search direction suggested by the derivatives (Stokes & Plummer, 2004). To determine a search direction, Solver uses the quasi-Newton and conjugate gradient methods. Note that the user is not required to provide the partial derivatives with respect to the decision variables in Solver; instead, forward or central difference approximations are used in the search process (OTC, 2009). This may be the main advantage of using Solver as a local optimizer in this study.

It should be noted that the global optimizer PSO and the local optimizer Solver have been integrated by developing a Visual Basic for Applications (VBA) code that runs in the background of a spreadsheet platform (Excel in this study). In this integration, two separate VBA codes have been developed. The first code includes the standard PSO algorithm and is used as the global optimizer. The second code is used for calling the Solver add-in and was developed by creating a VBA macro instead of manually invoking the Solver add-in. Note that a macro is a series of commands grouped together as a single command to accomplish a task automatically, and it can be created through the macro recorder, which saves the series of commands in VBA (Ferreira & Salcedo, 2001). The source code of the recorded macro can easily be modified in the Visual Basic Editor of the spreadsheet (Ferreira & Salcedo, 2001; Microsoft, 1995; Rosen, 1997). By using this feature of the spreadsheets, the recorded Solver macro is integrated with the developed PSO code on the VBA platform.

Note that, regarding the use of a spreadsheet Solver as a local optimizer, Ayvaz et al. (2009) first proposed a hybrid optimization algorithm in which HS and the Solver are integrated to solve engineering optimization problems: the hybrid HS–Solver algorithm. They tested the performance of HS–Solver on 4 unconstrained, 4 constrained and 4 structural engineering problems. Their results indicated that the hybrid HS–Solver algorithm requires fewer function evaluations and finds better or identical objective function values compared with many non-heuristic and heuristic optimization algorithms.
It should be noted that Fesanghary et al. (2008) describe two approaches to integrating global and local search processes. In the first approach, the global search process explores the entire search space until the objective function improvement becomes negligible, and then the local search method performs a fine search by taking the best solution of the global search as a starting point. In the second approach, the global and local search processes work simultaneously, such that all the solutions of the global search are fine-tuned by the local search. When the optimized solution of the local search has a better objective function value than that of the global search, this solution is transferred to the global search, and the solution proceeds until the given termination criterion is satisfied (Fesanghary et al., 2008; Ayvaz et al., 2009). Comparing the two, it is clear that the second approach provides better results than the first. However, the computational cost of the second approach is usually higher, since all the solutions of the global search are subjected to local search. The second approach is adopted in this study: the PSO and Solver optimizers are integrated through a probability P_c, such that a globally generated solution vector is subjected to local search with probability P_c. Our trials, together with the recommendations of Fesanghary et al. (2008) and Ayvaz et al. (2009), indicate that a fairly small P_c value is sufficient for solving many optimization problems. Therefore, we have used the probability P_c = 0.01 throughout the paper. After the given convergence criteria of the Solver are satisfied, the locally improved solution is included in the PSO swarm and the global search proceeds until termination. Fig. 2 shows the step by step procedure of the PSOLVER algorithm.
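The simultaneous coupling described above can be sketched as follows. This is our own illustrative Python sketch, not the paper's VBA implementation: `scipy.optimize.minimize` stands in for the Solver's GRG local search, and all parameter values except P_c = 0.01 and the Eq. (2) velocity bound are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

def psolver_sketch(f, x_min, x_max, n=30, iters=100, p_c=0.01,
                   w=0.7, c1=1.5, c2=1.5, seed=0):
    """Simultaneous global-local hybrid: a PSO loop in which each newly
    generated particle is handed to a gradient-based local optimizer with
    probability p_c, and the improved point is fed back into the swarm."""
    rng = np.random.default_rng(seed)
    x_min = np.asarray(x_min, float); x_max = np.asarray(x_max, float)
    m = len(x_min)
    bounds = list(zip(x_min, x_max))
    v_max = 0.1 * (x_max - x_min)                # Eq. (2) with c = 0.1
    x = rng.uniform(x_min, x_max, (n, m))
    v = rng.uniform(-v_max, v_max, (n, m))
    p_best = x.copy()
    p_val = np.array([f(xi) for xi in x])
    for _ in range(iters):
        g_best = p_best[p_val.argmin()]
        r1, r2 = rng.random((n, m)), rng.random((n, m))
        v = np.clip(w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x),
                    -v_max, v_max)
        x = np.clip(x + v, x_min, x_max)
        val = np.array([f(xi) for xi in x])
        for i in range(n):
            if rng.random() < p_c:               # "Solver run" probability P_c
                res = minimize(f, x[i], bounds=bounds)   # stand-in local search
                if res.fun < val[i]:             # transfer improved solution back
                    x[i], val[i] = res.x, res.fun
            if val[i] < p_val[i]:
                p_best[i], p_val[i] = x[i], val[i]
    i = int(p_val.argmin())
    return p_best[i], p_val[i]

# Usage: a simple 2-D quadratic with minimum 0 at (1, 1)
best_x, best_f = psolver_sketch(lambda z: float(np.sum((z - 1.0)**2)),
                                [-5, -5], [5, 5])
```

The key design point is the inner `if rng.random() < p_c` branch: most iterations are cheap global PSO steps, while the occasional local refinement is written back into the swarm, which is exactly the second (simultaneous) approach.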

4. Numerical applications

In this section, the performance of the PSOLVER algorithm is tested by solving several constrained optimization problems. Before solving these examples, however, it is essential to show the efficiency of PSOLVER over the standard PSO algorithm. With this purpose, a performance evaluation study has been performed by solving a common unconstrained optimization problem using both the PSOLVER and standard PSO algorithms. Then, six constrained benchmark problems and three well-known engineering design problems have been solved, and the results have been compared with those of other non-heuristic and heuristic optimization algorithms. The solution parameters of the PSOLVER algorithm were set as follows: the number of particles is n = 21m + 1 (Zahara & Kao, 2009), the acceleration coefficients are c_1 = c_2 = 2, the inertia factor is \omega = 0.5 + rand(0, 1)/2.0 (Eberhart & Shi, 2001; Hu & Eberhart, 2001), the maximum velocity of the particles is v^{max} = 0.1(x^{max} - x^{min}), and the Solver run probability is set as P_c = 0.01. All the examples have been solved 30 times with different random number seeds to show the robustness of the algorithm. Note that two stopping criteria have been considered: the optimization process ends when the number of generations reaches 1000 or when the reference solution, or a better one, has been obtained.

4.1. Performance evaluation study: Michalewicz's test function

Michalewicz's function is a typical example of a nonlinear multimodal function including n! local optima (Michalewicz, 1992). The function can be given as follows:

Min f(x) = -\sum_{k=1}^{n} \sin(x_k) [\sin(k x_k^2 / \pi)]^{2s}   (4)

s.t. 0 \le x_k \le \pi, \quad k = 1, 2, \ldots, n   (4a)

where the parameter s defines the "steepness" of the valleys or edges and is assumed to be 10 for this solution. This function has a global optimum of f(x*) = -4.687658 when n = 5. Fig. 3 shows the solution space of the function when n = 2.

Fig. 2. Step by step solution procedure of PSOLVER algorithm.

Fig. 3. Michalewicz's test function.


Fig. 4. Convergence history of PSO and PSOLVER.

It can be clearly seen from Fig. 3 that solving this function using a gradient-based optimization algorithm is a quite difficult task, since there are many locations where the gradient of the function equals zero. Therefore, solving this problem through gradient-based algorithms depends on the quality of the initial solutions. In order to test the performance of the PSOLVER algorithm, this problem has been solved using both the PSOLVER and standard PSO algorithms. Note that the same random number seeds, and hence the same initial solutions, have been used in both algorithms. Fig. 4 compares the convergence histories of both algorithms. As can be seen from Fig. 4, both algorithms start from the same initial solution. Although both PSO and the hybrid PSOLVER algorithm find the optimum solution of f(x*) = -4.687658, PSOLVER requires far fewer function evaluations than PSO: only 456, against the 67,600 function evaluations PSO requires to solve the same problem.
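Eq. (4) is straightforward to express in code. The following Python sketch is ours (the paper gives no code); it uses s = 10 as in the paper and evaluates the function near the known two-dimensional minimizer:

```python
import numpy as np

def michalewicz(x, s=10):
    """Michalewicz test function, Eq. (4), with steepness parameter s."""
    x = np.asarray(x, float)
    k = np.arange(1, len(x) + 1)
    return -np.sum(np.sin(x) * np.sin(k * x**2 / np.pi) ** (2 * s))

# Near the known 2-D minimizer (approximately (2.20, 1.57)) the value
# approaches -1.8013; at the origin every term vanishes.
val = michalewicz([2.20, 1.57])
```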

4.2. Example 1

The first minimization problem, which includes 13 decision variables and nine inequality constraints, is given in Eq. (5):

Min f(x) = 5 \sum_{i=1}^{4} x_i - 5 \sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i   (5)

s.t. g_1(x) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0   (5a)
g_2(x) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0   (5b)
g_3(x) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0   (5c)
g_4(x) = -8x_1 + x_{10} \le 0   (5d)
g_5(x) = -8x_2 + x_{11} \le 0   (5e)
g_6(x) = -8x_3 + x_{12} \le 0   (5f)
g_7(x) = -2x_4 - x_5 + x_{10} \le 0   (5g)
g_8(x) = -2x_6 - x_7 + x_{11} \le 0   (5h)
g_9(x) = -2x_8 - x_9 + x_{12} \le 0   (5i)
0 \le x_i \le 1, \quad i = 1, 2, 3, \ldots, 9   (5j)
0 \le x_i \le 100, \quad i = 10, 11, 12   (5k)
0 \le x_{13} \le 1   (5l)

The optimal solution of this problem is at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) with a corresponding function value of f(x*) = -15. This function was previously solved using evolutionary algorithm (EA) (Runarsson & Yao, 2005), cultural differential evolution (CDE) (Becerra & Coello, 2006), filter simulated annealing (FSA) (Hedar & Fukushima, 2006), GA (Chootinan & Chen, 2006), and NM–PSO (Zahara & Hu, 2008) methods. After applying the PSOLVER algorithm to this problem, we obtained the best solution at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) with the corresponding objective value of f(x*) = -15.000000. Table 1 compares the identified results for different solution algorithms. As can be seen from Table 1, while the optimum solution was obtained using GA, EA and NM–PSO after 95,512, 122,000 and 41,959 function evaluations, respectively, the PSOLVER algorithm requires only 679 function evaluations. Therefore, the PSOLVER algorithm is the most effective solution method among the compared methods in terms of the number of function evaluations.

4.3. Example 2

This minimization problem has two decision variables and two inequality constraints as given in Eq. (6):

Min f(x) = (x_1 - 10)^3 + (x_2 - 20)^3   (6)

s.t. g_1(x) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0   (6a)
g_2(x) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0   (6b)
13 \le x_1 \le 100   (6c)
0 \le x_2 \le 100   (6d)

This function has an optimal solution at x* = (14.095, 0.84296) with a corresponding function value of f(x*) = -6961.81388. This problem was previously solved using EA (Runarsson & Yao, 2005), CDE (Becerra & Coello, 2006), FSA (Hedar & Fukushima, 2006), GA (Chootinan & Chen, 2006), and NM–PSO (Zahara & Hu, 2008) methods. Among those studies, the best solution was reported by Zahara and Hu (2008), with an objective function value of f(x*) = -6961.8240 using the NM–PSO algorithm after 9856 function evaluations. We applied the PSOLVER algorithm to this problem and obtained the optimum solution at x* = (14.095, 0.842951), where the corresponding objective function value is f(x*) = -6961.8244. This solution is obtained after 179 function evaluations. The comparison of the identified results for different solution algorithms is given in Table 2. It can be clearly seen from Table 2 that the PSOLVER algorithm provides a better solution than the other solution algorithms with fewer function evaluations.

4.4. Example 3

The third example has 3 constraints and 5 decision variables. These are:

Table 1
Comparison of the identified results for Example 1.

Methods | Best objective function value | Mean objective function value | Worst objective function value | Standard deviation | Number of function evaluations
EA (Runarsson & Yao, 2005) | -15.000000 | -15.000000 | -15.000000 | 0 | 122,000
CDE (Becerra & Coello, 2006) | -15.000000 | -14.999996 | -14.999993 | 0.000002 | 100,100
FSA (Hedar & Fukushima, 2006) | -14.999105 | -14.993316 | -14.979977 | 0.004813 | 205,748
GA (Chootinan & Chen, 2006) | -15.000000 | -15.000000 | -15.000000 | 0 | 95,512
NM–PSO (Zahara & Hu, 2008) | -15.000000 | -15.000000 | -15.000000 | 0 | 41,959
PSOLVER | -15.000000 | -15.000000 | -15.000000 | 0 | 679

Table 2
Comparison of the identified results for Example 2.

Methods | Best objective function value | Mean objective function value | Worst objective function value | Standard deviation | Number of function evaluations
EA (Runarsson & Yao, 2005) | -6961.8139 | -6961.8139 | -6961.8139 | 0 | 56,000
CDE (Becerra & Coello, 2006) | -6961.8139 | -6961.8139 | -6961.8139 | 0 | 100,100
FSA (Hedar & Fukushima, 2006) | -6961.8139 | -6961.8139 | -6961.8139 | 0 | 44,538
GA (Chootinan & Chen, 2006) | -6961.8139 | -6961.8139 | -6961.8139 | 0 | 13,577
NM–PSO (Zahara & Hu, 2008) | -6961.8240 | -6961.8240 | -6961.8240 | 0 | 9856
PSOLVER | -6961.8244 | -6961.8244 | -6961.8244 | 0 | 179

Table 3
Comparison of the identified results for Example 3.

Methods | Best objective function value | Mean objective function value | Worst objective function value | Standard deviation | Number of function evaluations
EA (Runarsson & Yao, 2005) | 0.053942 | 0.111671 | 0.438804 | 1.40E-01 | 109,200
CDE (Becerra & Coello, 2006) | 0.056180 | 0.288324 | 0.392100 | 1.67E-01 | 100,100
FSA (Hedar & Fukushima, 2006) | 0.053950 | 0.297720 | 0.438851 | 1.89E-01 | 120,268
NM–PSO (Zahara & Hu, 2008) | 0.053949 | 0.054854 | 0.058301 | 1.26E-03 | 265,548
PSOLVER | 0.053949 | 0.053950 | 0.053950 | 1.14E-07 | 779

Min f(x) = e^{x_1 x_2 x_3 x_4 x_5}   (7)

s.t. g_1(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 - 10 = 0   (7a)
g_2(x) = x_2 x_3 - 5 x_4 x_5 = 0   (7b)
g_3(x) = x_1^3 + x_2^3 + 1 = 0   (7c)
-2.3 \le x_i \le 2.3, \quad i = 1, 2   (7d)
-3.2 \le x_i \le 3.2, \quad i = 3, 4, 5   (7e)

For this problem, the optimum solution is x* = (-1.717143, 1.595709, 1.827247, -0.7636413, -0.763645), where f(x*) = 0.0539498. This problem was previously solved using EA (Runarsson & Yao, 2005), CDE (Becerra & Coello, 2006), FSA (Hedar & Fukushima, 2006), and NM–PSO (Zahara & Hu, 2008) methods. Table 3 shows the optimal solutions of PSOLVER and the previous solution algorithms. As can be seen from Table 3, the NM–PSO and PSOLVER algorithms give the best result with an objective function value of f(x*) = 0.053949. It should be noted that the lowest standard deviation, which is observed with the PSOLVER algorithm, demonstrates its higher robustness in comparison with the other algorithms. The best solution vector x* = (-1.717546, 1.596176, 1.826500, -0.763605, -0.763594) has been obtained after 779 function evaluations with PSOLVER, while the NM–PSO algorithm requires 265,548.

4.5. Example 4

This example has 5 decision variables and 6 inequality constraints as given in Eq. (8):

Min f(x) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141   (8)

s.t. g_1(x) = 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5 - 92 \le 0   (8a)
g_2(x) = -85.334407 - 0.0056858 x_2 x_5 - 0.0006262 x_1 x_4 + 0.0022053 x_3 x_5 \le 0   (8b)
g_3(x) = 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2 - 110 \le 0   (8c)
g_4(x) = -80.51249 - 0.0071317 x_2 x_5 - 0.0029955 x_1 x_2 - 0.0021813 x_3^2 + 90 \le 0   (8d)
g_5(x) = 9.300961 + 0.0047026 x_3 x_5 + 0.0012547 x_1 x_3 + 0.0019085 x_3 x_4 - 25 \le 0   (8e)
g_6(x) = -9.300961 - 0.0047026 x_3 x_5 - 0.0012547 x_1 x_3 - 0.0019085 x_3 x_4 + 20 \le 0   (8f)
78 \le x_1 \le 102   (8g)
33 \le x_2 \le 45   (8h)
27 \le x_i \le 45, \quad i = 3, 4, 5   (8i)

The optimal solution of the problem is at x* = (78, 33, 29.995256025682, 45, 36.775812905788) with a corresponding function value of f(x*) = -30,665.539. This function was previously solved using homomorphous mapping (HM) (Koziel & Michalewicz, 1999), stochastic ranking (SR) (Runarsson & Yao, 2000), evolutionary programming (EP) (Coello & Becerra, 2004), hybrid particle swarm optimization (HPSO) (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Among those studies, the best solution was obtained by He and Wang (2007) using the HPSO algorithm, with an objective function value of f(x*) = -30,665.539 after 81,000 iterations. We obtained the best solution using the PSOLVER algorithm at x* = (78, 33, 29.995256025682, 45, 36.775812905788) with the corresponding objective value of f(x*) = -30,665.539. Table 4 compares the identified results of different solution algorithms. It can be seen in Table 4 that PSOLVER gives the same result as SR, HPSO and NM–PSO, and a better one than HM and EP. It should be noted that the hybrid PSOLVER requires only 328 function evaluations, which is much less in comparison with the other methods.

Table 4
Comparison of the identified results for Example 4.

Methods | Best objective function value | Mean objective function value | Worst objective function value | Standard deviation | Number of function evaluations
HM (Koziel & Michalewicz, 1999) | -30,664.500 | -30,665.300 | -30,645.900 | N/A | 1,400,000
SR (Runarsson & Yao, 2000) | -30,665.539 | -30,665.539 | -30,665.539 | 0.0000200 | 350,000
EP (Coello & Becerra, 2004) | -30,665.500 | -30,662.500 | -30,662.200 | 9.3000000 | 50,020
HPSO (He & Wang, 2007) | -30,665.539 | -30,665.539 | -30,665.539 | 0.0000017 | 81,000
NM–PSO (Zahara & Kao, 2009) | -30,665.539 | -30,665.539 | -30,665.539 | 0.0000140 | 19,568
PSOLVER | -30,665.539 | -30,665.539 | -30,665.539 | 0.0000024 | 328
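As a quick standalone sanity check (our own code, not part of the paper), the reported optimum of Example 4 can be substituted back into Eq. (8) to confirm the objective value and to see which constraints are active:

```python
# Substitute the reported optimum of Example 4 back into Eq. (8).
x1, x2, x3, x4, x5 = 78.0, 33.0, 29.995256025682, 45.0, 36.775812905788

f = 5.3578547*x3**2 + 0.8356891*x1*x5 + 37.293239*x1 - 40792.141

# The six inequalities pair into box bounds on three responses:
u1 = 85.334407 + 0.0056858*x2*x5 + 0.0006262*x1*x4 - 0.0022053*x3*x5  # g1/g2: 0 <= u1 <= 92
u2 = 80.51249 + 0.0071317*x2*x5 + 0.0029955*x1*x2 + 0.0021813*x3**2   # g3/g4: 90 <= u2 <= 110
u3 = 9.300961 + 0.0047026*x3*x5 + 0.0012547*x1*x3 + 0.0019085*x3*x4   # g5/g6: 20 <= u3 <= 25
# u1 sits at its upper bound and u3 at its lower bound: g1 and g6 are active.
```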

4.6. Example 5

The fifth example has two decision variables and two inequality constraints, as given in Eq. (9):

Max f(x) = sin^3(2πx1) sin(2πx2) / (x1^3 (x1 + x2))   (9)
s.t. g1(x) = x1^2 − x2 + 1 ≤ 0   (9a)
g2(x) = 1 − x1 + (x2 − 4)^2 ≤ 0   (9b)
0 ≤ x1 ≤ 10   (9c)
0 ≤ x2 ≤ 10   (9d)

This function has the global optimum at x* = (1.2279713, 4.2453733) with a corresponding function value of f(x*) = 0.095825. It was previously solved using HM (Koziel & Michalewicz, 1999), SR (Runarsson & Yao, 2000), EP (Coello & Becerra, 2004), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). We applied the PSOLVER algorithm to this problem and obtained the optimum solution at x* = (1.2279713, 4.2453733), with a corresponding objective function value of f(x*) = 0.095825, after 308 function evaluations. The comparison of the identified results for the different solution algorithms is given in Table 5, from which it can be clearly seen that the PSOLVER algorithm finds the optimal solution with the lowest number of function evaluations among the compared algorithms.

4.7. Example 6

This maximization problem has three decision variables and one inequality constraint, as given in Eq. (10):

Max f(x) = (100 − (x1 − 5)^2 − (x2 − 5)^2 − (x3 − 5)^2) / 100   (10)
s.t. g(x) = (x1 − p)^2 + (x2 − q)^2 + (x3 − r)^2 − 0.0625 ≤ 0   (10a)
0 ≤ xi ≤ 10, i = 1, 2, 3 and p, q, r = 1, 2, ..., 9   (10b)

For this example, the feasible region of the search space consists of 9^3 disjoint spheres. A point (x1, x2, x3) is feasible if and only if there exist p, q, r such that the above inequality holds (Zahara & Kao, 2009). For this problem, the optimum solution is x* = (5, 5, 5) with f(x*) = 1. The problem was previously solved using HM (Koziel & Michalewicz, 1999), SR (Runarsson & Yao, 2000), EP (Coello & Becerra, 2004), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Table 6 shows the identified results of PSOLVER and the previous studies. As can be seen from Table 6, the PSOLVER algorithm reaches x* = (5, 5, 5) with an objective function value of f(x*) = 1, requiring only 584 function evaluations.

Table 5
Comparison of the identified results for Example 5.

Methods | Best f(x) | Mean f(x) | Worst f(x) | Standard deviation | Number of function evaluations
HM (Koziel & Michalewicz, 1999) | 0.095825 | 0.089157 | 0.029144 | N/A | 1,400,000
SR (Runarsson & Yao, 2000) | 0.095825 | 0.095825 | 0.095825 | 2.6E−17 | 350,000
EP (Coello & Becerra, 2004) | 0.095825 | 0.095825 | 0.095825 | 0 | 50,020
HPSO (He & Wang, 2007) | 0.095825 | 0.095825 | 0.095825 | 1.2E−10 | 81,000
NM–PSO (Zahara & Kao, 2009) | 0.095825 | 0.095825 | 0.095825 | 3.5E−08 | 2103
PSOLVER | 0.095825 | 0.095825 | 0.095825 | 2.7E−12 | 308
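The reported optimum of Eq. (9) is easy to verify numerically; the short check below (ours, not from the paper) evaluates the objective and both constraints at x* = (1.2279713, 4.2453733):

```python
import math

# Objective of Eq. (9) and constraints (9a)-(9b) of Example 5 at the
# reported optimum.
def f(x1, x2):
    return (math.sin(2 * math.pi * x1) ** 3 * math.sin(2 * math.pi * x2)
            / (x1 ** 3 * (x1 + x2)))

x1, x2 = 1.2279713, 4.2453733
g1 = x1 ** 2 - x2 + 1          # Eq. (9a), must be <= 0
g2 = 1 - x1 + (x2 - 4) ** 2    # Eq. (9b), must be <= 0

print(abs(f(x1, x2) - 0.095825) < 1e-5, g1 <= 0, g2 <= 0)  # → True True True
```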

Table 6
Comparison of the identified results for Example 6.

Methods | Best f(x) | Mean f(x) | Worst f(x) | Standard deviation | Number of function evaluations
HM (Koziel & Michalewicz, 1999) | 0.999999 | 0.999135 | 0.991950 | N/A | 1,400,000
SR (Runarsson & Yao, 2000) | 1.000000 | 1.000000 | 1.000000 | 0 | 350,000
EP (Coello & Becerra, 2004) | 1.000000 | 0.996375 | 0.996375 | 9.7E−03 | 50,020
HPSO (He & Wang, 2007) | 1.000000 | 1.000000 | 1.000000 | 1.6E−15 | 81,000
NM–PSO (Zahara & Kao, 2009) | 1.000000 | 1.000000 | 1.000000 | 0 | 923
PSOLVER | 1.000000 | 1.000000 | 1.000000 | 2.6E−14 | 584
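Because the feasible region of Eq. (10) is a union of 9^3 spheres, feasibility can be checked by direct enumeration; the sketch below (our illustration, not the authors' code) confirms that the optimum (5, 5, 5) lies at a sphere centre and attains f = 1:

```python
from itertools import product

# Feasibility test for Eq. (10a): the point must lie in at least one of the
# 9^3 spheres of radius 0.25 centred at integer points (p, q, r).
def feasible(x1, x2, x3):
    return any((x1 - p) ** 2 + (x2 - q) ** 2 + (x3 - r) ** 2 <= 0.0625
               for p, q, r in product(range(1, 10), repeat=3))

# Objective of Eq. (10).
def f(x1, x2, x3):
    return (100 - (x1 - 5) ** 2 - (x2 - 5) ** 2 - (x3 - 5) ** 2) / 100

print(feasible(5, 5, 5), f(5, 5, 5))  # → True 1.0
```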


Fig. 5. A tension/compression string design problem.

4.8. Example 7: The tension/compression string design problem

The tension/compression string design problem is described in Arora (1989); the aim is to minimize the weight f(x) of a tension/compression spring (shown in Fig. 5) subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and the design variables. The design variables are the wire diameter d (x1), the mean coil diameter D (x2) and the number of active coils P (x3). The mathematical formulation of this problem can be described as follows:

Min f(x) = (x3 + 2) x2 x1^2   (11)
s.t. g1(x) = 1 − x2^3 x3 / (71,785 x1^4) ≤ 0   (11a)
g2(x) = (4x2^2 − x1 x2) / (12,566 (x2 x1^3 − x1^4)) + 1/(5108 x1^2) − 1 ≤ 0   (11b)
g3(x) = 1 − 140.45 x1 / (x2^2 x3) ≤ 0   (11c)
g4(x) = (x2 + x1)/1.5 − 1 ≤ 0   (11d)
0.05 ≤ x1 ≤ 2.00   (11e)
0.25 ≤ x2 ≤ 1.30   (11f)
2.00 ≤ x3 ≤ 15.00   (11g)

This problem has been used as a benchmark for testing different optimization methods, such as the GA-based co-evolution model (GA1) (Coello, 2000), the GA using dominance-based tournament selection (GA2) (Coello & Montes, 2002), EP (Coello & Becerra, 2004), the co-evolutionary particle swarm optimization approach (CPSO) (He & Wang, 2006), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). After applying the PSOLVER algorithm to this problem, the best solution is obtained at x = (0.051686, 0.356650, 11.292950) with the corresponding value of f(x) = 0.0126652. The best solutions obtained by the above-mentioned methods and by the PSOLVER algorithm are given in Tables 7 and 8. It can be seen from Table 8 that the standard deviation of the PSOLVER solutions is the smallest. In addition, the PSOLVER requires only 253 function evaluations for solving this problem, while GA2, HPSO, CPSO, and NM–PSO require 80,000, 81,000, 200,000, and 80,000 function evaluations, respectively. Therefore, the PSOLVER is an efficient approach for locating the global optimum of this problem.

4.9. Example 8: The welded beam design problem

This design problem, which has often been used as a benchmark, was first proposed by Coello (2000). In this problem, a welded beam is designed for minimum cost subject to constraints on the shear stress (τ), the bending stress in the beam (σ), the buckling load on the bar (Pc), the end deflection of the beam (δ), and side constraints. There are four design variables, as shown in Fig. 6: h (x1), l (x2), t (x3) and b (x4). The mathematical formulation of the problem is as follows:

Min f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14 + x2)   (12)
s.t. g1(x) = τ(x) − τmax ≤ 0   (12a)
g2(x) = σ(x) − σmax ≤ 0   (12b)
g3(x) = x1 − x4 ≤ 0   (12c)
g4(x) = 0.10471 x1^2 + 0.04811 x3 x4 (14 + x2) − 5 ≤ 0   (12d)
g5(x) = 0.125 − x1 ≤ 0   (12e)
g6(x) = δ(x) − δmax ≤ 0   (12f)
g7(x) = P − Pc(x) ≤ 0   (12g)
0.1 ≤ x1, x4 ≤ 2.0   (12h)
0.1 ≤ x2, x3 ≤ 10.0   (12i)

Table 7
Comparison of the best solutions for the tension/compression spring design problem.

Methods | x1 (d) | x2 (D) | x3 (P) | f(x)
GA1 (Coello, 2000) | 0.051480 | 0.351661 | 11.632201 | 0.0127048
GA2 (Coello & Montes, 2002) | 0.051989 | 0.363965 | 10.890522 | 0.0126810
EP (Coello & Becerra, 2004) | 0.050000 | 0.317395 | 14.031795 | 0.0127210
CPSO (He & Wang, 2006) | 0.051728 | 0.357644 | 11.244543 | 0.0126747
HPSO (He & Wang, 2007) | 0.051706 | 0.357126 | 11.265083 | 0.0126652
NM–PSO (Zahara & Kao, 2009) | 0.051620 | 0.355498 | 11.333272 | 0.0126302
PSOLVER | 0.051686 | 0.356650 | 11.292950 | 0.0126652

Table 8
Statistical results for the tension/compression spring design problem.

Methods | Best f(x) | Mean f(x) | Worst f(x) | Standard deviation | Number of function evaluations
GA1 (Coello, 2000) | 0.0127048 | 0.0127690 | 0.0128220 | 3.94E−05 | N/A
GA2 (Coello & Montes, 2002) | 0.0126810 | 0.0127420 | 0.0129730 | 5.90E−05 | 80,000
EP (Coello & Becerra, 2004) | 0.0127210 | 0.0135681 | 0.0151160 | 8.42E−04 | N/A
CPSO (He & Wang, 2006) | 0.0126747 | 0.0127300 | 0.0129240 | 5.20E−04 | 200,000
HPSO (He & Wang, 2007) | 0.0126652 | 0.0127072 | 0.0127190 | 1.58E−05 | 81,000
NM–PSO (Zahara & Kao, 2009) | 0.0126302 | 0.0126314 | 0.0126330 | 8.74E−07 | 80,000
PSOLVER | 0.0126652 | 0.0126652 | 0.0126652 | 2.46E−09 | 253
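The PSOLVER spring design solution from Table 7 can be checked directly against Eq. (11). Note that g1 and g2 are active at the optimum, so with the rounded published values they are only satisfied to within rounding precision (a check we add for illustration, not from the paper):

```python
# Weight and constraints of Eq. (11) at the PSOLVER solution in Table 7.
x1, x2, x3 = 0.051686, 0.356650, 11.292950   # d, D, P

f = (x3 + 2) * x2 * x1 ** 2
g1 = 1 - x2 ** 3 * x3 / (71785 * x1 ** 4)                      # active
g2 = ((4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
      + 1 / (5108 * x1 ** 2) - 1)                              # active
g3 = 1 - 140.45 * x1 / (x2 ** 2 * x3)
g4 = (x2 + x1) / 1.5 - 1

print(abs(f - 0.0126652) < 1e-5, g1 < 1e-3, g2 < 1e-3, g3 <= 0, g4 <= 0)
```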


Fig. 6. The welded beam design problem.

where

τ(x) = sqrt((τ′)^2 + 2 τ′ τ″ x2/(2R) + (τ″)^2)   (12j)
τ′ = P/(√2 x1 x2), τ″ = M R/J, M = P (L + x2/2)   (12k)
R = sqrt(x2^2/4 + ((x1 + x3)/2)^2)   (12l)
J = 2 {√2 x1 x2 [x2^2/12 + ((x1 + x3)/2)^2]}   (12m)
σ(x) = 6 P L/(x4 x3^2), δ(x) = 4 P L^3/(E x3^3 x4)   (12n)
Pc(x) = (4.013 E sqrt(x3^2 x4^6/36)/L^2) (1 − (x3/(2L)) sqrt(E/(4G)))   (12o)
P = 6000 lb, L = 14 in., δmax = 0.25 in.   (12p)
E = 30 × 10^6 psi, G = 12 × 10^6 psi, τmax = 13,600 psi, σmax = 30,000 psi   (12q)

The methods previously applied to this problem include GA1 (Coello, 2000), GA2 (Coello & Montes, 2002), EP (Coello & Becerra, 2004), CPSO (He & Wang, 2006), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Among those studies, the best solution was obtained by NM–PSO (Zahara & Hu, 2008), with an objective function value of f(x) = 1.724717 after 80,000 function evaluations.

We applied the PSOLVER algorithm to this problem and obtained the same best solution of f(x) = 1.724717. The comparison of the identified results is given in Tables 9 and 10. From Table 9, it can be seen that the best solution found by the PSOLVER is the same as that of NM–PSO and better than those obtained by the other methods. The standard deviation of the PSOLVER results is the smallest, and the average number of function evaluations of the PSOLVER is only 297. Therefore, the PSOLVER is the most efficient of the compared methods.

5. Example 9: The pressure vessel design problem

In the pressure vessel design problem, proposed by Kannan and Kramer (1994), the aim is to minimize the total cost, including the cost of material, forming and welding. A cylindrical vessel is capped at both ends by hemispherical heads, as shown in Fig. 7. There are four design variables in this problem: Ts (x1, thickness of the shell), Th (x2, thickness of the head), R (x3, inner radius) and L (x4, length of the cylindrical section of the vessel). Among the four design variables, Ts and Th are expected to be integer multiples of 0.0625 in., while R and L are continuous variables. The problem can be formulated as follows (Kannan & Kramer, 1994):

Min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3   (13)
s.t. g1(x) = −x1 + 0.0193 x3 ≤ 0   (13a)
g2(x) = −x2 + 0.00954 x3 ≤ 0   (13b)

Fig. 7. Pressure vessel design problem.

Table 9
Comparison of the best solutions for the welded beam design problem.

Methods | x1 (h) | x2 (l) | x3 (t) | x4 (b) | f(x)
GA1 (Coello, 2000) | 0.208800 | 3.420500 | 8.997500 | 0.210000 | 1.748309
GA2 (Coello & Montes, 2002) | 0.205986 | 3.471328 | 9.020224 | 0.206480 | 1.728226
EP (Coello & Becerra, 2004) | 0.205700 | 3.470500 | 9.036600 | 0.205700 | 1.724852
CPSO (He & Wang, 2006) | 0.202369 | 3.544214 | 9.048210 | 0.205723 | 1.728024
HPSO (He & Wang, 2007) | 0.205730 | 3.470489 | 9.033624 | 0.205730 | 1.724852
NM–PSO (Zahara & Kao, 2009) | 0.205830 | 3.468338 | 9.033624 | 0.205730 | 1.724717
PSOLVER | 0.205830 | 3.468338 | 9.036624 | 0.205730 | 1.724717

Table 10
Statistical results for the welded beam design problem.

Methods | Best f(x) | Mean f(x) | Worst f(x) | Standard deviation | Number of function evaluations
GA1 (Coello, 2000) | 1.748309 | 1.771973 | 1.785835 | 1.12E−02 | N/A
GA2 (Coello & Montes, 2002) | 1.728226 | 1.792654 | 1.993408 | 7.47E−02 | 80,000
EP (Coello & Becerra, 2004) | 1.724852 | 1.971809 | 3.179709 | 4.43E−01 | N/A
CPSO (He & Wang, 2006) | 1.728024 | 1.748831 | 1.782143 | 1.29E−02 | 200,000
HPSO (He & Wang, 2007) | 1.724852 | 1.749040 | 1.814295 | 4.01E−02 | 81,000
NM–PSO (Zahara & Kao, 2009) | 1.724717 | 1.726373 | 1.733393 | 3.50E−03 | 80,000
PSOLVER | 1.724717 | 1.724717 | 1.724717 | 1.62E−11 | 297
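As an illustration (not the authors' code), substituting the PSOLVER solution from Table 9 into the cost function of Eq. (12) reproduces the reported objective value:

```python
# Welding cost of Eq. (12) at the PSOLVER solution in Table 9.
x1, x2, x3, x4 = 0.205830, 3.468338, 9.036624, 0.205730   # h, l, t, b

f = 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14 + x2)
print(abs(f - 1.724717) < 1e-4)  # → True
```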

Table 11
Comparison of the best solutions for the pressure vessel design problem.

Methods | x1 (Ts) | x2 (Th) | x3 (R) | x4 (L) | f(x)
GA1 (Coello, 2000) | 0.8125 | 0.4375 | 40.3239 | 200.0000 | 6288.7445
GA2 (Coello & Montes, 2002) | 0.8125 | 0.4375 | 42.0974 | 176.6540 | 6059.9463
CPSO (He & Wang, 2006) | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 6061.0777
HPSO (He & Wang, 2007) | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143
NM–PSO (Zahara & Kao, 2009) | 0.8036 | 0.3972 | 41.6392 | 182.4120 | 5930.3137
PSOLVER | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143

Table 12
Statistical results for the pressure vessel design problem.

Methods | Best f(x) | Mean f(x) | Worst f(x) | Standard deviation | Number of function evaluations
GA1 (Coello, 2000) | 6288.7445 | 6293.8432 | 6308.1497 | 7.413E+00 | N/A
GA2 (Coello & Montes, 2002) | 6059.9463 | 6177.2533 | 6469.3220 | 1.309E+02 | 80,000
CPSO (He & Wang, 2006) | 6061.0777 | 6147.1332 | 6363.8041 | 8.645E+01 | 200,000
HPSO (He & Wang, 2007) | 6059.7143 | 6099.9323 | 6288.6770 | 8.620E+01 | 81,000
NM–PSO (Zahara & Kao, 2009) | 5930.3137 | 5946.7901 | 5960.0557 | 9.161E+00 | 80,000
PSOLVER | 6059.7143 | 6059.7143 | 6059.7143 | 4.625E−12 | 310

g3(x) = −π x3^2 x4 − (4/3) π x3^3 + 1,296,000 ≤ 0   (13c)
g4(x) = x4 − 240 ≤ 0   (13d)
0 ≤ x1, x2 ≤ 100   (13e)
10 ≤ x3, x4 ≤ 200   (13f)
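A quick check of the PSOLVER solution from Table 11 against Eq. (13) (our illustration, not the paper's code) confirms the cost and the discreteness requirement on Ts and Th:

```python
# Cost of Eq. (13) and the 0.0625-in. discreteness requirement at the PSOLVER
# solution in Table 11. Constraints g1 and g3 are active at this point, so
# they hold only to the rounding of the published values and are not asserted.
x1, x2, x3, x4 = 0.8125, 0.4375, 42.0984, 176.6366   # Ts, Th, R, L

f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
     + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

# T_s and T_h must be integer multiples of 0.0625 in.
discrete_ok = (x1 / 0.0625).is_integer() and (x2 / 0.0625).is_integer()

print(abs(f - 6059.7143) < 0.1, discrete_ok)  # → True True
```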

This problem has been solved before using the previously mentioned GA1 (Coello, 2000), GA2 (Coello & Montes, 2002), CPSO (He & Wang, 2006), HPSO (He & Wang, 2007), and NM–PSO (Zahara & Kao, 2009). Their best solutions are compared against those produced by the PSOLVER in Tables 11 and 12. In this problem, the decision variables x1 and x2 are expected to be integer multiples of 0.0625 in. The previous best solutions obtained by the other methods, except NM–PSO, satisfy this requirement; as can be seen from Table 11, the values of x1 and x2 given for NM–PSO are not integer multiples of 0.0625 in. Therefore, the HPSO and PSOLVER methods give the best results, considering that the NM–PSO solution is not feasible for this problem. It should be noted that the PSOLVER requires only about 310 function evaluations to obtain a feasible solution, while GA2, HPSO, CPSO, and NM–PSO require 80,000, 81,000, 200,000, and 80,000 fitness function evaluations, respectively. In addition, the standard deviation of the PSOLVER results is the smallest. Considering these statistical and comparative results, it can be concluded that PSOLVER is more efficient than the other methods for the pressure vessel design problem.

6. Conclusion

In this study, a new hybrid global–local optimization algorithm is proposed that combines PSO with a spreadsheet "Solver" for solving continuous optimization problems. In the proposed PSOLVER algorithm, PSO is used as the global optimizer and Solver as the local optimizer. During the optimization process, PSO and Solver work mutually by feeding each other with initial and sub-initial solution points. For this purpose, a VBA code has been developed on the background of an Excel spreadsheet to integrate the PSO and Solver processes.

The main advantages of PSOLVER over the standard PSO algorithm were first demonstrated in a comparative study, and six constrained and three engineering design problems were then solved using the PSOLVER algorithm. The results showed that the proposed algorithm provides better solutions than other heuristic and non-heuristic optimization techniques in terms of both objective function values and the number of function evaluations. The most important contribution of the proposed hybrid PSOLVER algorithm is that it requires far fewer iterations than the other solution approaches. It should be noted that spreadsheet applications may require long run times because data processing is displayed on the screen; this deficiency can be overcome by deactivating the screen-updating property during the optimization process. Finally, the PSOLVER algorithm may be usefully applied to real-world optimization problems that require significant computational effort.
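The global–local interplay summarized above can be sketched compactly. The following sketch is ours, not the paper's VBA implementation: a plain PSO loop in which, with a small per-iteration probability (mirroring the Solver run probability Pc mentioned in the paper), the best point is handed to a local optimizer. A crude coordinate search stands in for Excel Solver's GRG engine, and all parameter values and the test function are illustrative.

```python
import random

random.seed(1)  # reproducible demo run

def local_refine(f, x, step=0.1, shrink=0.5, iters=50):
    """Stand-in for the Solver call: greedy coordinate search around x."""
    x, fx = list(x), f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink   # no move helped: tighten the search radius
    return x, fx

def psolver(f, dim, bounds, n=20, iters=100, pc=0.05, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest, pf = [list(x) for x in xs], [f(x) for x in xs]
    g = min(range(n), key=lambda i: pf[i])
    gbest, gf = list(pbest[g]), pf[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):                  # standard PSO velocity update
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fx = f(xs[i])
            if fx < pf[i]:
                pbest[i], pf[i] = list(xs[i]), fx
                if fx < gf:
                    gbest, gf = list(xs[i]), fx
        if random.random() < pc:                  # occasional "Solver" run
            gbest, gf = local_refine(f, gbest)    # refined point re-enters swarm
    return gbest, gf

sphere = lambda x: sum(xi * xi for xi in x)       # illustrative test function
best, val = psolver(sphere, dim=3, bounds=(-5.0, 5.0))
print("best f =", val)
```

In the actual PSOLVER, the locally refined point is fed back into the swarm as the new global best, so the two optimizers repeatedly exchange initial and sub-initial solutions, which is what keeps the hybrid from stalling in local optima.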

References

Abido, M. A. (2002). Optimal power flow using particle swarm optimization. International Journal of Electrical Power and Energy Systems, 24(7), 563–571.
Arora, J. S. (1989). Introduction to optimum design. New York: McGraw-Hill.
Ayvaz, M. T., Kayhan, A. H., Ceylan, H., & Gurarslan, G. (2009). Hybridizing harmony search algorithm with a spreadsheet solver for solving continuous engineering optimization problems. Engineering Optimization, 41(12), 1119–1144.
Baumgartner, U., Magele, Ch., & Renhart, W. (2004). Pareto optimality and particle swarm optimization. IEEE Transactions on Magnetics, 40(2), 1172–1175.
Becerra, R. L., & Coello, C. A. C. (2006). Cultured differential evolution for constrained optimization. Computer Methods in Applied Mechanics and Engineering, 195(33–36), 4303–4322.
Brandstätter, B., & Baumgartner, U. (2002). Particle swarm optimization – mass spring system analogon. IEEE Transactions on Magnetics, 38(2), 997–1000.
Chootinan, P., & Chen, A. (2006). Constraint handling in genetic algorithms using a gradient-based repair method. Computers and Operations Research, 33(8), 2263–2281.
Ciuprina, G., Loan, D., & Munteanu, I. (2002). Use of intelligent-particle swarm optimization in electromagnetics. IEEE Transactions on Magnetics, 38(2), 1037–1040.
Cockshott, A. R., & Hartman, B. E. (2001). Improving the fermentation medium for Echinocandin B production part II: Particle swarm optimization. Process Biochemistry, 36, 661–669.
Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41, 113–127.
Coello, C. A. C., & Becerra, R. L. (2004). Efficient evolutionary optimization through the use of a cultural algorithm. Engineering Optimization, 36(2), 219–236.
Coello, C. A. C., & Montes, E. M. (2002). Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 16, 193–203.
Dorigo, M., & Di Caro, G. (1999). Ant colony optimisation: A new meta-heuristic. In Proceedings of the congress on evolutionary computation (Vol. 2, pp. 1470–1477).
Eberhart, R. C., & Hu, X. (1999). Human tremor analysis using particle swarm optimization. In Proceedings of the congress on evolutionary computation, Washington, DC, USA (pp. 1927–1930).
Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the sixth international symposium on micro machine and human science, Nagoya, Japan (pp. 39–43).
Eberhart, R. C., & Shi, Y. (2001). Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the congress on evolutionary computation, Seoul, Korea (pp. 27–30).
Elegbede, C. (2005). Structural reliability assessment based on particle swarm optimization. Structural Safety, 27(2), 171–186.
Fan, S. S., Liang, Y. C., & Zahara, E. (2004). Hybrid simplex search and particle swarm optimization for the global optimization of multimodal functions. Engineering Optimization, 36(4), 401–418.
Fan, S. S., & Zahara, E. (2007). A hybrid simplex search and particle swarm optimization for unconstrained optimization. European Journal of Operational Research, 181, 527–548.
Ferreira, E. N. C., & Salcedo, R. (2001). Can spreadsheet solvers solve demanding optimization problems? Computer Applications in Engineering Education, 9(1), 49–56.
Fesanghary, M., Mahdavi, M., Minary-Jolandan, M., & Alizadeh, Y. (2008). Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems. Computer Methods in Applied Mechanics and Engineering, 197, 3080–3091.
Fourie, P. C., & Groenwold, A. A. (2002). The particle swarm optimization algorithm in size and shape optimization. Structural and Multidisciplinary Optimization, 23(4), 259–267.
Frontline System Inc. (1999). A tutorial on spreadsheet optimization.
Geem, Z. W., Kim, J. H., & Loganathan, G. V. (2001). A new heuristic optimization algorithm: Harmony search. Simulation, 76(2), 60–68.
Ghaffari-Miab, M., Farmahini-Farahani, A., Faraji-Dana, R., & Lucas, C. (2007). An efficient hybrid swarm intelligence–gradient optimization method for complex time Green's functions of multilayer media. Progress in Electromagnetics Research, 77, 181–192.
Glover, F. (1977). Heuristics for integer programming using surrogate constraints. Decision Sciences, 8(1), 156–166.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.
Hedar, A. D., & Fukushima, M. (2006). Derivative-free filter simulated annealing method for constrained continuous global optimization. Journal of Global Optimization, 35(4), 521–549.
He, Q., & Wang, L. (2006). An effective co-evolutionary particle swarm optimization for engineering optimization problems. Engineering Applications of Artificial Intelligence, 20, 89–99.
He, Q., & Wang, L. (2007). A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Applied Mathematics and Computation, 186, 1407–1422.
Holland, J. H. (1975). Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. Ann Arbor: University of Michigan Press.
Houck, C. R., Joines, J. A., & Kay, M. G. (1996). Comparison of genetic algorithms, random start, and two-opt switching for solving large location–allocation problems. Computers and Operations Research, 23(6), 587–596.
Houck, C. R., Joines, J. A., & Wilson, J. R. (1997). Empirical investigation of the benefits of partial Lamarckianism. Evolutionary Computation, 5(1), 31–60.
Hu, X., & Eberhart, R. C. (2001). Tracking dynamic systems with PSO: Where's the cheese? In Proceedings of the workshop on particle swarm optimization, Indianapolis, USA.
Kannan, B. K., & Kramer, S. N. (1994). An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 116, 318–320.
Kazuhiro, I., Shinji, N., & Masataka, Y. (2006). Hybrid swarm optimization techniques incorporating design sensitivities. Transactions of the Japan Society of Mechanical Engineers, 72(719), 2264–2271.
Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the IEEE international conference on neural networks, Piscataway, USA (pp. 1942–1948).
Kennedy, J., & Eberhart, R. C. (1997). A discrete binary version of the particle swarm algorithm. In Proceedings of the IEEE international conference on systems, man, and cybernetics (Vol. 5, pp. 4104–4108).
Kirkpatrick, S., Gelatt, C., & Vecchi, M. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.
Koziel, S., & Michalewicz, Z. (1999). Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7, 19–44.
Lasdon, L. S., Waren, A. D., Jain, A., & Ratner, M. (1978). Design and testing of a generalized reduced gradient code for nonlinear programming. ACM Transactions on Mathematical Software, 4(1), 34–49.
Lee, K. S., & Geem, Z. W. (2005). A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice. Computer Methods in Applied Mechanics and Engineering, 194, 3902–3933.
Michalewicz, Z. (1992). Genetic algorithms + data structures = evolution programs. New York: Springer-Verlag.
Microsoft (1995). Microsoft Excel – Visual Basic for applications. Washington: Microsoft Press.
OTC (2009). Optimization Technology Center's web site (online) (accessed 31.03.2009).
Perez, R. E., & Behdinan, K. (2007). Particle swarm approach for structural design optimization. Computers and Structures, 85, 1579–1588.
Rosen, E. M. (1997). Visual Basic for applications, Add-Ins and Excel 7.0. CACHE News, 45, 1–3.
Runarsson, T. P., & Yao, X. (2000). Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4(3), 284–292.
Runarsson, T. P., & Yao, X. (2005). Search biases in constrained evolutionary optimization. IEEE Transactions on Systems, Man and Cybernetics, 35(2), 233–243.
Salerno, J. (1997). Using particle swarm optimization technique to train a recurrent neural model. In Proceedings of the ninth IEEE international conference on tools with artificial intelligence, USA (pp. 45–49).
Salman, A., Ahmad, I., & Al-Madani, S. (2002). Particle swarm optimization for task assignment problem. Microprocessors and Microsystems, 26, 363–371.
Shannon, M. W. (1998). Evolutionary algorithms with local search for combinatorial optimization. PhD thesis, University of California, San Diego.
Shi, Y., & Eberhart, R. C. (1998). Parameter selection in particle swarm optimization. In Evolutionary programming VII, Lecture Notes in Computer Science (Vol. 1447). Berlin: Springer.
Slade, W. H., Ressom, H. W., Musavi, M. T., & Miller, R. L. (2004). Inversion of ocean color observations using particle swarm optimization. IEEE Transactions on Geoscience and Remote Sensing, 42(9), 1915–1923.
Stokes, L., & Plummer, J. (2004). Using spreadsheet solvers in sample design. Computational Statistics and Data Analysis, 44(3), 527–546.
Tandon, V. (2000). Closing the gap between CAD/CAM and optimized CNC end milling. MSc thesis, Purdue School of Engineering and Technology, Indianapolis, USA.
Tandon, V., El-Mounayri, H., & Kishawy, H. (2002). NC end milling optimization using evolutionary computation. International Journal of Machine Tools and Manufacture, 42(5), 595–605.
Van den Bergh, F., & Engelbrecht, A. P. (2000). Cooperative learning in neural networks using particle swarm optimizers. South African Computer Journal, 26, 84–90.
Venter, G., & Sobieszczanski-Sobieski, J. (2004). Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Structural and Multidisciplinary Optimization, 26(1–2), 121–131.
Victoire, T. A., & Jeyakumar, A. E. (2004). Hybrid PSO–SQP for economic dispatch with valve-point effect. Electric Power Systems Research, 71(1), 51–59.
Wikipedia (2009). The free encyclopedia web site (online) (accessed 31.03.2009).
Yoshida, H., Fukuyama, Y., Takayama, S., & Nakanishi, Y. (1999). A particle swarm optimization for reactive power and voltage control in electric power systems considering voltage security assessment. In Proceedings of the IEEE international conference on systems, man, and cybernetics, Tokyo, Japan (pp. 497–502).
Zahara, E., & Hu, C. H. (2008). Solving constrained optimization problems with hybrid particle swarm optimization. Engineering Optimization, 40(11), 1031–1049.
Zahara, E., & Kao, Y. T. (2009). Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Systems with Applications, 36, 3880–3886.