
How to cite this paper: Shehab, M., Khader, A. T., & Laouchedi, M. (2018). A hybrid method based on cuckoo search algorithm for global optimization problems. Journal of Information and Communication Technology, 17 (3), 469-491.

A HYBRID METHOD BASED ON CUCKOO SEARCH ALGORITHM FOR GLOBAL OPTIMIZATION PROBLEMS

Mohammad Shehab¹, Ahamad Tajudin Khader¹ & Makhlouf Laouchedi²
¹School of Computer Sciences, Universiti Sains Malaysia, Malaysia
²Université des Sciences et de Technologies Houari Boumediene, Algeria

[email protected]; [email protected]; [email protected] ABSTRACT Cuckoo search algorithm is considered one of the promising metaheuristic algorithms applied to solve numerous problems in different fields. However, it undergoes the premature convergence problem for high dimensional problems because the algorithm converges rapidly. Therefore, we proposed a robust approach to solve this issue by hybridizing optimization algorithm, which is a combination of Cuckoo search algorithmand Hill climbing called CSAHC discovers many local optimum traps by using local and global searches, although the local search method is trapped at the local minimum point. In other words, CSAHC has the ability to balance between the global exploration of the CSA and the deep exploitation of the HC method. The validation of the performance is determined by applying 13 benchmarks. The results of experimental simulations prove the improvement in the efficiency and the effect of the cooperation strategy and the promising of CSAHC. Keywords: Cuckoo search algorithm, Hill climbing, optimization problems, slow convergence, exploration and exploitation. Received: 3 August 2017

Accepted: 14 May 2018


INTRODUCTION

Optimization resides in many domains, such as engineering, energy, economics, medicine, and computer science (Mustaffa, Yusof, & Kamaruddin, 2013). It is mainly concerned with finding the optimal values of several decision variables to form a solution to an optimization problem. This solution is considered optimal when the decision maker is satisfied with it (Hasan, Quo, & Shamsuddin, 2012). Each optimization problem is the minimization or maximization of a suitable decision-making algorithm normally adapted to the approximation methods. The principle of decision making entails choosing between several alternatives; the result of this choice is the selection of the best decision from all choices (Mohammed, Khader, & Al-Betar, 2016). Optimization algorithms developed based on nature-inspired ideas deal with selecting the best alternative in the sense of the given objective function.

An optimization algorithm can be either a heuristic or a metaheuristic approach. Heuristic approaches are problem-designed approaches, where each optimization problem has its own heuristic methods that are not applicable to other kinds of optimization problems. The metaheuristic-based algorithm, by contrast, is a general solver template that can be adapted to various kinds of optimization problems by properly tweaking its operators and configuring its parameters (Hasan, Quo, & Shamsuddin, 2012). As shown in Figure 1, optimization algorithms can be categorized into three classes: evolutionary algorithms (EAs), swarm-based algorithms, and trajectory-based algorithms. Examples of EAs include genetic algorithms (GAs) (Holland, 1975), genetic programming (GP) (Koza, 1994), and differential evolution (DE) (Storn & Price, 1996). Examples of swarm-based algorithms include artificial bee colony (ABC) (Karaboga, 2005), particle swarm optimization (PSO) (James & Russell, 1995), and the cuckoo search algorithm (CSA) (Yang & Deb, 2009). Examples of trajectory-based algorithms include tabu search (TS) (Glover, 1977), simulated annealing (SA) (Kirkpatrick, Gelatt, & Vecchi, 1983), and hill climbing (Schaerf & Meisels, 1999).

Figure 1. Optimization algorithms (Shehab, Khader, & Al-Betar, 2017).

The performance of population-based algorithms is measured by checking their ability to establish a proper trade-off between exploration and exploitation. An algorithm with a weak balance between exploration and exploitation is more likely to suffer from trapping in local optima, premature convergence, and stagnation (Shehab, Khader, & Al-Betar, 2016). A population-based search algorithm is normally very powerful in exploring several regions of the problem search space. However, it has difficulty in determining the local optima within each region. By contrast, the deep searching of a local search-based algorithm is very efficient in a single search space region but not across several search space regions (McMinn, 2004). Thus, it is sometimes very beneficial to hybridize a local and a population search-based method to complement their advantages in a single optimization framework. Through such hybridization, the search can strike a balance between wide-ranging exploration and nearby exploitation of the problem search space. In this context, the CSA has been hybridized with other local search-based algorithms to improve its performance in tackling complex optimization problems.

The linear least squares problem was solved by a hybrid of the Newton method (NM) and the CSA called CSANM (Abdel-Baset & Hezam, 2016). The authors benefited from the CSA for fast convergence and global search, as well as from the NM for its strong local search ability. The experimental results showed the convergence efficiency and computational accuracy of CSANM in comparison with the basic CSA and HS based on NM (HSNM). A novel CSA based on the Gauss distribution (GCSA) was proposed by Zheng et al. (2012). In the basic CSA, although it finds the optimum solution, the search entirely depends on random walks.


By contrast, fast convergence and precision cannot be guaranteed. For this purpose, GCSA was introduced to remedy the low convergence rate of the basic CSA. GCSA has been applied to solve standard test functions and engineering design optimization problems. The obtained results showed that GCSA proved its efficiency by achieving better solutions compared with the basic CSA.

Wang et al. (2016) proposed a hybrid algorithm that combined the CSA and HS (HS/CSA) for continuous optimization problems. In the HS/CSA method, the pitch adjustment of HS was used to update the process of the CSA, which leads to increased population diversity. An improved elitism scheme was used to retain the best individuals in the cuckoo population as well. The performance of HS/CSA was evaluated by testing a set of benchmark functions. The obtained results showed that HS/CSA achieved better outcomes in comparison with ACO, PSO, GA, HS, DE, and the basic CSA.

Quadratic assignment problems (QAPs) are considered to be NP-hard problems, which cannot be easily solved by exact methods. Therefore, Dejam et al. (2012) proposed a hybrid algorithm combining the CSA with TS (i.e., CSA-TS) to solve QAPs. In their research, the QAPs were initially tackled using the CSA. Thereafter, TS was applied, focusing on the local search to increase the optimization precision. The experimental results indicated that the proposed algorithm performs better than ABC and GA.

In this work, a new hybrid optimization approach is developed by hybridizing the cuckoo search algorithm with hill climbing to solve global optimization problems. The proposed approach is evaluated on thirteen benchmark functions carefully selected from the literature. Experimental results demonstrate that CSAHC performs better than Krill Herd (KH) (Gandomi & Alavi, 2012), Harmony Search (HS) (Geem, Kim, & Loganathan, 2001), the Bat Algorithm (BA) (Yang, 2010a), GA, and the basic CSA.

The paper is organized as follows. The next section describes the CSA and HC in brief. The Proposed Methodology section presents the CSAHC approach in detail. Subsequently, our method is evaluated on 13 benchmarks and compared with five methods in the Experimental Results Analysis section. Finally, the conclusion and future works are given in the last section.

PRELIMINARY

Cuckoo Search Algorithm

The use of the CSA in the optimization context was proposed by Yang and Deb (2009). To date, work on this algorithm has significantly increased, and the CSA has succeeded in having its rightful place among other optimization methodologies (Fister Jr., Yang, Fister, & Fister, 2014).


This algorithm is based on the obligate brood parasitic behavior found in some cuckoo species, in combination with the Levy flight behavior discovered in some birds and fruit flies. The CSA is an efficient metaheuristic swarm-based algorithm that strikes a balance between local nearby exploitation and global wide exploration in the problem search space (Shehab, Khader, & Laouchedi, 2017). The cuckoo has a specific way of laying its eggs that distinguishes it from the rest of the birds (Yang & Deb, 2014). The following three idealized rules clarify and describe the standard cuckoo search:

o Each cuckoo lays one egg at a time and dumps it in a randomly chosen nest.
o The best nests with high-quality eggs will be carried over to the next generations.
o The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability Pα ∈ (0,1). In this case, the host bird can either get rid of the egg or simply abandon the nest and build a completely new nest. In addition, probability Pα can be used by the n host nests to replace the new nests.

1: Objective function f(X), X = (x1, x2, ..., xd)ᵀ
2: Generate initial population of n host nests Xi (i = 1, 2, ..., n)
3: While t < Max_iterations do
4:   Get a cuckoo randomly by Levy flights
5:   Evaluate its quality/fitness Fi
6:   Choose a nest among n (say, j) randomly
7:   If Fi > Fj then
8:     Replace j by the new solution;
9:   End If
10:  A fraction (Pa) of worse nests are abandoned and new ones are built;
11:  Keep the best solutions
12:  Rank the solutions and find the current best
13: End While
14: Postprocess results and visualization

Figure 2. Pseudo code of the Cuckoo Search Algorithm
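To make the steps of Figure 2 concrete, the following is a minimal Python sketch of the CSA loop. It is an illustrative reading of the pseudo code, not the authors' implementation (the paper's experiments use MATLAB): the function names, the parameter values, the sphere objective, and the use of Mantegna's algorithm to sample the Levy steps are all assumptions made here for the sketch.

import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    # Mantegna's algorithm, a common way to sample Levy-flight steps;
    # the paper itself only gives the power-law form (Equation (2) below).
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=30, n_nests=25, pa=0.25, alpha=0.01,
                  lower=-10.0, upper=10.0, max_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lower, upper, (n_nests, dim))           # line 2
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(max_iter):                                   # line 3
        i = rng.integers(n_nests)                               # lines 4-5
        new = np.clip(nests[i] + alpha * levy_step(dim, rng), lower, upper)
        j = rng.integers(n_nests)                               # line 6
        if f(new) < fit[j]:             # minimizing: lower cost = fitter
            nests[j], fit[j] = new, f(new)                      # lines 7-9
        worst = np.argsort(fit)[-int(pa * n_nests):]            # line 10
        nests[worst] = rng.uniform(lower, upper, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    best = np.argmin(fit)                                       # lines 11-12
    return nests[best], fit[best]

# Usage with a placeholder objective (sphere function):
x_best, f_best = cuckoo_search(lambda x: float(np.sum(x ** 2)))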



Figure 2 shows the pseudo code of the CSA search process. Similar to other swarm-based algorithms, the CSA starts with an initial population of n host nests. These initial host nests will be randomly attracted by the cuckoos with eggs and also by random Levy flights to lay the eggs. Thereafter, nest quality will be evaluated and compared with another random host nest. In case the host nest is better, it will replace the old host nest. This new solution has the egg laid by a cuckoo. If the host bird discovers the egg with a probability Pα ∈ (0,1), the host either throws out the egg or abandons the nest and builds a new one. This step is done by replacing the abandoned solutions with new random solutions.

Yang and Deb used a simple representation for the implementation, with each egg representing a solution. As the cuckoo lays only one egg, it also represents one solution. The purpose is to increase the diversity of new, and probably better, cuckoos (solutions) and replace the worst solutions with them. By contrast, the CSA can be made more complicated by using multiple eggs in each nest to represent a set of solutions.

The CSA, like the bat algorithm (Yang, 2010a) and the firefly algorithm (FA) (Yang, 2010b), maintains a balance between exploration and exploitation, which it achieves through the integration of Levy flights. When generating a new solution $x^{t+1}$ for, say, a cuckoo $i$, a Levy flight is performed:

$$x_i^{t+1} = x_i^t + \alpha \oplus \text{Levy}(\lambda) \qquad (1)$$

where $\alpha > 0$ is the step size, which should be related to the scales of the problem of interest. In most cases, we can use $\alpha = 1$. The $x_i^t$ in the above equation represents the current location, which is the only way to determine the next location $x_i^{t+1}$. This is called a random walk or Markov chain. The product $\oplus$ means entry-wise multiplication. This entry-wise product is similar to the one used in PSO, but here the random walk via Levy flight is more efficient in exploring the search space, as its step length is much longer in the long run. A global explorative random walk by using Levy flights can be expressed as follows:

$$\text{Levy} \sim u = t^{-\lambda}, \quad 1 < \lambda \leq 3 \qquad (2)$$

where $\lambda$ is a parameter which is the mean or expectation of the occurrence of the event during a unit interval. Here the steps essentially form a random walk process with a power-law step-length distribution with a heavy tail. Some of the new solutions should be generated by a Levy walk around the best solution obtained so far; this will speed up the local search. However, a substantial fraction of the new solutions should be generated by far-field randomization, with locations far enough from the current best solution; this will make sure the system will not be trapped in a local optimum.
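Equation (2) only pins down the power-law form; it does not say how Levy(λ) should be sampled in practice. The short sketch below, using the same Mantegna sampler assumed in the earlier CSA sketch (with placeholder sample size and β = λ − 1 = 1.5), illustrates the heavy-tail claim: typical Levy steps are comparable to Gaussian ones, but the extreme quantiles are far larger.

import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)
beta = 1.5      # tail index; corresponds to lambda = 1 + beta = 2.5 in Eq. (2)
n = 100_000

# Mantegna's sampler (an assumption; Eq. (2) fixes only the power law).
sigma = (gamma(1 + beta) * sin(pi * beta / 2)
         / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
levy = rng.normal(0.0, sigma, n) / np.abs(rng.normal(0.0, 1.0, n)) ** (1 / beta)
gauss = rng.normal(0.0, 1.0, n)

# The occasional very long jump in the Levy walk is what drives
# global exploration; a Gaussian walk essentially never makes one.
print(np.quantile(np.abs(levy), [0.5, 0.99, 0.999]))
print(np.quantile(np.abs(gauss), [0.5, 0.99, 0.999]))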

Hill Climbing

Hill Climbing (HC) is a mathematical optimization technique which belongs to the family of local search (Schaerf & Meisels, 1999). It searches for a better solution in the neighborhood by evaluating the current state. If the current state is also a goal state, then return it and quit. Otherwise, continue updating the current state, if possible. Then, loop until a solution is found or until there are no new operators left to be applied in the current state. Inside the loop there are two steps. The first step: select an operator that has not yet been applied to the current state and apply it to produce a new state. The second step: evaluate the new state. Figure 3 shows the pseudo-code of the HC algorithm, which demonstrates the simplicity of hill climbing.

Based on the above, the basic idea in HC is to always head towards a state which is better than the current one, so it always improves the quality of a solution (Burke & Newall, 2002).

1: i = initial solution
2: While f(s) ≤ f(i) for s ∈ Neighbours(i) do
3:   Generate an s ∈ Neighbours(i);
4:   If fitness(s) > fitness(i) then
5:     Replace i with s;
6:   End If
7: End While

Figure 3. Pseudo code of the Hill Climbing method
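As a companion to Figure 3, here is a minimal Python sketch of hill climbing on a continuous problem. It is an illustration, not the paper's code: the Gaussian coordinate-perturbation neighbourhood, the step size, the stopping rule, and the sphere objective are all placeholder assumptions.

import numpy as np

def hill_climbing(f, x0, step=0.1, max_no_improve=100, seed=0):
    # Greedy local search for minimization (Figure 3 is phrased as
    # maximizing fitness; the logic is the same with the comparison flipped).
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    stall = 0
    while stall < max_no_improve:               # loop until neighbours stop improving
        s = x + rng.normal(0.0, step, x.shape)  # generate an s in Neighbours(i)
        fs = f(s)
        if fs < fx:                             # better neighbour found:
            x, fx, stall = s, fs, 0             # replace i with s
        else:
            stall += 1
    return x, fx

# Usage with a placeholder objective (sphere function):
x_best, f_best = hill_climbing(lambda x: float(np.sum(x ** 2)), x0=np.ones(5))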

HC has some advantages, such as that it can easily be adjusted to the problem at hand: almost any aspect of the algorithm may be changed and customized. It can also be used in continuous as well as discrete domains (Alajmi et al., 2011; Rubio & Gámez, 2011).

THE PROPOSED METHODOLOGY: CSA-HILL CLIMBING

Based on the introduction of the CSA and HC in the previous sections, this section provides a detailed description of the proposed cuckoo search algorithm with hill climbing (CSAHC).


The CSA is based on the obligate brood parasitic behavior found in some cuckoo species, in combination with the Levy flight, which is a type of random walk that has a power-law step-length distribution with a heavy tail. It is inspired by the behavior discovered in some birds and fruit flies (Yang & Deb, 2009). The Levy flight is used for global exploration and has proved its efficiency by achieving good results (Pavlyukevich, 2007; Yang & Deb, 2013). Thus, the CSA is considered an efficient metaheuristic swarm-based algorithm that strikes a balance between local nearby exploitation and global wide exploration in the problem search space (Roy & Chaudhuri, 2013b). However, it sometimes exploits solutions poorly, with slow convergence. For that reason, the proposed algorithm improves the search ability of the basic CSA by combining it with the HC method for deeper exploitation; the resulting CSAHC algorithm is used to optimize the benchmark functions (refer to Figure 4).

Figure 4. Flowchart of the CSAHC algorithm: initialize a population of n host nests Xi; get a cuckoo i randomly by Levy flight and evaluate its fitness F(i); select a nest j among the n host nests randomly; take i as the solution if F(i) ≥ F(j), otherwise j; calculate the neighbouring nests and move to the maximum neighbour until none is larger than the current one, i.e., a local maximum is found; abandon a fraction Pa of the worst nests and build new ones at new locations via Levy flight; keep the current best; repeat while t ≤ MaxIteration, then rank the solutions, find the best, and end.

CSAHC starts the search by applying the standard cuckoo search method for a given number of iterations. The best-obtained solution is then passed to the HC to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. HC is an iterative algorithm that starts with an arbitrary solution to a problem and subsequently attempts to determine a better solution by incrementally changing a single element of the solution. When the change produces a better solution, an incremental change is performed on the new solution, and this is repeated until no further improvements can be found. CSAHC then returns the solution to the CSA to check it through the fraction probability Pα.
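Putting the two components together, below is a minimal Python sketch of this cooperation, combining the CSA and HC pieces from the earlier sketches. It is one plausible reading of the loop described above and in Figure 4 (here HC refines the current best nest inside the main loop, and the refined solution re-enters the population subject to the abandonment fraction); the authors' MATLAB implementation and exact HC scheduling are not shown in the text, and all parameter values are placeholders.

import numpy as np
from math import gamma, sin, pi

def levy_step(dim, rng, beta=1.5):
    # Mantegna's sampler for Levy steps (an assumption, as in the earlier sketches).
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0.0, sigma, dim) / np.abs(rng.normal(0.0, 1.0, dim)) ** (1 / beta)

def hc_refine(f, x, fx, rng, step=0.05, tries=50):
    # Deep exploitation: greedy neighbourhood search around one solution.
    for _ in range(tries):
        s = x + rng.normal(0.0, step, x.shape)
        fs = f(s)
        if fs < fx:
            x, fx = s, fs
    return x, fx

def csahc(f, dim=30, n_nests=25, pa=0.25, alpha=0.01,
          lower=-10.0, upper=10.0, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(lower, upper, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(max_iter):
        # Global exploration: one CSA move via a Levy flight.
        i = rng.integers(n_nests)
        new = np.clip(nests[i] + alpha * levy_step(dim, rng), lower, upper)
        j = rng.integers(n_nests)
        if f(new) < fit[j]:
            nests[j], fit[j] = new, f(new)
        # Hand the best solution so far to HC, then return it to the CSA
        # population, where it remains subject to the abandonment step.
        b = np.argmin(fit)
        nests[b], fit[b] = hc_refine(f, nests[b], fit[b], rng)
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        worst = np.argsort(fit)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lower, upper, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    b = np.argmin(fit)
    return nests[b], fit[b]

x_best, f_best = csahc(lambda x: float(np.sum(x ** 2)))  # placeholder objective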

THE EXPERIMENTAL RESULTS ANALYSIS

In this section, the proposed CSAHC was tested through an array of experiments. For testing purposes, we implemented the original version of the CSA and compared the results of CSAHC with those of other methods; this comparison is shown in the tables within this section. All the experiments were conducted on a computer with an Intel(R) Core(TM) i7-6700K CPU at 4.00 GHz, 16 GB of RAM, and 64-bit Microsoft Windows 10 Pro. The source code was implemented in MATLAB (R2015a).

Benchmark Functions

To test the performance of CSAHC, 13 well-known benchmark functions are used for comparison. Table 1 describes these benchmark functions in terms of the optimum solution after a predefined number of iterations and the rate of convergence to the optimum solution. Further information about all the benchmark functions can be found in Yao, Liu, and Lin (1999), Simon (2008), and Jamil and Yang (2013).

Table 1

Benchmark Functions

Symbol  Function            Definition
F1      Ackley              $f(\bar{x}) = 20 + e - 20e^{-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}} - e^{\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)}$
F2      Griewank            $f(\bar{x}) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$
F3      Penalty #1          $f(\bar{x}) = \frac{\pi}{30}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + 0.25(x_i + 1)$
F4      Penalty #2          $f(\bar{x}) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$
F5      Quartic with noise  $f(\bar{x}) = \sum_{i=1}^{n} i\,x_i^4 + U(0, 1)$
F6      Rastrigin           $f(\bar{x}) = 10n + \sum_{i=1}^{n}\left(x_i^2 - 10\cos(2\pi x_i)\right)$
F7      Rosenbrock          $f(\bar{x}) = \sum_{i=1}^{n-1}\left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$
F8      Schwefel 2.26       $f(\bar{x}) = 418.9829 \times D - \sum_{i=1}^{D} x_i \sin\left(\sqrt{|x_i|}\right)$
F9      Schwefel 1.2        $f(\bar{x}) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$
F10     Schwefel 2.22       $f(\bar{x}) = \sum_{i=1}^{n}|x_i| + \prod_{i=1}^{n}|x_i|$
F11     Schwefel 2.21       $f(\bar{x}) = \max_i\{|x_i|,\ 1 \leq i \leq n\}$

Table 2 shows that CSAHC performs the best on 11 of the 13 benchmarks, namely F1-F4, F6-F10, and F12-F13. The CSA is the second most effective, performing the best on benchmarks F1-F2, F4-F5, and F13, followed by GA, KH, BA, and HS, respectively. Table 3 illustrates the averages of the results, where it can be observed that CSAHC is the most effective at determining the objective function minimum on 10 of the 13 benchmarks: F2-F4, F6-F9, and F11-F13. The CSA and GA are the second most effective: the CSA performs the best on benchmarks F4-F5, F10, and F13, and the GA on F2, F11-F12, and F13, followed by KH, BA, and HS, respectively.
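As a usage illustration, the sketch below implements two of the Table 1 definitions (F1 Ackley and F6 Rastrigin) in Python, in a form that can be passed directly to the earlier csahc sketch; the dimension used in the sanity check is a placeholder assumption.

import numpy as np

def ackley(x):       # F1 in Table 1; global minimum f(0) = 0
    n = len(x)
    return (20.0 + np.e
            - 20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n))

def rastrigin(x):    # F6 in Table 1; global minimum f(0) = 0
    return 10.0 * len(x) + float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))

# Both evaluate to 0 (up to floating-point error) at the known optimum:
print(ackley(np.zeros(30)), rastrigin(np.zeros(30)))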