J Supercomput (2014) 70:930–945 DOI 10.1007/s11227-014-1268-9

A new hybrid combinatorial genetic algorithm for multidimensional knapsack problems Guoming Lai · Dehui Yuan · Shenyun Yang

Published online: 30 July 2014 © Springer Science+Business Media New York 2014

Abstract The multidimensional knapsack problem (MKP) is known to be NP-hard, more specifically NP-complete, and no polynomial-time algorithm for it is known. The MKP arises in many management, industry and engineering fields, such as cargo loading, capital budgeting and resource allocation. In this article, using a combined permutation constructed from the convex combination M_j = (1 − λ)u_j + λx_j^LP of the pseudo-utility ratios u_j of the MKP and the optimal solution x^LP of the relaxed LP, we present a new hybrid combinatorial genetic algorithm (HCGA) for multidimensional knapsack problems. Compared to Chu's GA (J Heuristics 4:63–86, 1998), empirical results show that our new heuristic algorithm HCGA obtains better solutions over 270 standard test problem instances.

Keywords Multidimensional knapsack problem · Genetic algorithm · Integer linear programming · Hybrid combinatorial genetic algorithm

G. Lai
Computer Engineering Technical College, Guangdong Institute of Science and Technology, Zhuhai 519090, China
e-mail: [email protected]

D. Yuan
Department of Mathematics, Hanshan Normal University, Chaozhou 521041, Guangdong, China

S. Yang (B)
Department of Computer Science and Engineering, Hanshan Normal University, Chaozhou 521041, China
e-mail: [email protected]


1 Introduction

The knapsack problem is a well-known combinatorial optimization problem that has been studied for more than a century. It can be described as follows: given a set of items, each with a weight and a value, and a knapsack of fixed capacity, determine which items to include in the knapsack so that the total value is as large as possible while the total weight does not exceed the knapsack capacity.

Many variations of the knapsack problem appear in real-world decision-making processes across a wide variety of fields. The main variations arise by changing some problem parameters, such as the number of items, the number of objectives, or the number of knapsacks. The multidimensional knapsack problem (MKP) is one such variant, which has a set of resource constraints instead of a single capacity limit. The MKP first appeared in the context of capital budgeting. Since then, many practical problems have been stated as MKPs, such as cargo loading, project selection, resource allocation in computer networks, capital budgeting, and cutting stock problems. Recently, the MKP has been used to model the daily management of a remote sensing satellite such as SPOT, which consists in deciding each day which photographs will be attempted the next day [1].

This paper is organized as follows. The next section formulates the multidimensional knapsack problem mathematically. Section 3 reviews related work on the MKP. Section 4 presents basic elements of the genetic algorithm (GA) as applied to the MKP. In Sect. 5, the new hybrid combinatorial GA is introduced in detail. Empirical results of our HCGA and comparisons with other algorithms on standard test problems follow in Sect. 6. Finally, conclusions and remarks on further work are given in Sect. 7.

2 The definition of the multidimensional knapsack problem

The multidimensional knapsack problem is described as follows. Given n items with profits p_j > 0 and m resources with capacities b_i > 0, the 0–1 decision variables x_j indicate which items are selected. Each item j consumes an amount w_ij > 0 of each resource i. The goal of the MKP is to select a subset of the items with maximum total profit, see (1); the chosen items must, however, not exceed the resource capacities: these are the knapsack constraints, see (2). Mathematically, the MKP can be formulated as the following integer linear program (ILP):

(MKP)   max z = Σ_{j=1}^n p_j x_j                          (1)

subject to   Σ_{j=1}^n w_ij x_j ≤ b_i,   i = 1, …, m,      (2)

             x_j ∈ {0, 1},   j = 1, …, n.                  (3)
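To make the formulation concrete, a small instance can be solved by exhaustive enumeration over all 2^n item subsets. The sketch below is for illustration only (the instance data are made up, and enumeration is only viable for small n):

```python
from itertools import product

def solve_mkp_bruteforce(p, w, b):
    """Exhaustively solve a small MKP instance.

    p: profits (length n), w: m x n consumption matrix, b: capacities (length m).
    Returns (best profit, best 0-1 vector). Only viable for small n (2^n subsets).
    """
    n, m = len(p), len(b)
    best_z, best_x = 0, [0] * n
    for x in product((0, 1), repeat=n):
        # keep x only if every resource constraint (2) is satisfied
        if all(sum(w[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m)):
            z = sum(p[j] * x[j] for j in range(n))
            if z > best_z:
                best_z, best_x = z, list(x)
    return best_z, best_x

# Toy instance (made up for illustration): n = 4 items, m = 2 constraints.
p = [10, 7, 12, 8]
w = [[3, 2, 4, 1],   # consumption of resource 1
     [2, 4, 1, 3]]   # consumption of resource 2
b = [6, 6]
```

The heuristics discussed in the rest of the paper exist precisely because this enumeration is hopeless for the benchmark sizes n ∈ {100, 250, 500}.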


For the MKP, once condition (3) is replaced by the following condition (4),

x_j ∈ [0, 1],   j = 1, …, n,                               (4)

we get a linear program, which relaxes the 0–1 variables to real numbers in the interval [0, 1]; we denote this relaxation by LP.
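For the special case m = 1, the optimum of this LP relaxation has a simple closed greedy form (the Dantzig bound): sort items by profit density and fill the knapsack, taking a fractional amount of the first item that no longer fits. A sketch under that m = 1 assumption (the function name is ours; for general m an LP solver is required):

```python
def dantzig_lp_bound(p, w, b):
    """LP-relaxation optimum for the single-constraint (m = 1) knapsack.

    Sort items by profit density p_j / w_j; fill greedily, taking a fractional
    amount of the first item that no longer fits. This greedy value equals the
    LP optimum only when m = 1; it merely illustrates relaxation (4).
    """
    order = sorted(range(len(p)), key=lambda j: p[j] / w[j], reverse=True)
    z, cap = 0.0, float(b)
    x = [0.0] * len(p)
    for j in order:
        take = min(1.0, cap / w[j])   # fractional amounts are allowed by (4)
        x[j] = take
        z += take * p[j]
        cap -= take * w[j]
        if cap <= 0:
            break
    return z, x
```

The LP optimum is an upper bound on the ILP optimum, which is how it is used for the percentage gaps reported in Sect. 6.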

3 Related work

There is a substantial literature giving overviews of the MKP [4–9,31,34]. A comprehensive overview of practical and theoretical results for the MKP can be found in the monograph on knapsack problems by Kellerer et al. [34]. Hanafi and Freville [4] give a review of tabu search for the 0–1 MKP covering the literature published up to 1996, and then describe a new tabu search approach based on strategic oscillation and surrogate constraint information for the 0–1 MKP. Chu and Beasley [31] also review the MKP literature published up to 1996. They cluster heuristic algorithms for the MKP into early heuristics, bound-based heuristics, tabu search heuristics, genetic algorithm heuristics, analyzed heuristics and other heuristics. Gottlieb [6] classifies evolutionary algorithms for the MKP by the search spaces explored by the population. These classes include direct search in S (= {0, 1}^n), direct search in F (the feasible region), direct search in B (the boundary of the feasible region) and indirect search approaches. Freville and Hanafi [9] survey the principal results published in the literature concerning both the problem's theoretical properties and its approximate or exact solution. This survey focuses on the more recent results, for example those relevant to surrogate and composite duality, new preprocessing approaches creating enhanced versions of leading commercial software, and efficient meta-heuristic-based methods. Mahajan and Chopra [11] analyze several algorithm design paradigms applied to the 0/1 knapsack problem, with the objective of analyzing how techniques such as dynamic programming, greedy algorithms and genetic algorithms affect performance. In general, the approaches for the MKP can be classified into exact methods and heuristic ones.
The development of exact algorithms began several decades ago, and a variety of solution methods have been presented since then, including dynamic programming and its variants, a branch-and-bound network approach, as well as special enumeration techniques and reduction schemes. However, these exact techniques can only solve small-to-moderately sized instances. For MKPs with a large number of constraints, heuristic methods are competitive alternatives when memory and time consumption are considered, and many papers have addressed heuristic approaches; see [2–10,13–23]. Heuristic algorithms, such as genetic algorithms (GAs), are suitable for solving larger knapsack problems, see [24–32], and general 0–1 integer programming problems [33]. To our knowledge, the method currently yielding the best heuristic results, at least for commonly used benchmark instances, is a tabu-search linear-programming-based approach described by Vasquez and Hao [2]. It was later refined by Vasquez and Vimont [3]. Khan [12] proposes a new


evolutionary algorithm (EA) for the single-objective 0/1 knapsack problem, which uses a single variation operator called masked mutation.

On the one hand, many researchers have focused on developing good heuristic algorithms for the MKP. Moraga et al. [37] present a promising solution approach called Meta-RaPS for the 0–1 MKP. Meta-RaPS chooses the next item to add to the solution by occasionally choosing a feasible item other than the one with the largest pseudo-utility ratio, whereas a pure greedy rule always chooses the item with the largest pseudo-utility ratio. Meta-RaPS is applied to a single greedy construction heuristic through the use of four user-defined parameters: % priority, % restriction, % improvement and the number of iterations. In Meta-RaPS for the MKP, the % priority parameter defines the percentage of time that the next item added to the knapsack has the largest pseudo-utility ratio. In the remaining time (i.e., 100 % minus % priority), the next item is randomly chosen from those items whose pseudo-utility ratios are within % restriction below the largest pseudo-utility ratio. Next, an improvement heuristic is applied to those constructed solutions that are within % improvement below the best unimproved solution. The Meta-RaPS with dynamic greedy rule heuristic (Meta-RaPS DGR) developed in that paper achieved good results when compared with both the optimal solution and other 0–1 MKP solution techniques such as simulated annealing, tabu search, genetic algorithms, and other 0–1 MKP heuristics. However, when tested against the genetic algorithm of Chu and Beasley on the 270 problem instances described in [31], the performance of Meta-RaPS DGR is not quite as good as that of Chu and Beasley's genetic algorithm or Bertsimas and Demir's approximate dynamic programming for the largest problem sizes [38].

In 2006, Li et al. [39] presented a genetic algorithm based on orthogonal design (OGA), and Alonso et al. [40] proposed a double genetic algorithm with local search (DGALS) for the 0–1 MKP. These two algorithms build on the steady-state GA described in [31]. The OGA [39] uses a different strategy in the crossover step, and the DGALS [40] introduces a different local search procedure. Most importantly, DGALS applies the local search procedure only periodically, each time a number t of generations has been produced, and then applies local search to all individuals in the population. DGALS has been tested against the genetic algorithm of Chu and Beasley [31] over the 270 problem instances described in [31]. The results show that DGALS performs better than Chu and Beasley's genetic algorithm in terms of the average % gaps. Akcay et al. [41] propose another greedy-like heuristic method, which is primarily intended for the general multidimensional knapsack problem but also proves effective for the 0–1 MKP. For the 0–1 MKP, their heuristic differs from existing greedy-like heuristics in that it uses the effective capacity, defined as the maximum number of copies of an item that can be accepted if the entire knapsack is to be used for that item alone. They too test the performance of their heuristic against the genetic algorithm of Chu and Beasley on the 270 problem instances described in [31]. The results show that the genetic algorithm of Chu and Beasley is better than their heuristic, since the latter is mainly tailored for the general MKP, not for the 0–1 MKP. Recently, Kong et al. [42] presented a new ant colony optimization (ACO) approach, called the binary ant system (BAS), for the 0–1 MKP. BAS uses a pheromone laying method specially designed for the binary solution structure, and allows the generation


of infeasible solutions in the solution construction procedure. BAS has been tested against three other ACO-based algorithms, proposed by Leguizamon and Michalewicz [43], Fidanova [44], and Alaya et al. [45], respectively, on 60 instances described in [31]. The BAS algorithm finds all the best solutions, which is not achieved by any of the other three algorithms; however, no experiments were reported on the remaining 210 instances. Puchinger and Raidl [15] present a guided relaxation variable neighborhood search method, which brings order into the neighborhoods. Hanafi and Glover [46] show how the exploitation of nested inequalities and surrogate constraints can be strengthened to give better results. Kong and Tian [47], as well as Hembecker et al. [48], present particle swarm optimization methods for the MKP. Ke et al. [49] present another ant colony optimization (ACO) approach. Hanafi and Wilbaut [50] consider scatter search for the 0–1 multidimensional knapsack problem. Montana et al. [51] propose a method for computing surrogate constraints of linear problems that evolves sets of surrogate multipliers coded in floating point and uses as fitness function the value of the ε-approximate solution of the corresponding surrogate problem. Djannaty and Doostdar [67] propose a hybrid genetic algorithm with a strong initial population and a number of novel penalty functions. Samanta et al. [68] use a new heuristic called the ant weight lifting algorithm (AWL) to solve the 0/1 knapsack problem. Htiouech et al. [69] present a new heuristic for the multidimensional multichoice knapsack problem (MMKP), whose main idea is to explore both sides of the feasibility border by alternating constructive and destructive phases in a strategically oscillating manner.

On the other hand, good computational results for the single knapsack problem can be obtained by combining heuristics with exact methods.
Using the same idea, good experimental results can also be obtained for the MKP. Based on Marsten and Morin's method [52], Gallardo et al. [53] propose a hybridization of an evolutionary algorithm (EA) with the branch and bound method (B&B). Their strategy is that both techniques (EA and B&B) cooperate by exchanging information, namely lower bounds in the case of the EA and partial promising solutions in the case of the B&B. In this way, the EA provides lower bounds that the B&B can use to prune the problem queue, whereas the B&B guides the EA to look into promising regions of the search space. The resulting hybrid algorithm has been tested on large instances of the MKP with encouraging results. Motivated by these encouraging results, many researchers have become interested in exact methods again. Osorio et al. [8] use surrogate analysis and constraint pairing in multidimensional knapsack problems to fix some variables to zero and to separate the rest into two groups, those that tend to be zero and those that tend to be one in an optimal integer solution, and design a preprocessing algorithm for the MKP. Puchinger et al. [54] propose a different variable-fixing method, the core concept method, which is an extension of the classical core concept for the single knapsack problem. To apply dynamic programming (DP) to the MKP, Balev et al. [55] and Boyer et al. [56] present modified dynamic programming algorithms for the MKP. The key to this method is the list representation of DP. For the single knapsack problem, the list representation is based on the step-wise growing nature of the knapsack function and uses a list containing only the points where the value of the knapsack function changes. Balev et al. use a natural generalization of this idea for the MKP. Boyer et al. [56] generate the list with a dominated-states technique. To obtain a good upper


Fig. 1 The framework of the standard GA (flowchart: Begin → Initial Population → Fitness Computation → stop conditions? yes: output optimal result, End; no: Parent Selection → Crossover and Mutation → Repair and Improvement, discarding duplicates → Update Population → Fitness Computation)

bound, Kaparis and Letchford [57] take the whole constraint matrix into account and propose a cutting plane method based on new 'global' lifted cover inequalities (LCIs), while Vimont et al. [58] propose an implicit enumeration that uses a reduced-cost constraint to fix non-basic variables and prune nodes of the search tree. They obtain new optimal solutions never published before, as well as new bounds on the number of items at the optimum for hard instances.

4 Basic definitions about the genetic algorithm

Conceived by John Holland and his colleagues, the GA can be understood as an 'intelligent' probabilistic search algorithm that can be applied to a variety of combinatorial optimization problems. GAs are based on the evolutionary process of biological organisms in nature. Their components include the representation and fitness function of individuals, generation of the initial population, parent selection, crossover and mutation, and the repair and improvement of individuals; see Fig. 1.
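The framework of Fig. 1 can be sketched as a steady-state loop. All names below are illustrative placeholders, not the paper's code; the operators are passed in as functions:

```python
import random

def run_ga(init_pop, fitness, select, crossover, mutate, repair, generations=100):
    """Steady-state GA skeleton: each generation creates one child from two
    parents, repairs it, discards duplicates, and replaces the lowest-fitness
    member of the population with the child."""
    pop = init_pop()
    for _ in range(generations):
        parent1, parent2 = select(pop), select(pop)
        child = repair(mutate(crossover(parent1, parent2)))
        if child not in pop:                      # discard duplicate children
            worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
            pop[worst] = child
    return max(pop, key=fitness)
```

Sections 4.1–4.5 describe concrete choices for each of these operators as used for the MKP.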

4.1 Population representation and fitness function

An MKP solution is generally represented by an n-bit 0–1 binary string, where n is the number of variables in the MKP. In this representation, a value of 0 or 1 at the jth bit of the string implies that x_j = 0 or 1 in the solution, respectively; x_j = 1 indicates that the jth item is selected for the knapsack, and x_j = 0 that it is not. This binary representation of an individual's chromosome (solution) for the MKP is illustrated in Fig. 2. The fitness function is based entirely on the objective function to be maximized:

f(x_1, x_2, …, x_n) = Σ_{j=1}^n p_j x_j,   x_j ∈ {0, 1}.   (5)


Fig. 2 Representation of an MKP solution: the bit string S(j) over items j = 1, …, n, e.g. S = (0, 1, 0, 0, 1, …, 1, 0)

4.2 Generation of the initial population

Good initial populations facilitate a GA's convergence to good solutions, while poor initial populations can hinder convergence. Just as the starting point affects the quality of a gradient-based non-linear optimization algorithm, the initial population can affect the convergence of the MKP solution obtained by a GA. There are two methods of population initialization: random initialization, where the population is produced by sampling at random without any rules, and induced initialization, where the population is produced by using background knowledge and information about the given instance. For the MKP, random initialization is the commonly used method.
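The random initialization used later in this paper (our reading of the primitive primal heuristic, Algorithm 2 in [31]) can be sketched as:

```python
import random

def random_feasible_solution(p, w, b, rng=random):
    """Build one feasible MKP solution: repeatedly pick a random unset variable
    and set it to 1 if the solution stays feasible; stop at the first pick
    that does not fit (cf. Algorithm 2 in [31])."""
    n, m = len(p), len(b)
    x = [0] * n
    used = [0.0] * m                 # current consumption of each resource
    items = list(range(n))
    rng.shuffle(items)
    for j in items:
        if all(used[i] + w[i][j] <= b[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                used[i] += w[i][j]
        else:
            break                    # terminate at the first infeasible pick
    return x
```

By construction every returned string satisfies all m knapsack constraints, so no repair is needed on the initial population.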

4.3 Parent selection

Parent selection is the task of assigning reproductive opportunities to each individual in the population. Typically, in a GA we need to select two parents who will have one or more children. Many methods can be adopted for parent selection, for example tournament selection, roulette wheel selection, ergodic random sampling selection, local selection and their refinements. Since tournament selection can be implemented very efficiently, GAs commonly adopt it for parent selection.
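Binary tournament selection, the variant used later in this paper, is a minimal sketch away: draw two individuals at random and keep the fitter.

```python
import random

def binary_tournament(pop, fitness, rng=random):
    """Binary tournament: draw two individuals at random (with replacement)
    and return the one with the higher fitness."""
    a, b = rng.choice(pop), rng.choice(pop)
    return a if fitness(a) >= fitness(b) else b
```

Its efficiency comes from needing only two fitness evaluations and no global sort or cumulative-probability table, unlike roulette wheel selection.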

4.4 Crossover and mutation

The binary representation of MKP solutions allows a wide range of standard GA crossover and mutation operators to be adopted. Computational experience indicates that the overall performance of GAs is insensitive to the particular choice of crossover operator; therefore, uniform crossover is commonly chosen as the default. This crossover lets two selected parents produce a single child: each bit in the child solution is created by copying the corresponding bit from one or the other parent according to a binary random number. If the random number is 0, the bit is copied from the first parent; if it is 1, the bit is copied from the second parent. Once a child solution has been generated through crossover, a mutation procedure is performed to maintain population diversity and prevent premature convergence. The mutation operator randomly selects some bits in the child solution and flips them from 0 to 1 or vice versa. The mutation rate is generally set to a small value (on the order of 1 or 2 bits per string).
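Uniform crossover and the small fixed-bit mutation described above can be sketched as (function names are ours):

```python
import random

def uniform_crossover(parent1, parent2, rng=random):
    """Uniform crossover producing a single child: each bit is copied from a
    randomly chosen parent (binary random number: 0 -> first, 1 -> second)."""
    return [b1 if rng.random() < 0.5 else b2 for b1, b2 in zip(parent1, parent2)]

def mutate(child, bits=2, rng=random):
    """Flip `bits` randomly chosen positions; the default matches the rate of
    about 2 bits per child string used later in this paper."""
    child = child[:]
    for j in rng.sample(range(len(child)), bits):
        child[j] ^= 1
    return child
```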


4.5 Repair and improvement

Obviously, some child solutions generated by the crossover and mutation procedures may be infeasible, because not all knapsack constraints need be satisfied. Even if a child solution is feasible, it may be possible to transform it into a better one. The aim of the repair and improvement operator is to obtain a feasible solution from an infeasible one and to improve the fitness of a feasible solution. Basically, there are two techniques for dealing with infeasible solutions. One is to incorporate penalty functions into the objective function, see [26–29]; the other is to apply a repair algorithm that transforms each infeasible solution into a feasible one, typically a repair operator based on a greedy algorithm, as described in [31,32,39]. In addition, an improvement algorithm may transform a feasible solution into a better one. Note that the repair and improvement operators rely on a fixed permutation of the n items of the MKP; see [31]. In this article, we use the combined permutation determined by the convex combination values M_j to design a new hybrid combinatorial genetic algorithm (HCGA).

5 A new order and the hybrid GA for the MKP

The method proposed by Chu and Beasley [30,31] to design greedy-like heuristics for the MKP follows the notion of pseudo-utility ratios (from the single-constraint problem), defined as the ratios of the objective function coefficients (the p_j) to the coefficients of the single knapsack constraint. The greater the ratio, the higher the chance that the corresponding variable x_j equals one in the solution. In [31], they adopt the surrogate duality approach of Pirkul [22] to determine the pseudo-utility ratios, as it is conceptually simple and computationally straightforward; see [31] for the details of the pseudo-utility ratios and surrogate duality. Raidl [36] instead proposes a method that makes use of the LP optimal solution x^LP, based on the intuition that a higher x_j^LP value indicates a greater usefulness of setting x_j = 1.

For the GAs, whether in the population initialization phase or in the repair and improvement phase, all methods considered are based on a permutation π : J → J of the items and start with a solution candidate x. The central idea is the procedure Try(x, j), which increases x_j from 0 to 1 if the resulting solution is feasible. Chu and Beasley [30,31] calculate a pseudo-utility ratio for each item (based on shadow prices of the constraints of the LP relaxation) and determine π by sorting the items accordingly. Raidl [36] proposes to use the LP optimum x^LP as the sorting criterion for π and suggests randomly shuffling all items with the same x_j^LP value, which introduces more nondeterminism into π, since there exist many items j with x_j^LP ∈ {0, 1}. It must be remarked that Chu and Beasley never change the original order π, yielding a very effective deterministic repair and improvement procedure, whereas Raidl shuffles the order before repairing and before optimizing, which yields an effective nondeterministic repair and optimization method.
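The Try(x, j) procedure can be sketched as follows; the function name try_set and the full feasibility recheck are ours (an efficient implementation would track resource loads incrementally):

```python
def try_set(x, j, w, b):
    """Try(x, j): set x_j from 0 to 1 if the resulting solution stays feasible
    for every knapsack constraint; return True iff the bit was set."""
    if x[j] == 1:
        return False                 # already set, nothing to try
    x[j] = 1
    feasible = all(sum(w[i][k] * x[k] for k in range(len(x))) <= b[i]
                   for i in range(len(b)))
    if not feasible:
        x[j] = 0                     # revert: the item does not fit
    return feasible
```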


In [31], Chu and Beasley use the permutation determined by the pseudo-utility ratios to design the greedy-like heuristic used in the repair and improvement step. Their repair operator consists of two phases. The first phase (called DROP) examines each variable in increasing order of u_j and changes the variable from one to zero if feasibility is violated. The second phase (called ADD) reverses the process, examining each variable in decreasing order of u_j and changing the variable from zero to one as long as feasibility is not violated. The aim of the DROP phase is to obtain a feasible solution from an infeasible solution, whilst the ADD phase seeks to improve the fitness of a feasible solution. In order to achieve an efficient implementation of the repair operator, a preprocessing routine is applied to each problem that sorts and renumbers the variables in decreasing order of their u_j. The pseudo-code for the repair operator (after this preprocessing has been carried out) is given in [31].

From the results of Chu and Beasley [30,31] and the experiments of Raidl [36], we know that good results can be expected once information from the LP relaxation of the MKP is used. Chu and Beasley used only the pseudo-utility ratios, while Raidl used only the optimal solution x^LP of the LP. Motivated by Raidl and by Chu and Beasley, in this section we propose a new hybrid method that uses both pieces of information: the pseudo-utility ratios of the MKP and the LP relaxation of the MKP. Before describing the new hybrid method, let us normalize the pseudo-utility ratios u_j and the optimal solution x^LP of the LP relaxation, i.e.:

u_j ← u_j / Σ_{k=1}^n u_k,           j ∈ J,
x_j^LP ← x_j^LP / Σ_{k=1}^n x_k^LP,   j ∈ J,

where u_j is defined as p_j / (Σ_{i=1}^m V_i w_ij) and V_i is the shadow price corresponding to the ith constraint of the relaxed LP. For any fixed λ ∈ [0, 1], we now consider the convex combination M_j = (1 − λ)u_j + λx_j^LP of u_j and x_j^LP for each j ∈ J. Obviously, when λ = 0, M_j is the normalized pseudo-utility ratio u_j of [31], whilst for λ = 1, M_j is the jth normalized component of the optimal solution x^LP. To the best of our knowledge, using the permutation determined by the sequence M_j for λ ∈ (0, 1) (called the combined permutation) to design a genetic algorithm has not previously appeared in the literature. Therefore, we address the problem of using a combined permutation to design a GA. Here, we also use the standard GA; see Fig. 1. Replacing the permutation used to design the greedy-like heuristics in the repair step of Chu's work by the combined one, we get a new hybrid combinatorial GA, abbreviated HCGA. Obviously, the difference between our algorithm and Chu's work is the use of a different permutation to design the greedy-like heuristics in the repair step. Note that the combined permutation is constructed using the convex combination value M_j = (1 − λ)u_j + λx_j^LP, which combines the pseudo-utility ratios of the MKP and the optimal solution x^LP of the relaxed LP. Therefore,


we can expect our method to achieve better results than Chu's work when both are used to solve the same multidimensional knapsack problem.

For the standard GA, the initial population is important since the quality of the results is determined by its diversity. In our observations with different population sizes N = 100, 200, 300, the differences in solution quality were negligible. Therefore, in order to achieve sufficient diversification while keeping a reasonable simulation time, the initial population size was fixed at N = 100 and the population was randomly generated. Each initial feasible solution is constructed by a primitive primal heuristic that repeatedly selects a variable at random and sets it to one if the solution remains feasible; the heuristic terminates when the selected variable cannot be added to the solution; see Algorithm 2 in [31] for details.

In order to compare our HCGA with Chu's GA, the default settings for our HCGA are the same as those of Chu's GA, see [31]:

– the binary tournament selection method,
– the uniform crossover operator,
– a mutation rate equal to 2 bits per child string,
– discarding any duplicate children (i.e., any child which is the same as a member of the population),
– the steady-state replacement method based on eliminating the individual with the lowest fitness value.

Recall that, in order to achieve an efficient implementation of the repair operator, a preprocessing routine is applied to each problem to sort the variables. This is done only once so as not to affect the per-iteration time complexity. Similar to Algorithm 1 in [31], we conclude that the time complexity of the repair operator, as well as the complexity of the GA per iteration, is approximately O(mn); see [31]. Since this time complexity is relatively small, we can expect the GA to execute each iteration fairly quickly.

When the parameter λ ∈ [0, 1] is fixed, the permutation π(λ) determined by the sequence M_j is also fixed, while π(λ) may change with different λ ∈ [0, 1]. By experiment, however, we found that our HCGA obtains good results with λ = 0.05. Therefore, the parameter λ is set to 0.05 in our HCGA.
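The combined permutation and the DROP/ADD repair it drives can be sketched as follows. This is our reading of the operators described above, with λ defaulting to 0.05 and resource loads tracked incrementally rather than via the paper's renumbering preprocessing:

```python
def combined_permutation(u, x_lp, lam=0.05):
    """Order items by decreasing M_j = (1 - lam) * u_j + lam * x_j^LP,
    after normalizing u and x^LP to each sum to 1."""
    su, sx = sum(u), sum(x_lp)
    m_val = [(1 - lam) * (u[j] / su) + lam * (x_lp[j] / sx)
             for j in range(len(u))]
    return sorted(range(len(u)), key=lambda j: m_val[j], reverse=True)

def repair(x, w, b, order):
    """DROP then ADD over a preference order (most preferred first):
    DROP unsets the least preferred chosen items while any constraint is
    violated; ADD then sets further items that still fit, in preference order."""
    x = x[:]
    m = len(b)
    used = [sum(w[i][j] * x[j] for j in range(len(x))) for i in range(m)]
    for j in reversed(order):                          # DROP phase
        if all(used[i] <= b[i] for i in range(m)):
            break                                      # already feasible
        if x[j]:
            x[j] = 0
            for i in range(m):
                used[i] -= w[i][j]
    for j in order:                                    # ADD phase
        if not x[j] and all(used[i] + w[i][j] <= b[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                used[i] += w[i][j]
    return x
```

With lam = 0 the ordering reduces to Chu and Beasley's pseudo-utility order, and with lam = 1 to Raidl's x^LP order, matching the two limiting cases discussed above.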

6 Computational studies

6.1 Benchmark instances

We compare the performance of our HCGA with that of Chu's GA heuristic on Chu and Beasley's benchmark library, since Chu's GA is superior, in terms of the quality of the solutions obtained, to the other heuristic methods introduced in [31]. Chu and Beasley's benchmark library for the MKP is widely used in the literature. The library contains classes of randomly created instances for each combination of n ∈ {100, 250, 500} items, m ∈ {5, 10, 30} constraints, and tightness ratios


α = b_i / Σ_{j=1}^n w_ij ∈ {0.25, 0.5, 0.75}.

The resource consumption values w_ij are integers chosen uniformly from (0, 1,000), and the profits of the n items are correlated with the weights and generated as

p_j = (1/m) Σ_{i=1}^m w_ij + 500 r_j.

Here, r_j is a real number chosen uniformly at random from (0, 1]. For each class (i.e., for each combination of n, m, and α), 10 different instances are available. These problem instances, made by Chu and Beasley, are available on the web site (http://people.brunel.ac.uk/~mastjjb/jeb/orlib/mknapinfo.html) and were used in [31].

6.2 Experimental results and discussion

From the results presented for small problems in Table 1 of [31], we believe there is little to be gained by comparing heuristics on that standard. Hence, in this section, we restrict our performance comparison to large-size multidimensional knapsack problems. Moreover, since Chu's GA is superior, in terms of solution quality, to the other heuristic methods introduced in [31], we compare the performance of our HCGA only with Chu's GA heuristic. Since the original code of Chu's GA was not available to us from the web, and in order to make our work more comparable to it, we implemented not only our HCGA but also Chu's GA, based on the framework of the standard GA presented in Fig. 1. To better test the effectiveness of our HCGA, we solved the large-size multidimensional knapsack problems with both heuristics on our Silicon Graphics Indigo workstation running Windows XP; both GA heuristics were coded in MATLAB. The GA heuristics were run ten times on each of the 270 problems, and each run terminates when 10^6 non-duplicate children have been generated. The results are shown in Table 1.
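The instance generation scheme above can be sketched as follows (a sketch; the exact interval endpoints and the function name make_instance are our assumptions about the stated distributions):

```python
import random

def make_instance(n, m, alpha, rng=random):
    """Generate one Chu-and-Beasley-style MKP instance: integer weights w_ij,
    capacities b_i = alpha * sum_j w_ij, and profits correlated with the
    weights, p_j = (1/m) * sum_i w_ij + 500 * r_j with r_j in (0, 1]."""
    w = [[rng.randint(1, 999) for _ in range(n)] for _ in range(m)]
    b = [alpha * sum(w[i]) for i in range(m)]
    # 1 - random() lies in (0, 1], matching the stated range of r_j
    p = [sum(w[i][j] for i in range(m)) / m + 500 * (1.0 - rng.random())
         for j in range(n)]
    return p, w, b
```

The tightness ratio α directly controls how binding the constraints are: α = 0.25 gives tight knapsacks in which few items fit, while α = 0.75 gives loose ones.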
On the one hand, since the optimal solution values for most of the problems in Table 1 are not known, the quality of the solutions generated (either by Chu's GA or by our HCGA) is measured by the minimum percentage gap between the best solution value found over the 10 runs and the optimal value of the LP relaxation, i.e.,

100 × (optimal LP value − best solution value) / optimal LP value.

On the other hand, since the best recorded solution for each of the 270 problems is available from the OR-Library, quality is also measured by comparing the solutions generated (either by Chu's GA or by our HCGA) with the ones recorded in the OR-Library. The first three columns in Table 1 indicate the sizes (m and n) and the tightness ratios (α) of a particular problem structure, with each problem structure containing 10 problem instances. The next three columns show the results of comparing our HCGA

Table 1 Performance comparison of our HCGA with Chu's GA heuristic

Problem             HCGA                        Chu's GA
m    n     α        N.O.R  N.O.Ch  AMGap       N.O.R  N.O.HCG  AMGap
5    100   0.25     10     0       0.9886      10     0        0.9886
5    100   0.50     10     0       0.4512      10     0        0.4512
5    100   0.75     10     0       0.3179      10     0        0.3179
5    250   0.25     1      1       0.2674      3      4        0.2532
5    250   0.50     5      1       0.1239      6      6        0.1184
5    250   0.75     9      2       0.0772      9      1        0.0783
5    500   0.25     6      9       0.0908      0      1        0.1093
5    500   0.50     6      8       0.0445      3      0        0.0479
5    500   0.75     7      2       0.0263      5      1        0.0277
10   100   0.25     10     0       1.5619      10     0        1.5619
10   100   0.50     9      1       0.8088      10     1        0.7946
10   100   0.75     10     0       0.4823      10     0        0.4823
10   250   0.25     5      8       0.4847      3      2        0.5321
10   250   0.50     10     5       0.2413      6      0        0.2548
10   250   0.75     8      2       0.1474      8      1        0.1503
10   500   0.25     4      8       0.2318      1      1        0.2577
10   500   0.50     5      6       0.1062      2      1        0.1144
10   500   0.75     8      7       0.0698      5      1        0.0728
30   100   0.25     9      1       2.9051      9      1        2.9124
30   100   0.50     10     2       1.3208      10     0        1.3377
30   100   0.75     9      0       0.8309      9      0        0.8309
30   250   0.25     7      2       1.2017      7      3        1.1947
30   250   0.50     8      7       0.5189      4      0        0.5449
30   250   0.75     8      2       0.3038      7      1        0.3101
30   500   0.25     4      6       0.6118      3      1        0.6396
30   500   0.50     6      7       0.2602      4      1        0.2687
30   500   0.75     8      6       0.1606      4      2        0.1661

N.O.R: the number of instances (out of 10) for which the corresponding heuristic matches or outperforms the solution recorded in the OR-Library; N.O.Ch: the number of instances for which HCGA outperforms the re-implemented Chu's GA; N.O.HCG: the number of instances for which the re-implemented Chu's GA outperforms our HCGA; AMGap: the average minimum percentage gap over the 10 problem instances of each problem structure

with the best recorded solution of the OR-Library. In more detail, the value of N.O.R is the number of instances, out of the 10 in each problem structure, for which the solution generated by our HCGA is greater than or equal to the best one recorded in the OR-Library. The column N.O.Ch gives the number of instances for which the solution generated by our HCGA is greater than the one generated by Chu's GA. Last but not least, AMGap, the average minimum percentage gap of our HCGA, is the average of the minimum gaps over the 10 problem instances of each problem structure, each instance being run 10 times. For comparison, computational results obtained with the Chu's GA heuristic re-implemented in this paper are shown in the remaining three columns of Table 1. There, the value of N.O.R is the number of instances for which the solution generated by Chu's GA is greater than or equal to the one recorded in the OR-Library, N.O.HCG is the number of instances for which the solution generated by Chu's GA is greater than the one generated by our HCGA, and AMGap is the average minimum percentage gap of Chu's GA, again over 10 problem instances with 10 runs each.

It can be seen that both our HCGA and Chu's GA are very effective on large-size MKP instances of various structures. However, our HCGA is the more effective of the two: the total N.O.R of our HCGA, counting solutions that match or beat those recorded in the OR-Library, is 202, while the corresponding total for Chu's GA is 168. Furthermore, in all but four (m = 5, n = 250, α = 0.25, 0.50; m = 10, n = 100, α = 0.50; m = 30, n = 250, α = 0.25) of the 27 problem structures in Table 1, the average minimum percentage gap produced by our HCGA is at least as good as that produced by Chu's GA, and the overall average minimum percentage gap of the HCGA (0.5421 %) is lower than that of Chu's GA (0.5488 %) over all 270 test problems. Moreover, the total N.O.Ch count, where our HCGA outperforms Chu's GA, is 93, while the total N.O.HCG count, where Chu's GA outperforms our HCGA, is only 29 over all 270 test problems.
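The totals and overall averages quoted in this discussion can be reproduced directly from the rows of Table 1. Because every problem structure contains the same number of instances (10), the mean of the 27 per-structure AMGap values equals the mean over all 270 problems:

```python
# Table 1 rows: (m, n, alpha, N.O.R, N.O.Ch, AMGap) for HCGA, then
# (N.O.R, N.O.HCG, AMGap) for the re-implemented Chu's GA.
rows = [
    (5, 100, 0.25, 10, 0, 0.9886, 10, 0, 0.9886),
    (5, 100, 0.50, 10, 0, 0.4512, 10, 0, 0.4512),
    (5, 100, 0.75, 10, 0, 0.3179, 10, 0, 0.3179),
    (5, 250, 0.25, 1, 1, 0.2674, 3, 4, 0.2532),
    (5, 250, 0.50, 5, 1, 0.1239, 6, 6, 0.1184),
    (5, 250, 0.75, 9, 2, 0.0772, 9, 1, 0.0783),
    (5, 500, 0.25, 6, 9, 0.0908, 0, 1, 0.1093),
    (5, 500, 0.50, 6, 8, 0.0445, 3, 0, 0.0479),
    (5, 500, 0.75, 7, 2, 0.0263, 5, 1, 0.0277),
    (10, 100, 0.25, 10, 0, 1.5619, 10, 0, 1.5619),
    (10, 100, 0.50, 9, 1, 0.8088, 10, 1, 0.7946),
    (10, 100, 0.75, 10, 0, 0.4823, 10, 0, 0.4823),
    (10, 250, 0.25, 5, 8, 0.4847, 3, 2, 0.5321),
    (10, 250, 0.50, 10, 5, 0.2413, 6, 0, 0.2548),
    (10, 250, 0.75, 8, 2, 0.1474, 8, 1, 0.1503),
    (10, 500, 0.25, 4, 8, 0.2318, 1, 1, 0.2577),
    (10, 500, 0.50, 5, 6, 0.1062, 2, 1, 0.1144),
    (10, 500, 0.75, 8, 7, 0.0698, 5, 1, 0.0728),
    (30, 100, 0.25, 9, 1, 2.9051, 9, 1, 2.9124),
    (30, 100, 0.50, 10, 2, 1.3208, 10, 0, 1.3377),
    (30, 100, 0.75, 9, 0, 0.8309, 9, 0, 0.8309),
    (30, 250, 0.25, 7, 2, 1.2017, 7, 3, 1.1947),
    (30, 250, 0.50, 8, 7, 0.5189, 4, 0, 0.5449),
    (30, 250, 0.75, 8, 2, 0.3038, 7, 1, 0.3101),
    (30, 500, 0.25, 4, 6, 0.6118, 3, 1, 0.6396),
    (30, 500, 0.50, 6, 7, 0.2602, 4, 1, 0.2687),
    (30, 500, 0.75, 8, 6, 0.1606, 4, 2, 0.1661),
]
# Totals over all 270 problems
print(sum(r[3] for r in rows), sum(r[6] for r in rows))  # N.O.R: 202 vs 168
print(sum(r[4] for r in rows), sum(r[7] for r in rows))  # N.O.Ch 93, N.O.HCG 29
# Overall AMGap: mean of the 27 structure averages (equal group sizes)
print(round(sum(r[5] for r in rows) / 27, 4))  # HCGA: 0.5421
print(round(sum(r[8] for r in rows) / 27, 4))  # Chu's GA: 0.5488
```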

7 Conclusions

In this paper, we have presented a hybrid combinatorial heuristic algorithm, HCGA, based on the standard GA, for solving multidimensional knapsack problems. Our approach differs from the previous Chu's GA-based technique in that it uses a new permutation, called the combinatorial one, to design the repair and improvement operator. This repair operator enables our HCGA heuristic to obtain better solutions than the Chu's GA heuristic. Our positive results support the idea that this new permutation is a desirable basis for designing GA heuristics for solving the MKP. On a large set of randomly generated problems, we have shown that our HCGA heuristic is capable of obtaining high-quality solutions for problems of various characteristics, whilst requiring only a modest amount of computational effort. Moreover, the computational results show that the HCGA gives solutions of superior quality compared with the Chu's GA heuristic. Finally, since the MKP is an NP-complete problem that continues to attract many researchers, with state-of-the-art literature on it appearing regularly, our future work will remain focused on this area, including comparisons against other up-to-date methods.
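For illustration, a permutation-driven repair-and-improvement operator of the kind discussed above can be sketched as follows. This is a rough sketch of the DROP/ADD repair scheme popularized by Chu and Beasley [31], with the item ordering supplied externally (e.g., by the combinatorial permutation of Sect. 5); it is not the exact operator of this paper, and all names are ours.

```python
def repair_and_improve(x, w, b, order):
    """Sketch of a permutation-driven repair operator in the style of
    Chu and Beasley [31]: DROP selected items (worst-ranked first) until
    all m constraints hold, then ADD unselected items (best-ranked first)
    while feasibility is preserved. `order` lists item indices from best
    to worst; names are illustrative."""
    m, n = len(w), len(x)
    used = [sum(w[i][j] for j in range(n) if x[j]) for i in range(m)]
    # DROP phase: remove items in reverse permutation order until feasible
    for j in reversed(order):
        if all(used[i] <= b[i] for i in range(m)):
            break
        if x[j]:
            x[j] = 0
            for i in range(m):
                used[i] -= w[i][j]
    # ADD phase: greedily re-insert items in permutation order
    for j in order:
        if not x[j] and all(used[i] + w[i][j] <= b[i] for i in range(m)):
            x[j] = 1
            for i in range(m):
                used[i] += w[i][j]
    return x
```

Because the DROP phase discards the lowest-ranked items first and the ADD phase fills remaining capacity with the highest-ranked items, the quality of the repaired chromosome depends directly on the quality of the permutation, which is the motivation for the combinatorial permutation studied in this paper.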


Acknowledgments This work is partially supported by the Project of the Department of Education of Guangdong Province (No. 2013KJCX0128), the Natural Science Foundation of Guangdong Province (No. 10152104101000004 and No. S2013010013101), and the Foundation of Hanshan Normal University (Grant No. QD20131101). The authors would like to thank the anonymous referees for their valuable comments and suggestions, which helped greatly in improving the quality of this paper.

References

1. Vasquez M, Hao JK (2001) A logic-constrained knapsack formulation and a tabu algorithm for the daily photograph scheduling of an earth observation satellite. Comput Optim Appl 20:137–157
2. Vasquez M, Hao J-K (2001) A hybrid approach for the 0–1 multidimensional knapsack problem. In: Proceedings of the international joint conference on artificial intelligence, pp 328–333
3. Vasquez M, Vimont Y (2005) Improved results on the 0–1 multidimensional knapsack problem. Eur J Oper Res 165:70–81
4. Hanafi S, Freville A (1998) An efficient tabu search approach for the 0–1 multidimensional knapsack problem. Eur J Oper Res 106(2–3):659–675
5. Fréville A (2004) The multidimensional 0–1 knapsack problem: an overview. Eur J Oper Res 155:1–21
6. Gottlieb J (2000) On the effectivity of evolutionary algorithms for the multidimensional knapsack problem. Lecture Notes in Computer Science, vol 1829, pp 23–37
7. Raidl GR, Gottlieb J (2005) Empirical analysis of locality, heritability and heuristic bias in evolutionary algorithms: a case study for the multidimensional knapsack problem. Evol Comput J 13(4):441–475
8. Osorio MA, Glover F, Hammer P (2002) Cutting and surrogate constraint analysis for improved multidimensional knapsack solutions. Ann Oper Res 117(1):71–93
9. Fréville A, Hanafi S (2005) The multidimensional 0–1 knapsack problem: bounds and computational aspects. Ann Oper Res 139:195–227
10. Dominique F, Ider T (2004) Global optimization and multi knapsack: a percolation algorithm. Eur J Oper Res 154(1):46–56
11. Mahajan R, Chopra S (2012) Analysis of 0/1 knapsack problem using deterministic and probabilistic techniques. In: Second international conference on advanced computing and communication technologies (ACCT), pp 150–155
12. Khan MHA (2013) An evolutionary algorithm with masked mutation for 0/1 knapsack problem. In: International conference on informatics, electronics and vision (ICIEV), pp 1–6
13. Thiongane B, Nagih A, Plateau G (2006) Lagrangean heuristics combined with reoptimization for the 0–1 bidimensional knapsack problem. Discrete Appl Math 154:2200–2211
14. Puchinger J, Raidl GR (2005) Relaxation guided variable neighborhood search. In: Proceedings of the XVIII mini EURO conference on VNS, Tenerife
15. Puchinger J, Raidl G (2008) Bringing order into the neighborhoods: relaxation guided variable neighborhood search. J Heuristics 14(5):457–472
16. Løketangen A (2002) Heuristics for 0–1 mixed-integer programming. In: Pardalos PM, Resende MGC (eds) Handbook of applied optimization. Oxford University Press, Oxford, pp 474–477
17. Løketangen A, Glover F (1998) Solving zero/one mixed integer programming problems using tabu search. Eur J Oper Res 106:624–658
18. Fischetti M, Lodi A (2003) Local branching. Math Progr Ser B 98:23–47
19. Danna E, Rothberg E, Le Pape C (2005) Exploring relaxation induced neighborhoods to improve MIP solutions. Math Progr Ser A 102:71–90
20. Magazine MJ, Oguz O (1984) A heuristic algorithm for the multidimensional zero-one knapsack problem. Eur J Oper Res 16:319–326
21. Martello S, Toth P (1990) Knapsack problems: algorithms and computer implementations. Wiley, New York
22. Pirkul H (1987) A heuristic solution procedure for the multiconstrained zero-one knapsack problem. Naval Res Logist 34:161–172
23. Volgenant A, Zoon JA (1990) An improved heuristic for multidimensional 0–1 knapsack problems. J Oper Res Soc 41:963–970
24. Balas E, Martin CH (1980) Pivot and complement: a heuristic for 0–1 programming. Manag Sci 26(1):86–96


25. Hinterding R (1994) Mapping, order-independent genes and the knapsack problem. In: Fogel DB (ed) Proceedings of the 1st IEEE international conference on evolutionary computation, Orlando, pp 13–17
26. Khuri S, Bäck T, Heitkötter J (1994) The zero/one multiple knapsack problem and genetic algorithms. In: Proceedings of the 1994 ACM symposium on applied computing, pp 188–193
27. Olsen AL (1994) Penalty functions and the knapsack problem. In: Fogel DB (ed) Proceedings of the 1st international conference on evolutionary computation, Orlando, pp 559–564
28. Rudolph G, Sprave J (1996) Significance of locality and selection pressure in the grand deluge evolutionary algorithm. In: Proceedings of the international conference on parallel problem solving from nature IV, pp 686–694
29. Thiel J, Voss S (1994) Some experiences on solving multiconstraint zero-one knapsack problems with genetic algorithms. INFOR 32:226–242
30. Chu PC (1997) A genetic algorithm approach for combinatorial optimization problems. Ph.D. thesis, The Management School, Imperial College of Science, London
31. Chu PC, Beasley JE (1998) A genetic algorithm for the multidimensional knapsack problem. J Heuristics 4:63–86
32. Cotta C, Troya JM (1998) A hybrid genetic algorithm for the 0–1 multiple knapsack problem. Artif Neural Nets Genet Algorithms 3:251–255
33. Sun Y, Wang Z (1994) The genetic algorithm for 0–1 programming with linear constraints. In: Fogel DB (ed) Proceedings of the 1st ICEC94, Orlando, pp 559–564
34. Kellerer H, Pferschy U, Pisinger D (2004) Knapsack problems. Springer, Berlin
35. Bäck T, Fogel D, Michalewicz Z (1997) Handbook of evolutionary computation. Oxford University Press, Oxford
36. Raidl GR (1998) An improved genetic algorithm for the multiconstrained 0–1 knapsack problem. In: Proceedings of the 5th IEEE international conference on evolutionary computation, pp 207–211
37. Moraga RJ, DePuy GW, Whitehouse GE (2005) Meta-RaPS approach for the 0–1 multidimensional knapsack problem. Comput Ind Eng 48(1):83–96
38. Bertsimas D, Demir R (2002) An approximate dynamic programming approach to multidimensional knapsack problem. Manag Sci 48(4):550–565
39. Li H, Jiao Y-C, Zhang L (2006) Genetic algorithm based on the orthogonal design for multidimensional knapsack problems. Lecture Notes in Computer Science, vol 4221, pp 696–705
40. Alonso C, Caro F, Montana JL (2006) A flipping local search genetic algorithm for the multidimensional 0–1 knapsack problem. Lecture Notes in Artificial Intelligence, vol 4177, pp 21–30
41. Akcay Y, Li HJ, Xu SH (2007) Greedy algorithm for the general multidimensional knapsack problem. Ann Oper Res 150(1):17–29
42. Kong M, Tian P, Kao Y (2008) A new ant colony optimization algorithm for the multidimensional knapsack problem. Comput Oper Res 35(8):2672–2683
43. Leguizamon G, Michalewicz Z (1999) A new version of ant system for subset problem. Congr Evol Comput 14:59–64
44. Fidanova S (2002) Evolutionary algorithm for multidimensional knapsack problem. PPSN VII Workshop
45. Alaya I, Solnon C, Ghéira K (2004) Ant algorithm for the multi-dimensional knapsack problem. In: International conference on bioinspired optimization methods and their applications (BIOMA 2004), pp 63–72
46. Hanafi S, Glover F (2007) Exploiting nested inequalities and surrogate constraints. Eur J Oper Res 179(1):50–63
47. Kong M, Tian P (2006) Apply the particle swarm optimization to the multidimensional knapsack problem. Lecture Notes in Computer Science, vol 4029, pp 1140–1149
48. Hembecker F, Lopes H, Godoy W Jr (2007) Particle swarm optimization for the multidimensional knapsack problem. Lecture Notes in Computer Science, vol 4431, pp 358–365
49. Ke L, Feng Z, Ren Z, Wei X (2010) An ant colony optimization approach for the multidimensional knapsack problem. J Heuristics 16(1):65–83
50. Hanafi S, Wilbaut C (2008) Scatter search for the 0–1 multidimensional knapsack problem. J Math Modell Algorithms 7(2):143–159
51. Montana J, Alonso C, Cagnoni S, Callau M (2008) Computing surrogate constraints for multidimensional knapsack problems using evolution strategies. Lecture Notes in Computer Science, vol 4974, pp 555–564


52. Marsten RE, Morin TL (1978) A hybrid approach to discrete mathematical programming. Math Progr 14:21–40
53. Gallardo JE, Cotta C, Fernandez AJ (2005) Solving the multidimensional knapsack problem using an evolutionary algorithm hybridized with branch and bound. Lecture Notes in Computer Science, vol 3562, pp 21–30
54. Puchinger J, Raidl G, Pferschy U (2006) The core concept for the multidimensional knapsack problem. Lecture Notes in Computer Science, vol 3906, pp 195–208
55. Balev S, Yanev N, Fréville A, Andonov R (2008) A dynamic programming based reduction procedure for the multidimensional 0–1 knapsack problem. Eur J Oper Res 186(1):63–76
56. Boyer V, Elkihel M, Elbaz D (2009) Heuristics for the 0–1 multidimensional knapsack problem. Eur J Oper Res 199(3):658–664
57. Kaparis K, Letchford AN (2008) Local and global lifted cover inequalities for the 0–1 multidimensional knapsack problem. Eur J Oper Res 186(1):91–103
58. Vimont Y, Boussier S, Vasquez M (2008) Reduced costs propagation in an efficient implicit enumeration for the 0–1 multidimensional knapsack problem. J Comb Optim 15(2):165–178
59. Bibliography (2004) For further detail, please visit our website http://www.joics.com
60. Wang RH (1999) Numerical approximation. Higher Education Press, Beijing
61. Fusiello A (2000) Uncalibrated euclidean reconstruction: a review. Image Vis Comput 18:555–563
62. Provot X (1995) Deformation constraints in a mass-spring model to describe rigid cloth behavior. In: Proceedings of graphics interface '95, pp 147–154
63. Sun Y (2002) Space deformation with geometric constraint. M.S. thesis, Department of Applied Mathematics, Dalian University of Technology
64. Knuth DE (1996) The TEXbook. Addison-Wesley, New York
65. Ortiz EL (1974) Canonical polynomials in the Lanczos tau-method. In: Scaife B (ed) Studies in numerical analysis. Academic Press, New York, pp 73–93
66. Egeblad J, Pisinger D (2009) Heuristic approaches for the two- and three-dimensional knapsack packing problem. Comput Oper Res 36:1026–1049
67. Djannaty F, Doostdar S (2008) A hybrid genetic algorithm for the multidimensional knapsack problem. Int J Contemp Math Sci 3(9):443–456
68. Samanta S, Chakraborty S, Acharjee S, Mukherjee A, Dey N (2013) Solving 0/1 knapsack problem using ant weight lifting algorithm. In: IEEE international conference on computational intelligence and computing research (ICCIC), pp 1–5
69. Htiouech S, Bouamama S, Attia R (2013) Using surrogate information to solve the multidimensional multi-choice knapsack problem. In: IEEE congress on evolutionary computation (CEC), pp 2102–2107
