A scheduling problem with unrelated parallel machines and sequence dependent setups

M. Gómez Ravetti*, Geraldo R. Mateus and Pedro L. Rocha
Department of Computer Science, Federal University of Minas Gerais, Av. Antônio Carlos 6627 – Prédio do ICEx, 31270-010, Belo Horizonte, MG, Brazil
Fax: +55 31 34995858
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Panos M. Pardalos
Department of Industrial and Systems Engineering, University of Florida, 303 Weil Hall, Gainesville, FL 32611, USA
Fax: +1 352 392-3537
E-mail: [email protected]

Abstract: This paper addresses a scheduling problem with unrelated parallel machines, sequence dependent setups and due dates. The problem is based on a real case and the objective is to minimise the sum of the makespan and the weighted delays. A mixed integer linear programming model is proposed and, since the model considers realistic constraints, a metaheuristic based on GRASP is used as a solution method. Several versions of the metaheuristic are tested and multiple instances of the problem with different configurations are generated to test the solution quality and the computational performance.

Keywords: scheduling problems; metaheuristics; GRASP.

Reference to this paper should be made as follows: Ravetti, M.G., Mateus, G.R., Rocha, P.L. and Pardalos, P.M. (2007) 'A scheduling problem with unrelated parallel machines and sequence dependent setups', Int. J. Operational Research, Vol. 2, No. 4, pp.380–399.

Biographical notes: Martin Gómez Ravetti is a doctoral student in Computer Science at the Federal University of Minas Gerais, Belo Horizonte, Brazil. His undergraduate degree is in Industrial Engineering from the National University of Rosario, Rosario, Argentina. He holds a Masters degree in Computer Science from the Federal University of Minas Gerais. He is visiting the Department of Industrial and Systems Engineering at the University of Florida, where his research involves scheduling problems, metaheuristics and combinatorial optimisation.



Geraldo Robson Mateus is a Professor in Computer Science at the Federal University of Minas Gerais, Belo Horizonte, Brazil. He received his MS and PhD in Computer Science from the Federal University of Rio de Janeiro, Brazil, in 1980 and 1986, respectively. He spent 1991 and 1992 at the University of Ottawa, Canada, as a visiting researcher. His research interests span network optimisation, combinatorial optimisation, algorithms and telecommunications. He has published over 200 scientific papers, 30 journal papers, two books and three book chapters, and is a leader of several national and international projects. He has worked as a consultant for companies such as Telemig, Telemar, Usiminas, CVRD, MBR, France Telecom and Embratel.

Pedro Leite Rocha holds a Bachelor degree in Computer Science from the Federal University of Minas Gerais, where he is a Masters student. His research interests include combinatorial optimisation, metaheuristics and exact algorithms. He is especially interested in scheduling problems.

Panos M. Pardalos is a Distinguished Professor of Industrial and Systems Engineering at the University of Florida. He is also Co-Director of the Center for Applied Optimisation and a world-leading expert in global and combinatorial optimisation. He is the Editor-in-Chief of the Journal of Global Optimisation, managing editor of several book series, and a member of the editorial boards of 20 international journals. He is the author of seven books and the editor of more than 50 books. His recent research interests include network design problems, scheduling problems, biomedical applications, e-commerce and massive computing.

1 Introduction

In this paper we consider a particular scheduling problem involving unrelated parallel machines, sequence dependent setups and due dates. Each job has its own due date and a tardiness coefficient, which we use to penalise its delay. Another important feature of this problem is the Sequence and machine Dependent Setups (SDS). The objective is to schedule the jobs so as to minimise the sum of the makespan and the weighted delays. This objective function reflects real-life needs: once the set of jobs to be scheduled is known, the weights allow the operator to prioritise jobs, keeping the focus on the makespan while choosing which jobs may be delayed.

The literature on Parallel Machine Scheduling Problems (PMSP) is very extensive and the topic is studied from many points of view. Nevertheless, the unrelated-machine environment is not usual, especially compared with the identical parallel machine or single machine cases. The unrelated PMSP (UPMSP) is, however, very common in industry. In particular, the problem addressed in this work is based on a real case from a refractory brick company, whose equipment consists of machines with different technologies. This implies not only different processing times but also different setup times per machine.

Garey and Johnson (1979) show that minimising the makespan on two identical machines is already an NP-hard problem. Consequently, a problem with unrelated machines and sequence dependent setups is also NP-hard. Because of this complexity, and the need for almost constant rescheduling, the use of exact algorithms is impractical.


Interesting surveys on parallel machines can be found in Lawler et al. (1993), Cheng and Sin (1990) and Mokotoff (2001). Cheng and Ding (2004) also present a very useful survey on scheduling problems considering setup times. There are few works considering the UPMSP. Adamopolos and Pappis (1998) propose a polynomial heuristic for a scheduling problem with common due dates. Jansen and Porkolab (2001) propose an approximation scheme for a UPMSP considering sequence dependent setup times and preemptive and nonpreemptive jobs. Kim et al. (2002) use Simulated Annealing to address a UPMSP, minimising the total tardiness with sequence-dependent setup times. Kim et al. (2003) consider a similar scenario with common due dates, where four heuristics are proposed and studied. An exact solution to a UPMSP is attempted by Pereira-Lopes and de Carvalho (2006), who use a Branch & Price algorithm after a Dantzig-Wolfe decomposition; the objective considered is the minimisation of the total weighted tardiness of the jobs.

To solve our problem, we use the Greedy Randomised Adaptive Search Procedure (GRASP) metaheuristic proposed by Feo and Resende (1995). We also refer to Feo et al. (1991, 1996), which apply GRASP to one-machine scheduling problems; Binato et al. (2002), which present an interesting application of the metaheuristic to job shop problems; Festa and Resende (2002), where an annotated bibliography of GRASP is given; and Resende and Ribeiro (2003a), where an interesting and useful analysis of the applications of Path Relinking (PR) in GRASP is found. We also refer to Błażewicz et al. (1996), Pinedo (1995), Lee and Pinedo (2002) and Brucker (2004).

The remainder of this paper is organised as follows: in Section 2 a Mixed Integer Linear Programming (MILP) model is presented. In Section 3, the implementation of the metaheuristic is described. Computational tests and numerical results are analysed in Section 4. Finally, Section 5 concludes the paper.

2 Mathematical model

In this section we present a MILP model considering realistic constraints, but first we must define the following parameters and variables.

Parameters:
N        total number of jobs
i        job index, i ∈ {1, 2, ..., N}
M        total number of machines
m        machine index, m ∈ {1, 2, ..., M}
p_im     processing time of job i on machine m
s_ii'm   time for setting up machine m from processing job i to processing job i'
d_i      due date of job i
v_i      tardiness penalty coefficient of job i
g_ii'm   time between the beginning of job i and the conclusion of setup (i, i'); g_ii'm = p_im + s_ii'm
C_m      number of available time units on machine m
G        sufficiently large positive constant

Decision variables:
t_i      date on which job i begins to be processed [time units]
ρ_i      delay (tardiness) of job i
Z_m      makespan of machine m
Z_t      global makespan
α_im     α_im = 1 if job i is processed by machine m; α_im = 0 otherwise
β_ii'm   β_ii'm = 1 if job i is processed by machine m and before job i' (not necessarily immediately before); β_ii'm = 0 otherwise

2.1 MILP model

In this model we want to minimise the sum of the global makespan and the sum of weighted delays. In Mosheiov (2004) we find a similar objective function, but with a multiple criteria approach. Let us first define E_m as the group of all pairs of jobs (i, i') that will be produced by the same machine m. The MILP model can be defined as:

Min  Z_t + Σ_{i=1..N} (v_i · ρ_i)                                              (1)

subject to:

(1 − α_im)·G + (1 − α_i'm)·G + (1 − β_ii'm)·G + t_i' ≥ t_i + g_ii'm   ∀(i, i') ∈ E_m, ∀m   (2)
(1 − α_im)·G + (1 − α_i'm)·G + β_ii'm·G + t_i ≥ t_i' + g_i'im          ∀(i, i') ∈ E_m, ∀m   (3)
t_i + p_im ≤ d_i + ρ_i                                                  ∀i, ∀m               (4)
Z_m ≤ C_m                                                               ∀m                   (5)
Z_m ≥ t_i + p_im − (1 − α_im)·G                                         ∀i, ∀m               (6)
Z_t ≥ Z_m                                                               ∀m                   (7)
Σ_{m=1..M} α_im = 1                                                     ∀i                   (8)

Constraints 2 and 3 describe the precedence relationship between the jobs. These constraints are based on the model proposed by Manne (1960) and used by Acero and Delgado (2000) in a related case without sequence dependent setups. The use of the set E_m considerably reduces the number of constraints related to the sequencing of the jobs. Another important issue is the relation between processing times and setup times: the model assumes that all processing times are larger than the setup times; if this is not the case, the model must be slightly modified. The role of the β variables is also worth noting. In our model, these variables indicate a precedence relation between two jobs, but not necessarily an immediate one. This approach performs better than a formulation in which the β variables represent specific setup decisions.


In constraints 4, the tardiness ρ_i of job i is calculated; its value is penalised in the objective function. Constraint 5 prevents the production programme from exceeding the capacity of the machines (this capacity is expressed in allowed hours). In industry, two approaches are common: scheduling jobs considering the capacity of each machine, and scheduling jobs assuming infinite machine capacity. The model above works with both approaches. The makespan of a specific machine m is calculated by constraint 6, and the global makespan, that is, the date on which the processing of all jobs is finished, is obtained through constraint 7. Finally, equation 8 ensures that each job is assigned to exactly one machine.

Due to its complexity, this approach cannot be used for an industrial application with a large number of jobs. For that reason we choose a heuristic approach. In the next section, we present a metaheuristic based on GRASP.
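For readers who wish to experiment with the formulation, the following is a minimal sketch (not the implementation used in the paper) of how model (1)–(8) could be written with the open-source PuLP library in Python. The data containers p, s, d, v, C and the constant G are hypothetical inputs following the notation above, and, for simplicity, the set E_m is taken here as all ordered job pairs rather than the authors' restricted set.

```python
import pulp

def build_model(jobs, machines, p, s, d, v, C, G):
    """Sketch of the MILP (1)-(8). p[i, m], s[i, j, m], d[i], v[i], C[m] are input data."""
    # g_ii'm = p_im + s_ii'm (time between the start of i and the end of setup (i, i'))
    g = {(i, j, m): p[i, m] + s[i, j, m]
         for i in jobs for j in jobs if j != i for m in machines}

    prob = pulp.LpProblem("UPMSP_sequence_dependent_setups", pulp.LpMinimize)
    t = pulp.LpVariable.dicts("t", jobs, lowBound=0)            # start time of each job
    rho = pulp.LpVariable.dicts("rho", jobs, lowBound=0)        # tardiness of each job
    Zm = pulp.LpVariable.dicts("Zm", machines, lowBound=0)      # makespan per machine
    Zt = pulp.LpVariable("Zt", lowBound=0)                      # global makespan
    alpha = pulp.LpVariable.dicts("alpha", [(i, m) for i in jobs for m in machines], cat="Binary")
    beta = pulp.LpVariable.dicts("beta", list(g), cat="Binary")

    prob += Zt + pulp.lpSum(v[i] * rho[i] for i in jobs)        # objective (1)

    for (i, j, m) in g:                                         # precedence constraints (2)-(3)
        prob += ((1 - alpha[i, m]) * G + (1 - alpha[j, m]) * G
                 + (1 - beta[i, j, m]) * G + t[j] >= t[i] + g[i, j, m])
        prob += ((1 - alpha[i, m]) * G + (1 - alpha[j, m]) * G
                 + beta[i, j, m] * G + t[i] >= t[j] + g[j, i, m])
    for i in jobs:
        for m in machines:
            prob += t[i] + p[i, m] <= d[i] + rho[i]                  # tardiness (4)
            prob += Zm[m] >= t[i] + p[i, m] - (1 - alpha[i, m]) * G  # machine makespan (6)
        prob += pulp.lpSum(alpha[i, m] for m in machines) == 1       # one machine per job (8)
    for m in machines:
        prob += Zm[m] <= C[m]                                        # capacity (5)
        prob += Zt >= Zm[m]                                          # global makespan (7)
    return prob
```

A solver call such as prob.solve() would then return the optimal schedule for small instances; as argued above, this is not viable for industrial-sized instances.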

3 Metaheuristic

The metaheuristic used in this paper has a structure similar to basic GRASP (Feo and Resende, 1995), a multi-start metaheuristic for combinatorial problems (Feo and Resende, 1995; Feo et al., 1996). Basically, it consists of two phases, the construction of a feasible solution and a local search, which are repeated in every iteration. Figure 1 presents the pseudo-code of GRASP.

In basic GRASP, a feasible solution is built by compiling a candidate list, based on a greedy function, from which the next element to be added to the solution is selected at random. In our implementation, however, the candidate list is built only once, at the beginning of the procedure. In each iteration, the list is randomised and the algorithm builds the solution using a greedy function. Using the greedy function to schedule the jobs, rather than to build the candidate list, proved to work remarkably well: it simplifies the implementation while providing good results.

Figure 1  Pseudo-code of main block
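Since Figure 1 itself is not reproduced here, the sketch below illustrates the flow of the main block as we read it from the text: the candidate list is built once, randomised in every iteration, solutions are built greedily and improved by local search, and path relinking towards a pooled elite solution is applied every FPR iterations. The phase routines are passed in as callables and each solution is a (schedule, cost) pair; the names and signatures are illustrative, not the authors' C code.

```python
import random

def grasp_main(candidates, reorder, construct, local_search, path_relinking,
               iterations=15000, fpr=1, pool_size=10, seed=1.0):
    """Multi-start loop of the metaheuristic (illustrative sketch)."""
    random.seed(seed)
    best, pool = None, []
    for it in range(1, iterations + 1):
        order = reorder(candidates)                 # randomised reordering (F1 or F2)
        solution = construct(order)                 # construction phase -> (schedule, cost)
        solution = local_search(solution)           # pairwise-swap local search
        if pool and it % fpr == 0:                  # intensification with path relinking
            solution = path_relinking(solution, random.choice(pool))
        pool = sorted(pool + [solution], key=lambda s: s[1])[:pool_size]
        if best is None or solution[1] < best[1]:
            best = solution
    return best
```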


3.1 Sorting Rules

Two rules for sorting the job list are tested. The first uses the due date as the sorting criterion; in this case, the jobs are ordered in non-decreasing order of due date. The second rule uses a tardiness Lower Bound (LB), suggested by Armentano and Ronconi (1999) for tabu search; in this case, the jobs are ordered in non-increasing order of the bound. For this problem the LB is calculated as follows:

LB[i] = d_i − max_m { p_im }                                                   (9)
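A minimal sketch of the two sorting rules, assuming d[i] holds the due date of job i and p[i][m] its processing time on machine m (hypothetical data structures):

```python
def sort_by_due_date(jobs, d):
    # first rule: non-decreasing due dates (earliest due date first)
    return sorted(jobs, key=lambda i: d[i])

def sort_by_tardiness_lb(jobs, d, p):
    # second rule: LB[i] = d_i - max_m p_im, ordered non-increasingly (equation (9))
    lb = {i: d[i] - max(p[i].values()) for i in jobs}
    return sorted(jobs, key=lambda i: lb[i], reverse=True)
```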

3.2 Construction Phase (CP)

Two different types of Construction Phase (CP) are considered. In the first, after sorting the candidate list, a greedy function schedules each job on the machine that finishes it first. In the second, we select two machines as candidates and randomly choose one of them, with a probability of 70% for the best candidate and 30% for the other. Figure 2 shows the basic pseudo-code of the CP function.

Figure 2  Pseudo-code of the Construction Phase
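Figure 2 is likewise not reproduced, so the following sketch shows one way to implement the two CP variants described above; p[i][m] and s[prev][i][m] are assumed processing-time and setup-time structures, and the 70/30 rule follows the text.

```python
import random

def construct(order, machines, p, s, randomized=False, p_best=0.7):
    """Append each job, in the given order, to the machine that would finish it first;
    optionally choose between the two best machines with probabilities 0.7 / 0.3."""
    finish = {m: 0 for m in machines}        # current completion time of each machine
    last = {m: None for m in machines}       # last job scheduled on each machine
    schedule = {m: [] for m in machines}
    for i in order:
        def completion(m):
            setup = s[last[m]][i][m] if last[m] is not None else 0
            return finish[m] + setup + p[i][m]
        ranked = sorted(machines, key=completion)
        chosen = ranked[0]                   # best candidate machine
        if randomized and len(ranked) > 1 and random.random() > p_best:
            chosen = ranked[1]               # second-best candidate with probability 0.3
        finish[chosen] = completion(chosen)
        last[chosen] = i
        schedule[chosen].append(i)
    return schedule, finish
```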

3.3 Probability Distribution and local search

Once the job list is ordered, the algorithm randomly reorders the array using a specific Probability Distribution (PD). In this paper, we present several tests with two types of PD. The first probability function is F1(i) = 1/pos(i), where pos(i) indicates the position of job i in the sorted vector. The second function is a linear equation built from the first and last jobs of the sorted vector, F2(i) = −(pos(i)/n) + 1. The second function allows the metaheuristic to explore a larger set of feasible solutions.

The local search swaps all existing pairs of jobs. This neighbourhood is chosen arbitrarily and has time complexity O(n²).
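The sketch below shows one plausible reading of the biased reordering (sampling positions with weights F1 or F2) and of the pairwise-swap local search; the acceptance scheme (keep any improving swap) is our assumption, since the text does not specify it, and evaluate stands for any function returning the objective value of a job order.

```python
import random

def reorder(sorted_jobs, use_f2=False):
    """Rebuild the job order by sampling positions with weight F1(k) = 1/k
    or F2(k) = -(k/n) + 1, where k is the position in the sorted list."""
    n = len(sorted_jobs)
    remaining = list(sorted_jobs)
    order = []
    while remaining:
        weights = [(1.0 - k / n) if use_f2 else (1.0 / k)
                   for k in range(1, len(remaining) + 1)]
        weights = [max(w, 1e-9) for w in weights]      # avoid an all-zero corner case
        pick = random.choices(range(len(remaining)), weights=weights)[0]
        order.append(remaining.pop(pick))
    return order

def local_search(order, evaluate):
    """Try swapping every pair of positions (O(n^2) neighbourhood), keeping improvements."""
    best_order, best_cost = list(order), evaluate(order)
    for a in range(len(order) - 1):
        for b in range(a + 1, len(order)):
            trial = list(best_order)
            trial[a], trial[b] = trial[b], trial[a]
            cost = evaluate(trial)
            if cost < best_cost:
                best_order, best_cost = trial, cost
    return best_order, best_cost
```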


3.4 Path Relinking (PR)

The PR technique is used as a search strategy in many heuristics (Glover and Laguna, 1997; Glover, 2000). In 1999 it was used in a GRASP environment by Laguna and Martí (1999). In Resende and Ribeiro (2003b) two strategies are presented:

•  PR as a tool to improve the solution found by GRASP.
•  PR as a strategy to intensify the local search procedure.

The algorithm selects two solutions and analyses the path of solutions between them. This path is built from a number of job movements, and for each movement the solution encountered is evaluated. In this work, we use PR to intensify the local search. There are several ways to implement this technique; Resende and Ribeiro (2003b) mention a few:

•  Periodic use of PR.
•  Use of PR in the two possible directions, that is, from solution x1 to solution x2 and from x2 to x1.
•  Use of PR in only one direction.
•  Use of truncated PR, that is, analysing only part of the path.

The PR algorithm used in this paper works as follows. First, a pool of good solutions is kept. Every time PR is used, a solution is randomly chosen from this pool. Then, the path of solutions is analysed in one direction, from the solution found by the local search to the solution selected from the pool. The whole path between the two solutions is considered.
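As an illustration of the one-directional, full-path strategy described above, the sketch below relinks two job orders by moving, one step at a time, towards the guiding elite solution; evaluate is any function returning the objective value of an order. This is a generic permutation-based PR sketch, not necessarily the authors' exact move definition.

```python
def path_relinking(start, guide, evaluate):
    """Walk from `start` towards `guide`, one job move per step, keeping the best
    solution found anywhere on the path (one-directional, non-truncated PR)."""
    current = list(start)
    best, best_cost = list(start), evaluate(start)
    for pos, job in enumerate(guide):
        if current[pos] != job:
            other = current.index(job)
            current[pos], current[other] = current[other], current[pos]  # one move on the path
            cost = evaluate(current)
            if cost < best_cost:
                best, best_cost = list(current), cost
    return best, best_cost
```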

3.5 Versions of GRASP

This section presents the different versions of the metaheuristic that are tested; the results of these experiments are presented in Section 4. The versions vary in the following aspects:

•  Sorting Rule (SR)
•  Frequency of Path Relinking use (FPR)
•  Probability Distribution (PD)
•  Construction Phase (CP).

Table 1 details the parameters used for each of the metaheuristic versions considered. As an example, the version 'GPSF' in the table indicates a GRASP algorithm with due date as the sorting criterion, an FPR of 2000, F2 as the probability function and a deterministic CP.

Table 1  Configurations of the metaheuristic for the experiments. The column SR shows the Sorting Rule chosen, FPR indicates the number of iterations between uses of PR, PD indicates the Probability Distribution (F1(i) = 1/pos(i) or F2(i) = −(pos(i)/n) + 1) and CP indicates whether a deterministic or a random approach is used

Name      SR        FPR    PD   CP
GP        Due date  1      F1   Deterministic
GPL       LB        1      F1   Deterministic
GPS       Due date  2000   F1   Deterministic
GPF       Due date  1      F2   Deterministic
GPC       Due date  1      F1   Random
GPSL      LB        2000   F1   Deterministic
GPLF      LB        1      F2   Deterministic
GPLC      LB        1      F1   Random
GPSF      Due date  2000   F2   Deterministic
GPSC      Due date  2000   F1   Random
GPFC      Due date  1      F2   Random
GPSLF     LB        2000   F2   Deterministic
GPSLC     LB        2000   F1   Random
GPLFC     LB        1      F2   Random
GPSFC     LB        2000   F2   Random
GPSLFC    LB        2000   F2   Random

4 Experiments

The heuristics are implemented in C, using the gcc compiler. The experiments are executed on a Pentium IV computer with a 2.4 GHz CPU and 1 GB of RAM, under a Debian GNU/Linux environment. Different instances with a variety of sizes are considered. Since no benchmark instances are available for this kind of problem, we generated them randomly. Ten replications are considered for each size, N = 20, 60, 100, 150. The problem is originally based on a real-world case; therefore, the configuration tested is composed of four identical machines and two unrelated machines (411).

4.1 Instances generation

Random instances are generated for these tests, based on the following rules:

•  the processing times on the group of identical machines vary between 10 and 20 time units
•  the processing times on the unrelated machines vary between five and eight time units
•  the setup times vary between one and seven time units.


The generated instances can also be used for different machine configurations. The algorithm to generate these instances and the instances used in this paper are available online.1
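The generator actually used is the one available at the URL in Note 1; the sketch below only illustrates the rules of Section 4.1 for the 411 configuration. Due dates and tardiness weights are not specified in the text, so they are omitted here, and the assumption that every identical machine shares the same processing time per job is ours.

```python
import random

def generate_instance(n_jobs, n_identical=4, n_unrelated=2, seed=None):
    """Random instance following the rules of Section 4.1 (illustrative sketch)."""
    rng = random.Random(seed)
    machines = list(range(n_identical + n_unrelated))
    p = {}
    for i in range(n_jobs):
        same = rng.randint(10, 20)           # identical group: 10-20 time units per job
        for m in machines:
            p[i, m] = same if m < n_identical else rng.randint(5, 8)   # unrelated: 5-8
    s = {(i, j, m): rng.randint(1, 7)        # setup times: 1-7 time units
         for i in range(n_jobs) for j in range(n_jobs) if i != j for m in machines}
    return machines, p, s
```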

4.2 Tests planning

The tests are divided into five parts. In the first part, the seed and the number of iterations are fixed; the objective is to find the best configuration of the heuristic parameters for this kind of problem, and the configuration of four identical and two unrelated machines is maintained. In the second part, only instances with 150 jobs are considered; the seed is fixed but the number of iterations is not. In the third part, only one instance of the same size is considered; the number of iterations is fixed and the heuristic is executed ten times with different seeds. With the second and third tests, we intend to analyse the relationship between the metaheuristic's performance and the number of iterations, and its seed dependence. In the fourth part, we analyse the influence of the use of PR. Finally, in the fifth part the goal is to analyse the behaviour of the metaheuristic for different machine configurations.

4.3 Lower Bounds (LB)

For each instance a LB is obtained, using three simple methods:

•  The first LB is the ratio between the sum of the minimum processing time of each job and the number of machines.
•  The second bound is the largest of the minimum processing times taken over all jobs.
•  The third bound is the sum of the n smallest processing times plus the n − 1 smallest setup times, where n is the ratio between the number of jobs and the number of available machines; n represents the number of jobs that at least one machine must process.

The final LB is the largest of the three values. Clearly this LB is not tight, because it does not take the possibility of delays into account. Even so, we use it to compute the Average Percentage Deviation (APD) (Oğuz et al., 2003). The Percentage Deviation (PD) is defined as

PD = (X − LB) / LB                                                             (10)

where X denotes the value of the objective function obtained by the metaheuristic. The PD measures how far the solution is from the LB; hence the solution is better for smaller values of the APD.
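A small sketch of the three bounds and of equation (10), under our reading of the third rule (the n smallest minimum processing times plus the n − 1 smallest setup times, with n the integer ratio of jobs to machines); p[i, m] and s[i, j, m] follow the notation of Section 2.

```python
def lower_bound(jobs, machines, p, s):
    """Largest of the three simple bounds described in Section 4.3 (sketch)."""
    pmin = {i: min(p[i, m] for m in machines) for i in jobs}
    lb1 = sum(pmin.values()) / len(machines)          # total best-case work spread over machines
    lb2 = max(pmin.values())                          # longest minimum processing time
    n = len(jobs) // len(machines)                    # jobs at least one machine must process
    lb3 = (sum(sorted(pmin.values())[:n])
           + sum(sorted(s.values())[:max(n - 1, 0)])) # n smallest p plus n-1 smallest setups
    return max(lb1, lb2, lb3)

def percentage_deviation(x, lb):
    # equation (10): PD = (X - LB) / LB
    return (x - lb) / lb
```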

4.4 Results

This section presents the computational results of several tests with the different versions of the heuristic. Five different subsections are considered, one for each part of the tests.


4.4.1 First part

In this first part of the results the focus is on the adjustment of the parameters; the goal is to detect the best configuration for this kind of problem. The seed and the number of iterations are fixed: the arbitrarily chosen seed for these tests is 1.00 and the number of iterations is 15,000. From Tables 2–5 we can analyse the performance of the different versions of the metaheuristic for different numbers of jobs.

Table 2  PD for each version of GRASP with the same seed and 15,000 iterations, considering 20 jobs

Name      1     2     3     4     5     6     7     8     9     10    APD
GP        0.46  2.35  5.90  6.33  4.33  4.63  2.93  2.57  3.65  2.64  3.58
GPL       0.46  2.35  5.90  6.33  4.41  4.63  2.93  2.53  3.65  2.64  3.58
GPS       0.46  2.35  5.90  6.33  4.33  5.22  2.93  2.57  3.65  2.64  3.64
GPSL      0.46  2.35  5.90  6.37  4.33  5.30  2.97  2.53  3.65  2.64  3.65
GPF       0.46  2.35  5.90  6.33  4.33  4.93  2.93  2.57  3.65  2.64  3.61
GPLF      0.50  2.35  5.90  6.33  4.26  4.63  2.97  2.53  3.65  2.64  3.58
GPSF      0.46  2.35  5.90  6.33  4.33  5.30  2.97  2.53  3.65  2.64  3.65
GPSLF     0.50  2.35  5.90  6.33  4.33  5.26  2.93  2.53  3.65  2.64  3.64
GPC       0.50  2.35  5.90  6.33  4.22  5.30  2.97  2.57  3.65  2.64  3.64
GPLC      0.50  2.35  5.90  6.33  4.33  4.63  2.93  2.53  3.65  2.64  3.58
GPSC      0.50  2.35  5.90  6.37  4.33  5.44  2.97  2.57  3.65  2.64  3.67
GPSLC     0.50  2.35  5.90  6.37  4.44  5.30  3.00  2.57  3.65  2.64  3.67
GPSFC     0.50  2.35  5.90  6.50  4.41  5.59  2.97  2.57  3.65  2.64  3.71
GPSLFC    0.50  2.35  5.90  6.37  4.41  5.44  3.00  2.53  3.65  2.64  3.68
GPFC      0.50  2.35  5.90  6.33  4.41  4.63  3.00  2.57  3.65  2.64  3.60
GPLFC     0.50  2.35  5.90  6.43  4.44  4.93  2.97  2.53  3.65  2.64  3.63
Grand average                                                           3.632

Table 3  PD for each version of GRASP with the same seed and 15,000 iterations, considering 60 jobs

Name      1     2     3     4      5     6     7     8     9     10    APD
GP        5.49  5.27  5.34  7.39   5.98  2.78  6.70  1.19  0.58  2.27  4.29
GPL       5.89  5.34  5.42  8.10   6.60  2.52  6.28  1.18  0.59  2.44  4.44
GPS       6.27  5.77  6.01  8.79   7.29  2.58  6.97  1.20  0.58  2.87  4.83
GPSL      6.38  5.41  6.03  8.17   7.02  2.61  7.21  1.22  0.59  2.68  4.73
GPF       6.00  6.64  6.06  8.52   7.42  2.71  6.06  1.36  0.66  2.76  4.82
GPLF      6.83  6.10  6.40  8.49   6.92  3.35  6.54  1.22  0.71  2.67  4.93
GPSF      7.09  6.42  6.43  9.97   7.99  3.33  8.37  1.40  0.66  3.20  5.48
GPSLF     6.84  7.09  7.16  10.26  8.21  3.69  7.99  1.39  0.69  3.17  5.65
GPC       6.36  6.16  5.98  8.07   6.96  3.11  6.74  1.22  0.66  2.78  4.80
GPLC      6.24  5.74  5.70  7.44   7.31  2.63  6.41  1.19  0.62  2.57  4.58
GPSC      6.93  6.38  6.21  8.84   7.28  3.33  7.63  1.33  0.66  3.32  5.19
GPSLC     6.99  5.99  6.19  8.93   7.80  3.71  7.15  1.28  0.65  3.09  5.18
GPSFC     7.41  6.94  6.91  8.90   8.39  4.24  8.32  1.73  0.79  3.41  5.70
GPSLFC    7.71  7.45  6.43  9.83   8.92  4.45  8.09  1.96  0.71  3.39  5.89
GPFC      5.14  6.61  6.61  8.25   7.52  3.56  7.39  1.33  0.78  2.73  4.99
GPLFC     6.17  5.85  6.51  9.56   6.85  3.75  7.23  1.69  0.71  3.19  5.15
Grand average                                                            5.042

Table 4  PD for each version of GRASP with the same seed and 15,000 iterations, considering 100 jobs

Name      1     2     3     4     5     6     7     8     9     10    APD
GP        7.17  1.05  3.18  1.05  6.38  4.31  2.96  1.10  1.30  1.89  3.04
GPL       6.77  1.12  3.03  1.64  6.33  4.20  3.28  0.97  1.25  1.74  3.03
GPS       7.48  1.38  4.11  1.31  6.75  5.35  3.92  1.18  1.49  2.17  3.51
GPSL      7.74  1.68  4.47  1.73  6.49  5.08  4.19  1.15  1.49  2.03  3.60
GPF       8.02  1.69  5.18  1.42  7.49  5.55  5.29  1.44  2.31  3.52  4.19
GPLF      7.77  2.38  4.42  1.84  6.96  6.27  5.43  1.73  2.60  3.28  4.27
GPSF      8.81  2.63  5.83  2.59  8.83  7.25  6.49  1.89  2.85  3.69  5.08
GPSLF     7.93  3.07  6.61  2.52  8.43  7.08  7.00  2.00  2.63  3.60  5.09
GPC       7.81  1.48  3.69  1.33  6.33  5.20  3.05  1.09  1.48  2.34  3.38
GPLC      7.77  1.88  4.45  1.76  6.88  5.10  4.55  1.12  1.73  2.29  3.75
GPSC      7.57  2.35  5.32  2.03  7.15  5.78  4.66  1.43  1.79  2.54  4.06
GPSLC     8.38  2.53  5.55  1.95  7.30  6.00  5.01  1.79  1.68  2.73  4.29
GPSFC     9.61  4.07  6.43  2.90  8.75  7.40  6.40  2.70  2.81  4.16  5.52
GPSLFC    9.84  3.50  6.57  2.55  8.72  7.04  6.63  2.62  3.15  4.07  5.47
GPFC      9.14  2.35  4.89  2.61  7.04  5.88  5.80  1.87  2.59  3.06  4.54
GPLFC     9.34  2.80  5.82  2.04  8.04  5.09  4.82  1.97  2.71  3.57  4.62
Grand average                                                            4.216

Table 5  PD for each version of GRASP with the same seed and 15,000 iterations, considering 150 jobs

Name      1     2     3     4     5     6     7     8     9      10    APD
GP        1.02  1.26  3.78  0.70  4.26  4.25  3.50  0.81  7.73   0.97  2.83
GPL       0.98  1.15  3.42  0.71  5.04  3.59  3.69  0.81  8.47   1.07  2.89
GPS       1.05  1.60  3.87  0.74  5.17  5.24  4.06  0.86  8.65   1.02  3.23
GPSL      1.03  1.78  3.71  0.73  5.03  5.15  4.12  0.86  8.63   1.36  3.24
GPF       1.42  2.94  6.23  1.09  7.18  6.89  6.54  1.36  9.60   2.44  4.57
GPLF      1.54  3.08  5.16  1.36  6.86  6.91  7.61  1.03  12.67  2.55  4.88
GPSF      1.91  4.02  6.44  1.18  8.27  8.04  7.32  1.63  12.83  3.34  5.50
GPSLF     2.29  3.93  6.14  1.49  7.61  8.52  8.23  1.71  12.02  2.68  5.46
GPC       1.16  2.05  4.38  0.83  5.59  4.97  4.67  0.87  8.53   1.28  3.43
GPLC      0.96  2.44  4.34  0.70  4.74  4.93  4.74  0.89  9.64   1.38  3.48
GPSC      1.07  2.36  4.87  0.86  6.23  5.80  5.36  0.97  10.11  1.42  3.90
GPSLC     1.26  2.40  5.19  0.79  5.78  5.71  4.92  0.98  10.54  1.49  3.90
GPSFC     2.48  4.14  6.92  1.90  8.52  8.46  8.37  2.00  14.27  3.14  6.02
GPSLFC    2.38  4.68  7.08  1.94  8.04  7.49  8.06  2.14  13.96  2.72  5.85
GPFC      1.87  3.87  6.18  1.32  7.90  5.63  6.75  1.94  12.31  2.77  5.06
GPLFC     1.88  4.16  7.11  1.32  7.71  6.53  6.11  1.79  13.55  3.05  5.32
Grand average                                                            4.348

In Table 6 and Figure 3 we present a summary of the results. Table 7 presents the average computation time spent for each version considering different instance sizes: 60, 100 and 150 jobs.

Table 6  APD for each version of GRASP and for each instance size

Version   20 jobs  60 jobs  100 jobs  150 jobs  Overall APD
GP        3.580    4.298    3.042     2.828     3.437
GPL       3.584    4.436    3.035     2.894     3.487
GPS       3.639    4.833    3.513     3.227     3.803
GPSL      3.650    4.732    3.604     3.242     3.807
GPF       3.609    4.819    4.190     4.568     4.297
GPLF      3.576    4.923    4.267     4.878     4.411
GPSF      3.647    5.485    5.085     5.497     4.928
GPSLF     3.643    5.648    5.087     5.463     4.960
GPC       3.642    4.805    3.380     3.433     3.815
GPLC      3.580    4.584    3.753     3.478     3.849
GPSC      3.672    5.193    4.062     3.905     4.208
GPSLC     3.671    5.178    4.293     3.905     4.262
GPSFC     3.707    5.705    5.523     6.020     5.239
GPSLFC    3.679    5.893    5.471     5.850     5.223
GPFC      3.598    4.993    4.524     5.056     4.542
GPLFC     3.634    5.150    4.621     5.319     4.681

Figure 3  APD for the six versions with the best results, considering 60, 100 and 150 jobs

Table 7  Average computation times considering 60, 100 and 150 jobs (time in seconds)

Version   60     100    150
GP        18.59  61.38  144.79
GPL       18.46  57.66  145.74
GPS       18.35  57.34  143.89
GPSL      18.13  60.32  144.34
GPF       19.00  59.36  147.74
GPLF      18.95  57.91  148.18
GPSF      18.68  58.46  148.91
GPSLF     18.57  59.62  145.70
GPC       18.99  60.84  146.18
GPLC      18.91  58.68  146.57
GPSC      18.70  58.91  149.77
GPSLC     18.65  60.51  145.16
GPSFC     19.05  60.30  146.72
GPSLFC    18.96  61.48  147.25
GPFC      19.41  59.13  149.43
GPLFC     19.35  58.53  150.10

From these data, it is apparent that the GP and GPL approaches present the best average results without additional computation time. In general, the different versions behave well, except for the last four or five.


The poor performance of some versions is caused by the combination of the linear probability function (F2) and the random CP. These two parameters lead to a more extensive search of the solution space; this kind of approach can yield either excellent results or the worst results, a characteristic that could be very interesting when using a larger local search or a multicriteria approach. In a similar way, we can conclude that the best performances occur when only a few modifications are made to the greedy CP procedure, which in our algorithm translates into the use of the F1 probability function and the deterministic scheduling of the jobs. It is also worth noting that, as the number of jobs increases, the APD values decrease, as expected: the effectiveness of the LB improves with the number of jobs.

4.4.2 Second part

In this part of the tests we analyse the performance of the metaheuristic for different numbers of iterations. We choose five versions of GRASP and a set of five problems with 150 jobs, and we keep the seed used in the first part (1.00). In Tables 8–12, we present the percentage deviation for the five instances, and its average, for the chosen versions.

Table 8  APD results for the GP version considering different numbers of iterations (second part)

Iterations  1      2      3      4      5      APD
15000       0.912  1.236  3.290  0.712  4.730  2.176
30000       0.912  1.111  3.042  0.712  4.716  2.099
50000       0.907  1.107  3.042  0.712  4.171  1.988
100000      0.907  1.107  3.042  0.712  4.014  1.957
200000      0.903  1.107  2.995  0.712  4.014  1.946
300000      0.903  1.107  2.939  0.699  4.014  1.932

Table 9  APD results for the GPL version considering different numbers of iterations (second part)

Iterations  1      2      3      4      5      APD
15000       0.952  1.209  3.262  0.676  4.512  2.122
30000       0.930  1.209  3.243  0.648  4.512  2.108
50000       0.930  1.209  3.019  0.648  4.512  2.063
100000      0.930  1.209  2.701  0.648  4.507  1.999
200000      0.930  1.102  2.626  0.648  4.071  1.875
300000      0.930  1.089  2.626  0.648  4.066  1.872

Table 10  APD results for the GPLF version considering different numbers of iterations (second part)

Iterations  1      2      3      4      5      APD
15000       1.520  3.098  4.883  1.187  7.090  3.556
30000       1.520  3.098  4.883  1.110  7.043  3.531
50000       1.520  3.098  4.883  1.050  7.043  3.519
100000      1.317  3.098  4.883  0.945  6.929  3.434
200000      1.088  2.787  4.776  0.927  6.929  3.301
300000      1.066  2.658  4.776  0.927  6.929  3.271

Table 11  APD results for the GPLFC version considering different numbers of iterations (second part)

Iterations  1      2      3      4      5      APD
15000       1.731  3.373  6.542  1.379  7.417  4.089
30000       1.731  3.351  6.215  1.187  7.417  3.980
50000       1.731  3.351  5.986  1.187  7.412  3.934
100000      1.731  3.320  5.986  1.187  7.412  3.927
200000      1.326  3.320  5.921  1.187  6.464  3.644
300000      1.326  3.320  5.533  1.187  6.431  3.559

Table 12  APD results for the GPS version considering different numbers of iterations (second part)

Iterations  1      2      3      4      5      APD
15000       0.982  1.391  3.799  0.731  5.175  2.416
30000       0.982  1.391  3.687  0.726  5.175  2.392
50000       0.982  1.391  3.636  0.726  4.654  2.278
100000      0.969  1.378  3.636  0.726  4.654  2.273
200000      0.969  1.378  3.369  0.726  4.464  2.181
300000      0.956  1.378  3.369  0.703  4.464  2.174

From these results we can conclude that GP and GPL respond better to this kind of problem. We also observe that the solutions improve as the number of iterations increases, but the computation time increases as well: the average CPU time for 300,000 iterations is close to 2500 seconds (150 jobs).


For the reasons above, whether a large number of iterations should be used depends on the situation to be solved, that is, on the trade-off between the benefits of a long run and the time expended. In our particular case, a large number of iterations is interesting for the monthly scheduling programme. However, in the daily decision process of the company, new schedules are needed to support decisions, and the use of a larger number of iterations is not very convenient. These daily problems are usually related to new contracts, new clients or new due dates.

4.4.3 Third part

In this part of the tests the focus is on the seed dependence of the metaheuristic. We choose only one instance with 150 jobs and set the number of iterations to 15,000; the seeds are chosen randomly. In Table 13 and Figure 4 we present the results. In this case it is not necessary to report the APD, because we are interested in the variability of the solutions; the difference between the worst and the best result for each version is also shown.

These results are very interesting, especially when associated with the previous ones. In the second part of the tests, the APD improvement obtained by increasing the number of iterations varies between 0.242 and 0.53, whereas in this test the improvement from trying different seeds can reach 1% of the objective value. We can conclude that it appears to be more interesting to use several seeds with shorter runs than one seed with a longer run.

Table 13  Results considering different seeds (third part)

Version   1     2     3     4     5     6     7     8     9     10    Diff.
GP        1225  1087  1195  1209  1144  1196  1195  1044  1214  1164  181
GPL       1191  1138  1165  1228  1152  1193  1161  1098  1198  1173  130
GPS       1288  1348  1327  1305  1318  1202  1331  1259  1301  1347  146
GPSL      1307  1272  1160  1306  1305  1286  1268  1278  1260  1284  147
GPF       1623  1632  1715  1508  1778  1842  1594  1366  1792  1838  476
GPLF      1623  1632  1715  1508  1778  1842  1594  1366  1792  1838  476
GPSF      1908  1757  1878  1916  1863  1904  1940  1939  1915  2013  256
GPSLF     1920  1845  2004  1964  1997  1863  2011  1857  1668  1883  343
GPC       1178  1390  1512  1322  1283  1387  1266  1406  1268  1407  334
GPLC      1376  1383  1336  1345  1162  1390  1232  1356  1297  1295  228
GPSC      1442  1460  1365  1516  1443  1423  1470  1462  1476  1479  151
GPSLC     1458  1494  1396  1415  1476  1481  1384  1544  1480  1426  160
GPSFC     1986  2071  2007  1981  2048  2013  2000  2027  1955  1929  142
GPSLFC    1868  1824  1970  1892  2079  2000  2003  1969  2001  1884  255
GPFC      1640  1807  1748  1929  1871  1860  1841  1750  1865  1806  289
GPLFC     1796  1797  1841  1724  1708  1777  1702  1810  1927  1649  278

Figure 4  Metaheuristic's seed dependence (third part)

4.4.4 Fourth part

In this experiment we use path relinking as a form of intensification of the local search, applied at each iteration or only after a given number of iterations. We use three instances of 100 jobs to analyse how PR affects the performance of the metaheuristic, running the GP version with 50,000 iterations and six different values of FPR: 1, 2000, 5000, 10,000, 15,000 and 20,000. The results are presented in Table 14. We can observe that applying PR less frequently does not cause a significant reduction of the CPU time; from these tests we conclude that PR is not relevant to the computational time spent by the metaheuristic.

Table 14  Results applying PR after different numbers of iterations (FPR) for the GP version

                Inst. 1                 Inst. 2                 Inst. 3
FPR       Obj. func.  Time (sec)  Obj. func.  Time (sec)  Obj. func.  Time (sec)
1         1027        115.77      290         117.13      623         119.81
2000      1300        116.11      340         115.58      757         119.59
5000      1300        116.08      340         115.64      757         119.57
10000     1300        115.61      340         115.63      757         120.24
15000     1300        115.62      340         115.62      757         120.19
20000     1300        115.67      340         115.66      757         120.16

The use of path-relinking is shown to be effective without an increase of CPU time. For our implementation, the most expensive procedure is the local search.


4.4.5 Fifth part

Until now, we have always used the same machine configuration: a group of four identical machines and two unrelated machines. For this particular configuration, GRASP shows a satisfactory performance. In this experiment we test the flexibility of GRASP by analysing the performance of the method for other machine configurations. The tested configurations are:

•  M6: six identical machines
•  M321: a group of three identical machines, two identical machines different from the rest, and one unrelated machine
•  M3111: a group of three identical machines and three unrelated machines
•  M222: three different groups of two identical machines
•  M2211: two different groups of two identical machines and two unrelated machines
•  M21111: a group of two identical machines and four unrelated machines.

In this experiment we used five instances of 150 jobs. The seed remains the same for all the tests (1.00) and the number of iterations is set to 15,000. Because of the different configurations, we cannot compare the results directly. Still, in Table 15 we can observe that the metaheuristic produces interesting results for these new instances; the methodology seems very well suited to the UPMSP environment.

Table 15  APD for different configurations of machines

          M6     M3111  M321   M222   M2211  M21111
GP        0.386  0.719  0.635  0.478  0.561  0.616
GPL       0.378  0.720  0.636  0.482  0.558  0.618
GPLF      0.491  1.008  0.936  0.699  0.837  0.896
GPLFC     0.533  1.041  1.037  0.771  0.884  0.989
GPS       0.392  0.726  0.652  0.480  0.571  0.625

5 Concluding remarks

In this paper, our research focuses on a scheduling problem with features that are not often found in the literature. The problem considers sequence and machine dependent setup times, unrelated parallel machines, and due dates. The objective is to schedule the jobs in order to minimise the sum of the makespan and the weighted delays. A mathematical model of the problem is formulated and a metaheuristic based on GRASP is proposed. Several test instances based on a real case were generated and used to evaluate the GRASP performance.


The computational results indicate that the most consistent versions of the metaheuristic are GP and GPL. It is also worth noting that all versions of the metaheuristic prove to be good methods for solving this kind of scheduling problem. The approach is very simple and flexible: it is relatively easy to adapt the algorithm to other purposes and situations, and it has only a few main parameters, which are easily adjusted. In our particular case, the time spent by the algorithm is very reasonable with respect to the real-world constraints.

Interesting extensions of this work would be the comparison of GRASP with other metaheuristic algorithms, and the investigation of a multi-objective approach, in order to obtain a very flexible tool for real applications. Further research is also being conducted on cooperative methods combining GRASP with exact approaches.

Acknowledgement

The authors gratefully thank the anonymous reviewers for their helpful comments. This research has been partially supported by NSF, US Air Force and CNPq grants.

References

Acero, R. and Delgado, J.T. (2000) Aplicación de Una Heurística de Búsqueda Tabú en un Problema de Programación de Tareas en Línea Flexible de Manufactura, available online at http://hdl.handle.net/1992/570.
Adamopolos, G. and Pappis, C. (1998) 'Scheduling under common due date on parallel unrelated machines', European Journal of Operational Research, Vol. 105, No. 3, pp.494–501.
Armentano, V.A. and Ronconi, D. (1999) 'Tabu search for total tardiness minimization in flowshop scheduling problems', Computers and Operations Research, Vol. 26, No. 3, pp.219–235.
Binato, S., Hery, W., Loewenstern, D. and Resende, M. (2002) 'A greedy randomized adaptive search procedure for job shop scheduling', in Ribeiro, C.C. and Hansen, P. (Eds.): Essays and Surveys on Metaheuristics, Kluwer Academic Publishers, Norwell, MA, USA, pp.58–79.
Błażewicz, J., Ecker, K., Pesch, E., Schmidt, G. and Węglarz, J. (1996) Scheduling Computer and Manufacturing Processes, Springer-Verlag, Berlin.
Brucker, P. (2004) Scheduling Algorithms, Springer-Verlag, Berlin.
Cheng, T.C.E., Ding, Q. and Lin, B.M.T. (2004) 'A concise survey of scheduling with time-dependent processing times', European Journal of Operational Research, Vol. 152, No. 1, pp.1–13.
Cheng, T. and Sin, C. (1990) 'A state-of-the-art review of parallel-machine scheduling research', European Journal of Operational Research, Vol. 47, No. 3, pp.271–292.
Feo, T. and Resende, M. (1995) 'Greedy randomized adaptive search procedures', Journal of Global Optimization, Vol. 6, No. 2, pp.109–133.
Feo, T., Sarathy, K. and McGahan, J. (1996) 'A GRASP for single machine scheduling with sequence dependent setup costs and linear delay penalties', Computers and Operations Research, Vol. 23, No. 9, pp.881–895.
Feo, T., Venkatraman, K. and Bard, J. (1991) 'A GRASP for a difficult single machine scheduling problem', Computers and Operations Research, Vol. 18, No. 8, pp.635–643.
Festa, P. and Resende, M. (2002) 'GRASP: an annotated bibliography', in Ribeiro, C. and Hansen, P. (Eds.): Essays and Surveys in Metaheuristics, Kluwer Academic Publishers, Norwell, MA, USA, pp.325–367.
Garey, M. and Johnson, D. (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman and Co., San Francisco.
Glover, F. (2000) 'Multi-start and strategic oscillation methods – principles to exploit adaptive memory', in Laguna, M. and González-Velarde, J.L. (Eds.): Computing Tools for Modeling, Optimization and Simulation: Interfaces in Computer Science and Operations Research, Kluwer Academic Publishers, pp.1–24.
Glover, F. and Laguna, M. (1997) Tabu Search, Kluwer Academic Publishers, Boston.
Jansen, K. and Porkolab, L. (2001) 'Improved approximation schemes for scheduling unrelated parallel machines', Mathematics of Operations Research, Vol. 26, No. 2, pp.324–338.
Kim, D-W., Kim, K.H., Jang, W. and Chen, F.F. (2002) 'Unrelated parallel machine scheduling with setup times using simulated annealing', Robotics and Computer-Integrated Manufacturing, Vol. 18, Nos. 3–4, pp.223–231.
Kim, D-W., Na, D-G. and Chen, F.F. (2003) 'Unrelated parallel machine scheduling with setup times and a total weighted tardiness objective', Robotics and Computer-Integrated Manufacturing, Vol. 19, Nos. 1–2, pp.173–181.
Laguna, M. and Martí, R. (1999) 'GRASP and path relinking for 2-layer straight line crossing minimization', INFORMS Journal on Computing, Vol. 11, No. 1, pp.44–52.
Lawler, E., Lenstra, J.K., Kan, A.R. and Shmoys, D. (1993) 'Sequencing and scheduling: algorithms and complexity', in Graves, S., Kan, A.R. and Zipkin, P. (Eds.): Handbooks in Operations Research and Management Science 4, Logistics of Production and Inventory, North Holland, pp.445–524.
Lee, C-Y. and Pinedo, M. (2002) 'Optimization and heuristics of scheduling', in Pardalos, P.M. and Resende, M.G.C. (Eds.): Handbook of Applied Optimization, Oxford University Press, New York, pp.569–583.
Manne, A. (1960) 'On the job-shop scheduling problem', Operations Research, Vol. 8, No. 2, pp.219–223.
Mokotoff, E. (2001) 'Parallel machine scheduling problems: a survey', Asia-Pacific Journal of Operational Research, Vol. 18, No. 2, pp.193–242.
Mosheiov, G. (2004) 'Simultaneous minimization of total completion time and total deviation of job completion times', European Journal of Operational Research, Vol. 157, No. 2, pp.296–306.
Oğuz, C., Ercan, M., Cheng, T. and Fung, Y. (2003) 'Heuristic algorithms for multiprocessor task scheduling in a two-stage hybrid flow-shop', European Journal of Operational Research, Vol. 149, No. 2, pp.390–404.
Pereira-Lopes, M.J. and de Carvalho, J.V. (2006) 'A branch-and-price algorithm for scheduling parallel machines with sequence dependent setup times', European Journal of Operational Research, Vol. 176, No. 3, pp.1508–1527.
Pinedo, M. (1995) Scheduling: Theory, Algorithms and Systems, Prentice-Hall, NJ.
Resende, M. and Ribeiro, C. (2003a) 'GRASP and path-relinking: recent advances and applications', in Ibaraki, T. and Yoshitomi, Y. (Eds.): Proceedings of the Fifth Metaheuristics International Conference (MIC2003), pp.T6-1–T6-6.
Resende, M. and Ribeiro, C. (2003b) 'Greedy randomized adaptive search procedures', in Glover, F. and Kochenberger, G. (Eds.): Handbook of Metaheuristics, Kluwer Academic Publishers, pp.219–249.

Note

1  Laboratory of Operational Research – DCC/UFMG – Federal University of Minas Gerais. URL: http://www.dcc.ufmg.br/lapo.