Scheduling for Parallel Dedicated Machines with a Single Server


Celia A. Glass,¹ Yakov M. Shafransky,² Vitaly A. Strusevich³

¹ City University, London, United Kingdom
² Institute of Engineering Cybernetics, National Academy of Sciences of Belarus, Minsk, Republic of Belarus
³ University of Greenwich, London, United Kingdom

Received August 1998; revised October 1999; accepted October 22, 1999

Abstract: This paper examines scheduling problems in which the setup phase of each operation needs to be attended by a single server, common for all jobs and different from the processing machines. The objective in each situation is to minimize the makespan. For the processing system consisting of two parallel dedicated machines we prove that the problem of finding an optimal schedule is NP-hard in the strong sense even if all setup times are equal or if all processing times are equal. For the case of m parallel dedicated machines, a simple greedy algorithm is shown to create a schedule with a makespan that is at most twice the optimal value. For the two-machine case, an improved heuristic guarantees a tight worst-case ratio of 3/2. We also describe several polynomially solvable cases of the latter problem. The two-machine flow shop and open shop problems with a single server are also shown to be NP-hard in the strong sense. However, we reduce the two-machine flow shop no-wait problem with a single server to the Gilmore–Gomory traveling salesman problem and solve it in polynomial time. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 304–328, 2000

Keywords: scheduling; parallel dedicated machines; single server; complexity; approximation

1. INTRODUCTION

Scheduling models that allow the handling of pre-operational work (the setup) have been of major interest for some decades because of their practical relevance and theoretical impact. Modern computing and manufacturing processes provide a continuous supply of new models with various types of setups. Some of these problems turn out to be tractable by extensions of classical methods and algorithms, while in many other cases new specialized techniques have to be developed. In this paper, we study scheduling models with an additional constraint on the setups. Using the traditional terminology, we have a collection of jobs to be performed on one or all of several given machines. Each operation of a job on a machine consists of two phases: the setup phase and the processing phase. In our models, it is required that the setup phase of any operation be attended by a single server, which is different from any of the processing machines. In all problems considered in this paper the objective is to minimize the makespan, i.e., the maximum completion time of all jobs on all machines.

Correspondence to: C. A. Glass. © 2000 John Wiley & Sons, Inc.

The mathematical interest of these problems lies in the interlacing of the scheduling considerations for the machines and for the server. Models having these features are common in network computing. In the most typical situation, the network server sets up the workstations by loading the required software. In the context of parallel computing, the analogous feature is data transfer when farming out operations from a central processor to slave processors. In production applications, the setting up of machines involves the simultaneous use of a common resource, which might be a robot or a human operator attending each setup.

We analyze problems with various machine environments. Our main focus is on the simplest configuration, consisting of parallel machines each dedicated to its own set of preassigned jobs. For parallel dedicated machines we establish that the arising problems are NP-hard in the strong sense even for two machines. This complements a previous result by Hall et al. [11] for the case of (undedicated) parallel machines. We also demonstrate that these complexity results carry over to the multistage models of the flow shop and the open shop. Having established that we cannot expect to solve any of the general problems to optimality in reasonable time, we concentrate on two realistic goals: (i) to develop polynomial-time approximation algorithms which guarantee a solution close to the optimum, and (ii) to find special cases which are tractable in polynomial time.

The remainder of this paper is organized as follows. We start by formalizing our scheduling model in Section 2 and give a short overview of related models and results in the literature in Section 3. Then in Section 4 we examine the computational complexity of the scheduling problem with parallel dedicated machines and a single server. For that model, approximation algorithms are presented in the following two sections, for the general case and for the two-machine case, respectively.
Their worst-case performance ratio bounds are shown to be 2 and 3/2, respectively. Several polynomially solvable cases of our parallel machine scheduling problem are described in Section 7. In Section 8 we turn to the flow shop and open shop scheduling problems with a single server. In particular, we adapt the Gilmore–Gomory traveling salesman algorithm to solve the two-machine flow shop problem with no-wait in process, i.e., when no delay is allowed between the processing of a job on the machines. Concluding remarks are given in Section 9.

2. THE MODEL

This section is devoted to formalizing our model. We first specify the precise problem which we are addressing and give the relevant definitions and notation. Then we recall and extend the standard three-field scheduling notation, which provides a succinct way of referring to individual types of problems.

Formally, we are given a set N = {J1, J2, ..., Jn} of jobs, m processing machines M1, M2, ..., Mm, and a single server MS. For an operation Oij of performing job Jj on machine Mi, the setup phase takes sij ≥ 0 time units and is followed by the processing phase, which lasts pij ≥ 0 time units. Once the setup of some operation is completed, the machine starts the processing phase of that operation, possibly after some idle time. There is no preemption in performing any phase of any operation. A job can be assigned to at most one processing machine at a time, and no machine deals with more than one job at a time. Moreover, during the setup phase of an operation Oij, both machine Mi and the server MS are occupied exclusively with the setup of job Jj on machine Mi.

We consider several scheduling models sharing the features described above. Our main model assumes that the processing machines are parallel and dedicated. This implies that each job consists of exactly one operation and that the set N of jobs is partitioned in advance into m

Table 1. An instance of the problem with three dedicated machines

Job   Machine   Setup Time   Processing Time
J1    M1        s11 = 1      p11 = 3
J2    M2        s22 = 2      p22 = 2
J3    M3        s33 = 2      p33 = 2
J4    M1        s14 = 1      p14 = 1
subsets N1, N2, ..., Nm, so that the jobs of set Ni, and only these, are processed on machine Mi, 1 ≤ i ≤ m. Table 1 presents an instance of the problem with three parallel dedicated machines. A feasible schedule for the instance is shown in Figure 1. In this figure and in the figures to follow, the setups are shown as double-framed bars.

The model with parallel dedicated machines is essentially a single-stage model. Still, it can be related to some multistage models, such as the flow shop and the open shop. In these models, each job has to be processed on all machines Mi, i = 1, 2, ..., m. In the case of the flow shop, all jobs have the same processing route, i.e., they pass through the machines in the same order M1, M2, ..., Mm. For the open shop, the job processing routes are not given but must be chosen, different jobs being allowed to follow different routes. A flow shop or an open shop schedule is said to satisfy the no-wait in process restriction if, for any job and any pair of consecutive operations in the processing route, the processing stage of the latter operation must start exactly when the processing stage of the former is completed. In what follows we also consider the no-wait versions of the flow shop and the open shop problems with a single server.

Throughout this paper we consider the case of sequence-independent setup times, assuming that for any machine Mi the setup times of the jobs assigned to that machine do not depend on the sequence in which the jobs are performed on Mi. Furthermore, for the multistage models, throughout the paper we assume that the setups are anticipatory. This implies that, for a pair of consecutive operations of the same job, the setup for the second operation can be done simultaneously with the processing stage of the first. This type of setup is applied when the presence of the job is not needed in order to prepare the machine for processing that job (e.g., changing

Figure 1. A schedule for three dedicated machines.
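The feasibility conditions of the model (the server performs one setup at a time; each machine performs one operation at a time) can be checked mechanically. Below is a minimal Python sketch using the Table 1 instance; the data layout, the function names, and the particular feasible schedule are our own illustrative choices, not taken from the paper:

```python
# Table 1 instance: (setup, processing) time of each operation, keyed by (machine, job).
ops = {
    ("M1", "J1"): (1, 3),
    ("M2", "J2"): (2, 2),
    ("M3", "J3"): (2, 2),
    ("M1", "J4"): (1, 1),
}

# One feasible schedule: the start time of each setup; we assume the
# processing phase starts immediately after the setup ends.
setup_start = {("M1", "J1"): 0, ("M2", "J2"): 1, ("M3", "J3"): 3, ("M1", "J4"): 5}

def overlapping(intervals):
    """True if any two half-open intervals [s, e) intersect."""
    intervals = sorted(intervals)
    return any(end > nxt_start
               for (_, end), (nxt_start, _) in zip(intervals, intervals[1:]))

def check(ops, setup_start):
    """Return (feasible, makespan) for a schedule given by its setup start times."""
    server = []        # intervals occupying the single server (all setups)
    machines = {}      # intervals occupying each machine (setup + processing)
    for (m, j), t in setup_start.items():
        s, p = ops[(m, j)]
        server.append((t, t + s))
        machines.setdefault(m, []).append((t, t + s + p))
    feasible = (not overlapping(server)
                and all(not overlapping(iv) for iv in machines.values()))
    makespan = max(t + sum(ops[k]) for k, t in setup_start.items())
    return feasible, makespan

assert check(ops, setup_start) == (True, 7)
```

Here the server performs the setups in the intervals [0, 1), [1, 3), [3, 5), and [5, 6), and machine M1 is free again at time 4, before the setup of J4 starts, so the schedule is feasible with makespan 7.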


a program for a numerically controlled machine-tool). Setups of this type differ from nonanticipatory setups, where for any job no setup can be done on the downstream machine before the job is fully completed on the previous machine. Our interpretation of anticipatory setups is also different from the interpretation studied in [15].

An instance of the problem may contain jobs with very small processing and/or setup times. Such short operations have an insignificant impact on the value of the objective function. These small times can be considered as essentially zero, and we write pij = 0 (or sij = 0) to denote them. In this case the corresponding operation is completed at the same time it starts. Still, we cannot simply drop the small operations, since they influence the structure of the schedule, producing possible machine or server interference.

For a schedule S, the value of the makespan is denoted by Cmax(S). In the remaining part of the paper we often deal with the two-machine cases of various scheduling problems, and it is convenient to adopt the following special notation in these cases. Let A and B denote the machines; aj and bj the processing times of job Jj on machines A and B, respectively; and αj and βj the setup times of job Jj on machines A and B, respectively. For the two parallel dedicated machine problem we suppose that the set N of jobs is partitioned into two subsets, N = NA ∪ NB, and the jobs of the first subset are to be processed on machine A, while those of the second subset are processed on machine B.

Extending the standard notation for scheduling problems [14], and also following Hall et al.
[11], where similar models are studied, we use the following notation to refer to the problems under consideration:

PD,S||Cmax — the parallel dedicated (PD) machine scheduling problem with a single server (S);
F,S||Cmax — the flow shop scheduling problem with a single server;
O,S||Cmax — the open shop scheduling problem with a single server;
F,S|no-wait|Cmax — the flow shop no-wait scheduling problem with a single server;
O,S|no-wait|Cmax — the open shop no-wait scheduling problem with a single server.

Notice that we use the notation above when the number m of processing machines is variable. If m is fixed, this is indicated explicitly; for example, we write PD2,S||Cmax to refer to the scheduling problem with two parallel dedicated machines and a single server. Following standard notation, we use F||Cmax and O||Cmax to denote the classical flow shop and open shop scheduling problems, respectively, with no additional constraints. If the no-wait restriction is imposed, we write F|no-wait|Cmax and O|no-wait|Cmax.

3. RELATED SCHEDULING MODELS

Before proceeding with the analysis of our problem, we compare our model with other relevant models in the scheduling literature. First, the recent paper [2] by Blazewicz, Dell'Olmo, and Drozdowski considers a more general situation called the client–server scheduling model. In their model there are m parallel machines (clients). The client reads data from the server, processes it, and writes the result back to the server. The tasks can be distributed in advance, in which case the machines are dedicated; an alternative version assumes that the decision-maker also has to assign the tasks to machines. The paper presents a series of interesting applications of the client–server model, some of which are relevant for our model. A number of complexity and approximation results are presented in [2]; some of them are quoted in the corresponding sections of our paper.

We now turn to the PD2,S||Cmax problem. Hall et al. [11] study a similar model in which the processing machines are not dedicated but parallel identical. They give a detailed complexity classification of scheduling problems with parallel identical machines and a single server under


various assumptions regarding the setup times and the processing times, for the most common objective functions. In the case when the makespan is to be minimized, it is shown that the problem with two parallel machines and a single server, denoted by P2,S||Cmax, is NP-hard in the strong sense even if all setup times are equal. Also, even if all setup times are unit and all processing times are integer, the P2,S||Cmax problem is NP-hard in the ordinary sense, and it is an open question whether the problem is NP-hard in the strong sense. For the m-machine case of the problem, Hall et al. [11] present several heuristic algorithms, the best of which guarantees a worst-case performance ratio of 2 − 1/m.

Consider the flow and open shop problems with a single server. Notice that the server in the model under consideration can be viewed as an additional renewable resource, which is required at any time during the setup phase of any operation. This resembles various scheduling models with resource constraints (see [3] for a review). The point of difference, however, is that in the models discussed in [3] the resource is consumed in the processing stage, not in the setup stage, and in addition any setup times are normally incorporated into the processing times. We use res111 in the middle field of the standard scheduling notation to denote the special case in which some of the operations require one unit of a single resource and one unit of the resource is available at a time.

Another set of scheduling problems relevant to the models studied in this paper comprises the scheduling problems with setup times separated (see [1] for a review). As in our model, an operation is assumed to consist of two phases, the setup and the processing; however, no server is needed to run the setup. We use setup to denote a situation in which the setup times are separated from the processing times. The two-machine flow shop problem with a single server is considered in [4].
For this model the server performs not only the setup phase, but also the dismounting phase which follows the actual processing. The complexity and approximation results are given for a specific pattern of the server movements.

Table 2 presents the known complexity results on the scheduling problems with a single server and some relevant problems. In Table 2, we write "NP" if the corresponding problem is NP-hard in the strong sense; we write "NP?" if the problem is NP-hard in the ordinary sense and is not known to be NP-hard in the strong sense; otherwise, we present the running time of an algorithm that solves the problem. If the complexity status of a problem is not known, we write "Open." The references are provided to the published papers.

4. COMPLEXITY RESULTS FOR PARALLEL DEDICATED MACHINES

We start our analysis by demonstrating that even under several simplifying assumptions the problems with a single server are difficult to solve. In this section, we investigate the computational complexity of two special cases of the parallel dedicated machine problem with just two machines, PD2,S||Cmax. We prove that each of them is NP-hard in the strong sense. We write PD2,S|aj = bk = a|Cmax to denote the PD2,S||Cmax problem if all processing times are equal, i.e., aj = a and bk = a for all j ∈ NA and k ∈ NB. Similarly, we write PD2,S|αj = βk = α|Cmax to denote the PD2,S||Cmax problem if all setup times are equal.

The following problem is used for the reduction.

3-Partition: Given 3t positive integers e1, e2, ..., e3t such that Σ_{i∈T} ei = tE, where T = {1, 2, ..., 3t}, for some E with E/4 < ei < E/2 for i = 1, 2, ..., 3t, does there exist a partition of the index set T into t disjoint subsets Tj such that Σ_{i∈Tj} ei = E for each j = 1, 2, ..., t?

It is well known that 3-Partition is NP-hard in the strong sense (see, e.g., [5]).
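On tiny instances, 3-Partition can be decided by exhaustive search, which may help in experimenting with the reductions below. A brute-force Python sketch (exponential time, for illustration only; the function name is ours, and the restriction E/4 < ei < E/2 is not enforced):

```python
from itertools import combinations

def three_partition(e, E):
    """Can the multiset e be split into triples, each summing to E? Brute force."""
    if not e:
        return True
    first, rest = e[0], e[1:]
    # The first element must belong to some triple; try every way to complete it.
    for i, j in combinations(range(len(rest)), 2):
        if first + rest[i] + rest[j] == E:
            remaining = [rest[k] for k in range(len(rest)) if k not in (i, j)]
            if three_partition(remaining, E):
                return True
    return False

assert three_partition([5, 5, 4, 4, 4, 2], 12)      # {5, 5, 2} and {4, 4, 4}
assert not three_partition([6, 6, 6, 2, 2, 2], 12)  # no triple of these sums to 12
```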

Table 2. Complexity of related shop scheduling problems

The Model                     Complexity    Reference
P2,S|αj = βk = α|Cmax         NP            [11]
F2||Cmax                      O(n log n)    [12]
F3||Cmax                      NP            [6]
F2|res111|Cmax                NP            [3]
F2|setup|Cmax                 O(n log n)    [20]
F2|no-wait|Cmax               O(n log n)    [7, 8]
F3|no-wait|Cmax               NP            [17]
F2|no-wait, res111|Cmax       Open          [3]
F2|no-wait, setup|Cmax        O(n log n)    [10]
O2||Cmax                      O(n)          [9]
O3||Cmax                      NP?           [9]
O2|res111|Cmax                O(n)          [16, 13]
O2|setup|Cmax                 O(n)          [19]
O2|no-wait|Cmax               NP            [18]
O2|no-wait, res111|Cmax       NP            [18]
O2|no-wait, setup|Cmax        NP            [18]
THEOREM 1: The PD2,S|aj = bk = a|Cmax problem is NP-hard in the strong sense.

PROOF: Given an arbitrary instance of 3-Partition, define the following instance of the PD2,S||Cmax problem with n = 6t + 1 jobs. The set of jobs is divided into three groups: U-jobs, denoted by Ui, i = 1, 2, ..., 3t; V-jobs, denoted by Vk, k = 1, 2, ..., 2t + 2; and W-jobs, denoted by Wl, l = 1, 2, ..., t − 1. All U-jobs are assigned to be processed on machine A, while the V-jobs and the W-jobs are to be processed on machine B. The setup and the processing times are defined as follows:

    αUi = ei,  aUi = E,  i = 1, 2, ..., 3t;
    βVk = 0,   bVk = E,  k = 1, 2, ..., 2t + 2;
    βWl = E,   bWl = E,  l = 1, 2, ..., t − 1.

Observe that all processing times are equal to E. To prove the theorem we show that in this constructed instance of the PD2,S|aj = bk = a|Cmax problem a schedule S0 satisfying Cmax(S0) ≤ y = 4tE exists if and only if 3-Partition has a solution.

Suppose that 3-Partition has a solution, and Tj, j = 1, 2, ..., t, are the required subsets of set T. Notice that each set Tj contains precisely three elements, since E/4 < ei < E/2 and Σ_{i∈Tj} ei = E for all j = 1, 2, ..., t. Let π denote a permutation of the elements of set T for which Tj = {π(3j − 2), π(3j − 1), π(3j)}, for j = 1, 2, ..., t. The desired schedule S0 exists and can be described as follows. No machine has intermediate idle time. Machine A processes the U-jobs in order of the permutation π, i.e., in the sequence

    (Uπ(1), Uπ(2), Uπ(3), ..., Uπ(3t−2), Uπ(3t−1), Uπ(3t)),

Figure 2. Schedule S0.

while machine B processes the jobs in the sequence

    (V1, V2, V3, W1, V4, V5, W2, ..., Wt−2, V2t−2, V2t−1, Wt−1, V2t, V2t+1, V2t+2).

It is easy to check that the suggested schedule is feasible, i.e., produces no server interference, and that Cmax(S0) = y. See Figure 2.

Suppose now that a desired schedule S0 exists. Since the total workload on each machine is equal to y, it follows that Cmax(S0) = y and neither machine has any idle time. Without loss of generality, we may assume that machine B processes the V-jobs and the W-jobs in increasing order of their numbering. Let Ilβ = [τl, τl + E] denote the time interval in which machine B performs the setup of job Wl, l = 1, 2, ..., t − 1. For l, 2 ≤ l ≤ t − 1, introduce the interval Ilb = [τl−1 + E, τl]. We also introduce the time interval I1b = [τ0, τ1], where τ0 = 0 for convenience, and the interval Itb = [τt−1 + E, y]. Notice that in each interval Ilb, 2 ≤ l ≤ t, machine B at least performs the processing of job Wl−1. The length of the interval I1b is also strictly positive. This follows from the observation that job W1 cannot be the first on machine B; otherwise, machine A would be idle in the time interval [0, E]. Since all nonzero setup times and all processing times on machine B are equal to E, we conclude that the length of each interval Ilb, 1 ≤ l ≤ t, is a multiple of E.

Since machine A has no idle time, it follows that A must be busy in each interval Ilβ performing the processing of exactly one U-job. Let Pl denote the set of indices of the U-jobs which are set up in interval Ilb, for l = 1, 2, ..., t. For each l, l = 1, 2, ..., t − 1, the U-job that is processed in interval Ilβ must be set up in interval Ilb; therefore, set Pl is not empty and the length of interval Ilb is equal to Σ_{i∈Pl}(αUi + aUi) − E = Σ_{i∈Pl} ei + |Pl|E − E. In interval Itb machine A processes at least one U-job, which must be set up in that interval; therefore, set Pt is not empty as well, and the length of interval Itb is equal to Σ_{i∈Pt}(αUi + aUi) = Σ_{i∈Pt} ei + |Pt|E. For all these values to be multiples of E, the inequalities |Pl| ≥ 3 must hold for all l = 1, 2, ..., t, since E/4 < ei < E/2 for i ∈ T. There are exactly t intervals Ilb, and in these intervals exactly 3t jobs have to be set up. Therefore, we must have |Pj| = 3 and Σ_{i∈Pj} ei = E for all j = 1, 2, ..., t. Thus, the sets Tj = Pj give a solution to 3-Partition.

We now consider the case of the PD2,S||Cmax problem where the processing times are arbitrary, while all setup times are identical.

THEOREM 2: The PD2,S|αj = βk = α|Cmax problem is NP-hard in the strong sense.

PROOF: Given an arbitrary instance of 3-Partition, define the following instance of the PD2,S||Cmax problem with two machines A and B and n = 5t − 1 jobs. The set of jobs is divided into three groups: U-jobs, denoted by Ui, i = 1, 2, ..., 3t; V-jobs, denoted by Vj, j = 1, 2, ..., t − 1; and W-jobs, denoted by Wj, j = 1, 2, ..., t. All U-jobs and V-jobs are assigned


to be processed on machine A, while the W-jobs are to be processed on machine B. The setup and processing times are defined as follows:

    αUi = E,  aUi = ei,  i = 1, 2, ..., 3t;
    αVj = E,  aVj = E,   j = 1, 2, ..., t − 1;
    βWk = E,  bWk = 5E,  k = 1, 2, ..., t − 1;
    βWt = E,  bWt = 4E.
Notice that all setup times are equal to E. To prove the theorem, we show that in the constructed instance of the problem a schedule S0 such that Cmax(S0) ≤ y = 6tE − E exists if and only if 3-Partition has a solution.

Suppose that 3-Partition has a solution, and Tj, j = 1, 2, ..., t, are the required subsets of set T. Let π denote a permutation of the set T for which Tj = {π(3j − 2), π(3j − 1), π(3j)}, for j = 1, 2, ..., t. The desired schedule S0 exists and can be described as follows. Machine A processes the jobs in the sequence

    (Uπ(1), Uπ(2), Uπ(3), V1, Uπ(4), Uπ(5), Uπ(6), V2, ..., Vt−1, Uπ(3t−2), Uπ(3t−1), Uπ(3t)),

while machine B processes the jobs in the sequence (W1, ..., Wt). In the interval [0, E] machine A is idle, while job W1 is set up on machine B. Each subsequent job Wk is set up on machine B while job Vk−1 is being processed on machine A, k = 2, 3, ..., t. While job Wk is being processed on machine B, machine A performs the setup and the processing of the three jobs Ui for i ∈ Tk and, for k ≤ t − 1, the setup of job Vk. Thus, the suggested schedule is feasible, i.e., produces no server interference, and Cmax(S0) = 6tE − E = y.

Suppose now that the desired schedule S0 exists. Since the total workload on machine B is equal to y, it follows that Cmax(S0) = y and, moreover, that machine B is never idle. Thus, in the time interval [0, E] machine B performs the setup of some W-job. Since any job assigned to machine A requires a nonzero setup time, no activity can be done on A in the interval [0, E], so that this machine is idle in [0, E]. As the total workload on machine A equals y − E, machine A is not idle in the time interval [E, y]. Since the jobs Wk, 1 ≤ k ≤ t − 1, are identical, we assume that they are processed on machine B in increasing order of their numbering. To describe how machine B operates in schedule S0, we need to specify the position of the unique job Wt. Assume that Wt occupies the rth position, 1 ≤ r ≤ t, so that machine B processes the W-jobs in the sequence σB = (W1, ..., Wr−1, Wt, Wr, ..., Wt−1).

Let Ikβ and Ikb denote the time intervals during which machine B performs the setup and the processing, respectively, of the W-job that occupies the kth position in sequence σB, 1 ≤ k ≤ t. During each of the intervals Ikβ machine A may be doing only the processing, not the setup, of some job. Since only the V-jobs have processing time equal to E and machine A has no idle time in the interval [E, y], we conclude that in each of the t − 1 time intervals Ikβ, k = 2, 3, ..., t, machine A performs the processing of exactly one V-job. Moreover, since the V-jobs are identical, we may assume that in the interval Ikβ, 2 ≤ k ≤ t, machine A performs the processing of job Vk−1. Thus, exactly E time units of each interval Ikb, k ≤ t − 1, must be taken by the setup of job Vk on machine A. The remaining intervals provide room for setting up and processing the U-jobs on A.

Suppose that job Wt is placed at the rth position in sequence σB, where r < t. Then in the interval Irb there are 3E time units left for setting up and processing some of the U-jobs. Since


the setup time of any U-job equals E, it follows that exactly two U-jobs, say Ui1 and Ui2, can be assigned to the interval Irb, and that their combined processing time must be ei1 + ei2 = E, which is impossible since ei < E/2 for all i = 1, 2, ..., 3t. Therefore, job Wt occupies the last position in sequence σB, and no setup of a V-job takes up any part of the final interval Itb. Thus, in each interval Ikb, 1 ≤ k ≤ t − 1, there are 4E time units left for setting up and processing the U-jobs. Also, the interval Itb associated with job Wt provides 4E time units. Therefore, exactly three U-jobs with total processing time equal to E must be assigned to be processed in each of the intervals Ijb, j = 1, 2, ..., t. Denoting by Tj the index set of the U-jobs processed in interval Ijb, we obtain a solution to 3-Partition.
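The workload accounting used in this proof is easy to verify programmatically. The sketch below builds the reduction instance of Theorem 2 from a 3-Partition instance and checks that machine B's workload equals the threshold y = 6tE − E, while machine A's equals y − E; names and data layout are our own:

```python
def theorem2_instance(e, E):
    """(setup, processing) pairs of the Theorem 2 reduction.

    e -- the 3t integers of a 3-Partition instance, with sum(e) == t * E.
    """
    t = len(e) // 3
    U = [(E, ei) for ei in e]                              # U-jobs, on machine A
    V = [(E, E) for _ in range(t - 1)]                     # V-jobs, on machine A
    W = [(E, 5 * E) for _ in range(t - 1)] + [(E, 4 * E)]  # W-jobs, on machine B
    return U, V, W

# Example with t = 2, E = 12 (each e_i lies strictly between E/4 and E/2).
e, E = [4, 4, 4, 4, 4, 4], 12
t = len(e) // 3
U, V, W = theorem2_instance(e, E)
y = 6 * t * E - E
assert sum(s + p for s, p in W) == y          # machine B is busy for exactly y
assert sum(s + p for s, p in U + V) == y - E  # machine A works y - E after its initial idle E
```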

5. APPROXIMATION ALGORITHM FOR PARALLEL DEDICATED MACHINES

The NP-hardness of our parallel machine problem with a single server motivates the search for polynomial-time approximation algorithms. In this section we show that a simple greedy algorithm applied to the PD,S||Cmax problem creates a schedule with a makespan that is at most twice the optimal value. We also prove that the bound of 2 is tight, i.e., for any m there exists an instance of the problem for which this bound is attained.

Let S* denote an optimal schedule for the PD,S||Cmax problem. We have that

    Cmax(S*) ≥ Σ_{i=1}^{m} Σ_{j∈Ni} sij,    (1)

since the server can do only one setup at a time. Also, for any machine Mi, 1 ≤ i ≤ m, we have

    Cmax(S*) ≥ Σ_{j∈Ni} pij + Σ_{j∈Ni} sij.    (2)

For the PD,S||Cmax problem, we consider a simple greedy algorithm that scans the jobs in an arbitrary order. In the resulting schedule S0 a machine is idle only if neither the setup nor the processing of a job may be started on it.

Algorithm 1
1. For each machine Mi make an arbitrary sequence (a list) Li of the jobs of set Ni, i = 1, 2, ..., m.
2. At any time when both a machine Mi and the server become available, the next job from the list Li is assigned to be processed on Mi, 1 ≤ i ≤ m. Ties between machines are broken arbitrarily. The assigned job is removed from the list. For the assigned job the processing immediately follows the setup. Stop when all lists are empty. Call the resulting schedule S0.

It is evident that Algorithm 1 takes O(mn) time.

THEOREM 3: For the PDm,S||Cmax problem, let S0 be a schedule created by Algorithm 1. Then

    Cmax(S0)/Cmax(S*) ≤ 2,    (3)

and this bound is tight for any m.


PROOF: Consider schedule S0, and suppose that a machine Mq, 1 ≤ q ≤ m, terminates this schedule. Let Iq(S0) denote the idle time on machine Mq. Then

    Cmax(S0) = Σ_{j∈Nq} pqj + Σ_{j∈Nq} sqj + Iq(S0)

and

    Iq(S0) ≤ Σ_{i=1}^{m} Σ_{j∈Ni} sij − Σ_{j∈Nq} sqj,

since whenever machine Mq is idle, the server is busy doing the setup of a job on some other machine. Thus, due to (1) and (2) we have

    Cmax(S0) ≤ Σ_{j∈Nq} pqj + Σ_{i=1}^{m} Σ_{j∈Ni} sij ≤ 2Cmax(S*),

which corresponds to (3).

To see that this bound is tight, consider the following instance of the PDm,S||Cmax problem. There are n = m + 2 jobs, with the setup and processing times defined as follows. For any value of m,

    s11 = ε, p11 = ε,    s12 = 1, p12 = ε,
    s23 = ε, p23 = ε,    s24 = ε, p24 = 1,

where ε < 1 is an arbitrarily small positive number. Also, if m ≥ 3, define

    si,i+2 = δ,  pi,i+2 = 1,  i = 3, ..., m,

where δ = ε/(m − 2). The jobs are assigned to the machines as follows: N1 = {J1, J2}, N2 = {J3, J4} and, if m ≥ 3, Ni = {Ji+2} for i = 3, ..., m. Let

    t0 = Σ_{i=1}^{m} Σ_{j∈Ni} sij = 1 + 4ε.

The makespan of an optimal schedule S* is no smaller than the sum of all setup times plus the smallest processing time, i.e., Cmax(S*) ≥ t0 + ε. A schedule that attains this lower bound can be found by allocating the jobs to the server in the order J5, J6, ..., Jm+2, J4, J2, J3, J1. None of the machines has intermediate idle time, the server is busy between the first and the last setup, machine M1 terminates the schedule, and Cmax(S*) = t0 + p11 = t0 + ε = 1 + 5ε. See Figure 3(a).

On the other hand, Algorithm 1 can create a schedule S0 in which the jobs are allocated to the server in the order J5, J6, ..., Jm+2, J1, J3, J2, J4. The server is permanently busy between the first and the last job setups, machine M2 is the last to finish and the only one with intermediate idle time, and Cmax(S0) = t0 + p24 = t0 + 1 = 2 + 4ε. See Figure 3(b). Thus, Cmax(S0)/Cmax(S*) = (2 + 4ε)/(1 + 5ε) → 2 as ε → 0. This completes the proof of the theorem.

Figure 3. Tightness of Algorithm 1: (a) schedule S*; (b) schedule S0.
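Algorithm 1 can be implemented as a small event-driven simulation, and the lower bounds (1) and (2) make the factor-2 guarantee easy to check empirically. A Python sketch, using the tightness instance above with m = 2, ε = 0.1, and all times multiplied by 10 so that they are integers; the data layout and all names are our own:

```python
def greedy_schedule(jobs):
    """Algorithm 1: jobs[i] is the list L_i of (setup, processing) pairs for machine M_i.

    Whenever the server and some machine are both free, that machine's next
    setup is started; ties are broken by machine index (any rule is allowed).
    """
    m = len(jobs)
    nxt = [0] * m               # position of the next job in each list
    machine_free = [0] * m      # time at which each machine becomes idle
    server_free = 0             # time at which the server becomes idle
    while any(nxt[i] < len(jobs[i]) for i in range(m)):
        # Earliest moment at which each machine could start its next setup.
        ready = [(max(machine_free[i], server_free), i)
                 for i in range(m) if nxt[i] < len(jobs[i])]
        t, i = min(ready)
        s, p = jobs[i][nxt[i]]
        server_free = t + s          # the server is held for the whole setup
        machine_free[i] = t + s + p  # processing immediately follows the setup
        nxt[i] += 1
    return max(machine_free)

# Tightness instance for m = 2, scaled to integers:
# M1: J1 (s=1, p=1), J2 (s=10, p=1);  M2: J3 (s=1, p=1), J4 (s=1, p=10).
jobs = [[(1, 1), (10, 1)], [(1, 1), (1, 10)]]
c0 = greedy_schedule(jobs)
bound1 = sum(s for L in jobs for s, _ in L)           # bound (1): total setup time
bound2 = max(sum(s + p for s, p in L) for L in jobs)  # bound (2): per-machine workload
assert c0 == 23                                       # t0 + p24 = 13 + 10
assert c0 <= 2 * max(bound1, bound2)                  # the guarantee of Theorem 3
```

With the index-based tie-breaking, the simulation reproduces the bad order J1, J3, J2, J4 of the tightness example, so the greedy makespan is nearly twice the lower bound.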

Algorithm 1 can be extended to the client–server scheduling problem with dedicated machines; as shown in [2], the performance ratio of 2 is maintained for that more general problem. Notice that Algorithm 1 takes jobs one at a time as they appear in the lists. This property allows the algorithm to be run in on-line mode, which can be important for computing applications. Because the algorithm deals with arbitrary lists, it is clear that the performance bound of 2 is preserved for the on-line version of the algorithm.

6. HEURISTIC FOR TWO PARALLEL DEDICATED MACHINES

In this section, we develop an approximation algorithm for the PD2, S‖Cmax problem with a better worst-case bound than the one presented above for an arbitrary number of machines. To be more precise, the algorithm runs in O(n log n) time and generates a schedule with the makespan that is at most 3/2 times that of an optimal schedule. This bound of 3/2 is proved to be tight. Recall that in the two machine case the set N of jobs is partitioned into two subsets NA and NB to be processed on machine A and machine B, respectively. Assume that there are n = p + q jobs and that NA = {J1, J2, . . . , Jp} and NB = {Jp+1, Jp+2, . . . , Jp+q}. It is convenient to use the following notation:

α(NA) = Σ_{j=1}^{p} αj,  a(NA) = Σ_{j=1}^{p} aj,  β(NB) = Σ_{j=1}^{q} βp+j,  b(NB) = Σ_{j=1}^{q} bp+j.

For m = 2 the bounds (1) and (2) can be written as

Cmax(S∗) ≥ α(NA) + β(NB)    (4)

and

Cmax(S∗) ≥ max{α(NA) + a(NA), β(NB) + b(NB)}.    (5)

For a schedule S, let IA(S) denote the total idle time on machine A in S. The value IB(S) is defined analogously. We have that

Cmax(S) = max{β(NB) + b(NB) + IB(S), α(NA) + a(NA) + IA(S)}.    (6)

Now we will establish a new lower bound on the optimal value of the makespan by estimating the total idle time on machine B. Unless stated otherwise, in the remainder of this section, it is assumed that the jobs of set NA are numbered in such a way that

α1 ≥ α2 ≥ · · · ≥ αp,    (7)

Glass, Shafransky and Strusevich: Scheduling Parallel Dedicated Machines


and the jobs of set NB are numbered in such a way that

bp+1 ≥ bp+2 ≥ · · · ≥ bp+q.    (8)

LEMMA 1: Suppose that for the PD2, S‖Cmax problem the numbering of the jobs satisfies (7) and (8). For an optimal schedule S∗, the following bound,

Cmax(S∗) ≥ β(NB) + b(NB) + γ,

holds, where

γ = max{0, max{Σ_{j=1}^{k} (αj − bp+j) | 1 ≤ k ≤ p}}    (9)

and bp+q+1 = · · · = b2p = 0 if p > q.

PROOF: Let IB^k(S∗) denote the total length of all idle intervals on machine B during which machine A performs the setup of one of the jobs J1, J2, . . . , Jk, 1 ≤ k ≤ p. While machine A performs the setup of some job of set NA, machine B can only do the processing of at most one job of set NB. Since the jobs of set NB are numbered in nonincreasing order of their processing times, we have that for any k, 1 ≤ k ≤ p, the following inequality

IB^k(S∗) ≥ max{0, Σ_{j=1}^{k} αj − Σ_{j=1}^{k} bp+j}

holds. This implies that

IB(S∗) = max{IB^k(S∗) | k = 1, 2, . . . , p} ≥ γ,

which, in turn, being combined with the bound (6) with S = S∗, proves the lemma.

For the PD2, S‖Cmax problem, we introduce a heuristic algorithm that either accepts a schedule found by Algorithm 1, or constructs a schedule by running a greedy algorithm that uses a specific list of jobs for one of the machines.

Algorithm 2
1. Find the values α(NA), a(NA), β(NB), and b(NB).
2. If

α(NA) ≥ a(NA), β(NB) ≥ b(NB)    (10)

or α(NA) ≤ a(NA), β(NB) ≤ b(NB), then run Algorithm 1 to find schedule S0 for the PD2, S‖Cmax problem and stop. Otherwise, go to Step 3.


3. Suppose that

α(NA) > a(NA), β(NB) ≤ b(NB);    (11)

otherwise rename the machines.
a) Let LA be an arbitrary list of the jobs J1, J2, . . . , Jp. If necessary, renumber the other jobs in such a way that (8) holds and make the list LB = (Jp+1, Jp+2, . . . , Jp+q).
b) Start a schedule by assigning the setup of job Jp+1 to start on machine B at time zero. Remove Jp+1 from list LB.
c) At any time when both a machine Q ∈ {A, B} and the server become available, the next job from the list LQ is assigned to be processed on Q. The assigned job is removed from the list. For the assigned job the processing immediately follows the setup. Repeat until both lists are empty. Call the resulting schedule S0 and stop.

It is easy to verify that the running time of Algorithm 2 is O(n log n). We demonstrate that the worst-case performance ratio of Algorithm 2 cannot be better than 3/2. Consider the following instance of the PD2, S‖Cmax problem. There are four jobs assigned to the machines as follows: NA = {J1, J2} and NB = {J3, J4}. The setup and processing times are defined below:

α1 = 1, a1 = ε, α2 = 1, a2 = ε,
β3 = ε, b3 = 1 + ε, β4 = ε, b4 = 1,
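Schedules for this instance can be checked with a small evaluator that, given the order in which the server attends the setups, computes the makespan. The helper below is our own (not from the paper); it evaluates the two server orders considered for this instance.

```python
# A small helper (ours, not the paper's) that evaluates a schedule for
# two dedicated machines with a single server, given the order in which
# the server performs the setups.  Each entry is (machine, setup, proc);
# a job's processing starts immediately after its setup ends.

def makespan_for_server_order(order):
    free = {'A': 0.0, 'B': 0.0}   # machine availability times
    server_free = 0.0
    for machine, setup, proc in order:
        start = max(free[machine], server_free)  # need machine and server
        server_free = start + setup
        free[machine] = start + setup + proc
    return max(free.values())

eps = 0.01
J1 = ('A', 1.0, eps); J2 = ('A', 1.0, eps)
J3 = ('B', eps, 1.0 + eps); J4 = ('B', eps, 1.0)

opt = makespan_for_server_order([J4, J1, J3, J2])   # server order 4, 1, 3, 2
heur = makespan_for_server_order([J3, J1, J2, J4])  # server order 3, 1, 2, 4
```

With ε = 0.01 this yields opt = 2 + 3ε and heur = 3 + 3ε, so the ratio of the two makespans approaches 3/2 as ε shrinks.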

where ε < 1 is an arbitrarily small positive number. The makespan for an optimal schedule S∗ is equal to 2 + 3ε, which is the total workload on machine B. This schedule allocates the jobs to the server in the order 4, 1, 3, 2. The conditions of Step 3 of Algorithm 2 hold, so that schedule S0 may be created in which the jobs are allocated to the server in the order 3, 1, 2, 4, and hence Cmax(S0) = 3 + 3ε. Thus, Cmax(S0)/Cmax(S∗) → 3/2 as ε → 0. We now show that 3/2 is an upper bound on the worst-case performance ratio of Algorithm 2.

THEOREM 4: For schedule S0 created by Algorithm 2 for the PD2, S‖Cmax problem the following bound,

Cmax(S0)/Cmax(S∗) ≤ 3/2,    (12)

holds, and this bound is tight.

PROOF: The tightness of the bound (12) follows from the example above. To prove inequality (12), we split our consideration into several cases.

CASE 1: Suppose that the conditions of Step 2 hold. Without loss of generality, assume that the inequalities (10) are valid; in the other case, the proof is similar. Thus, using bound (5), we have

a(NA) ≤ Cmax(S∗)/2.    (13)


Consider schedule S0 found by Algorithm 1. If this schedule is terminated on machine A, then whenever machine A is idle the server is busy doing setup of some job on the other machine. Thus, IA(S0) ≤ β(NB). Moreover, it follows from (6) that Cmax(S0) = α(NA) + a(NA) + IA(S0). Combining these two relations with (4) gives (12). Similarly, if schedule S0 is terminated on machine B, the same result holds since machines A and B are interchangeable in the statement of the problem, in Algorithm 1 and in the relations (4), (5), (10), and (12).

CASE 2: Suppose that the conditions of Step 2 do not hold. Then (11) of Step 3 holds, possibly after the machines are renamed. Applying (5), we derive that (13) holds. If schedule S0 is terminated on machine A, then the above argument may be applied to give (12). Thus, in the rest of the proof, we assume that schedule S0 is terminated on machine B. For schedule S0, let ∆ denote the total length of all time intervals during which both machines are simultaneously busy. We derive that Cmax(S0) = α(NA) + a(NA) + β(NB) + b(NB) − ∆.

CASE 2.1: Suppose that

∆ ≥ α(NA) + a(NA) − (1/2)(β(NB) + b(NB)) − (3/2)γ,    (14)

where γ is defined by (9). Rewrite (14) as

∆ ≥ α(NA) + a(NA) + β(NB) + b(NB) − (3/2)(β(NB) + b(NB) + γ).

Then, using Lemma 1, we obtain

∆ ≥ α(NA) + a(NA) + β(NB) + b(NB) − (3/2) Cmax(S∗),

which leads to (12).

CASE 2.2: In the remainder of this proof we assume that (14) does not hold. Let the order of the jobs in the list LA be given by some permutation π, so that LA = (Jπ(1), Jπ(2), . . . , Jπ(p)). Consider the last idle interval on machine B in schedule S0 and assume that this idle interval is caused by the setup of a job Jπ(u), 1 ≤ u ≤ p. Let v denote the number of jobs completed on machine B before that idle interval. The total idle time on A before machine B starts the processing of job Jp+v+1 does not exceed β(NB). Therefore, we have that

Cmax(S0) ≤ Σ_{j=1}^{u} (απ(j) + aπ(j)) + β(NB) + Σ_{j=v+1}^{q} bp+j.

Recall that the jobs on A are assumed to be numbered in such a way that (7) holds, although Algorithm 2 does not necessarily scan the jobs in that order. This implies that

Σ_{j=1}^{u} απ(j) ≤ Σ_{j=1}^{u} αj.


Using this observation, we obtain

Cmax(S0) ≤ Σ_{j=1}^{u} αj + a(NA) + β(NB) + Σ_{j=v+1}^{q} bp+j.    (15)

We consider the cases when u ≤ v and when u ≥ v + 1 separately.

CASE 2.2.1: u ≤ v. It follows that

Σ_{j=v+1}^{q} bp+j ≤ Σ_{j=u+1}^{q} bp+j = b(NB) − Σ_{j=1}^{u} bp+j,

and hence from (9) and (15) we derive

Cmax(S0) ≤ a(NA) + β(NB) + b(NB) + γ.

Due to (13) and Lemma 1, the last relation yields (12).

CASE 2.2.2: u ≥ v + 1. Recall that machine B is assumed to terminate schedule S0, so that (6) yields Cmax(S0) = β(NB) + b(NB) + IB(S0). Observe that the number of idle intervals on machine B cannot exceed the number of jobs v completed on B before the last idle interval on that machine. Therefore, these idle intervals are caused by no more than v setups performed on machine A. Since the jobs of set NA are numbered in nonincreasing order of their setup times, we derive the inequality

IB(S0) ≤ Σ_{j=1}^{v} αj.

If Σ_{j=1}^{v} αj ≤ Cmax(S∗)/2, then the required bound (12) follows immediately from (5). Consider the remaining case when Σ_{j=1}^{v} αj > Cmax(S∗)/2 and (14) does not hold, so that

∆ < α(NA) + a(NA) − (1/2)(β(NB) + b(NB)) − (3/2)γ.

Since in schedule S0 machine A is not idle while machine B performs the processing of the jobs Jp+1, . . . , Jp+v, we derive that

∆ ≥ Σ_{j=1}^{v} bp+j.

It follows from (9) that

γ ≥ Σ_{j=1}^{v} (αj − bp+j).

Combining the previous three inequalities, we obtain

Σ_{j=1}^{v} bp+j < α(NA) + a(NA) − (1/2)(β(NB) + b(NB)) − (3/2) Σ_{j=1}^{v} (αj − bp+j).

Rearranging the terms and scaling up by a factor of 2 results in the inequality

Σ_{j=1}^{v} αj + Σ_{j=v+1}^{q} bp+j ≤ 2 Σ_{j=v+1}^{p} αj + 2a(NA) − β(NB).

Rewriting (15) and then substituting the latter inequality yields

Cmax(S0) ≤ (Σ_{j=1}^{v} αj + Σ_{j=v+1}^{q} bp+j) + Σ_{j=v+1}^{u} αj + a(NA) + β(NB)
         ≤ 2 Σ_{j=v+1}^{p} αj + 3a(NA) + Σ_{j=v+1}^{u} αj
         ≤ 3 Σ_{j=1}^{p} αj + 3a(NA) − 3 Σ_{j=1}^{v} αj.

Applying bound (5) and the condition Σ_{j=1}^{v} αj ≥ Cmax(S∗)/2 gives the required inequality (12). Thus, we have proved that schedule S0 found by Algorithm 2 is no more than 3/2 times worse than an optimal schedule. This completes the proof of the theorem.
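The lower bounds (4), (5), and (9) that drive the analysis above are straightforward to compute. The sketch below is our own (function and variable names are assumptions); an instance is given as (setup, processing) pairs for each machine.

```python
# Compute the lower bounds (4), (5) and the bound of Lemma 1 (using the
# quantity gamma of (9)) for an instance of the two-machine problem.
# jobs_a and jobs_b hold (setup, processing) pairs; names are ours.

def lower_bounds(jobs_a, jobs_b):
    alpha = sum(s for s, _ in jobs_a); a = sum(p for _, p in jobs_a)
    beta = sum(s for s, _ in jobs_b); b = sum(p for _, p in jobs_b)

    # numbering (7): setups on A nonincreasing; numbering (8): processing
    # times on B nonincreasing, padded with zeros when p > q
    alphas = sorted((s for s, _ in jobs_a), reverse=True)
    bs = sorted((p for _, p in jobs_b), reverse=True)
    bs += [0.0] * (len(alphas) - len(bs))

    gamma, prefix = 0.0, 0.0
    for ak, bk in zip(alphas, bs):
        prefix += ak - bk              # partial sum of (alpha_j - b_{p+j})
        gamma = max(gamma, prefix)

    lb4 = alpha + beta                 # bound (4): all setups
    lb5 = max(alpha + a, beta + b)     # bound (5): machine workloads
    lb9 = beta + b + gamma             # Lemma 1
    return lb4, lb5, lb9
```

The largest of the three returned values is a valid lower bound on the optimal makespan, against which any heuristic schedule can be compared.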

7. POLYNOMIALLY SOLVABLE CASES

We now present several polynomially solvable cases of the PD2, S‖Cmax problem. In this section, for finding optimal schedules, we often use Algorithm 1 modified to ensure that the server starts with machine B. Since no confusion arises, we still refer to this modified algorithm as Algorithm 1. Throughout this section we assume that NA = {J1, J2, . . . , Jp}, NB = {Jp+1, Jp+2, . . . , Jp+q}, and p ≤ q; otherwise, the machines may be renamed. Unless stated otherwise, in all cases considered in this section the following conditions

aj ≤ βk + bk and bk ≤ αj + aj    (16)

simultaneously hold for all j ∈ {1, 2, . . . , p} and k ∈ {p + 1, . . . , n}. Let S1 be the schedule found by Algorithm 1. It can be verified that (16) guarantees that in schedule S1 the server alternates between the machines as long as there are jobs still to be scheduled on machine A. Therefore, schedule S1 may be described by the permutation

(Jp+1, J1, Jp+2, J2, . . . , J2p, Jp, J2p+1, J2p+2, . . . , Jn),    (17)

according to which the server sets up the jobs, with each machine starting the processing of a job immediately after its setup is completed. Recall that in Section 3 we proved that the problem is NP-hard in the strong sense even if all processing times are equal, i.e., aj = bk = a for all j ∈ {1, 2, . . . , p} and k ∈ {p + 1, . . . , n}, or else if all setup times are equal, i.e., αj = βk = α. We now examine the case when both conditions hold.


PROPOSITION 1: The PD2, S|αj = βk = α, aj = bk = a|Cmax problem in which all processing times are equal and all setup times are equal is polynomially solvable.

PROOF: We show that the running time required for finding an optimal schedule S∗ is polynomial with respect to the input length of the problem. The conditions (16) obviously hold. Schedule S1 represented by the permutation (17) found by Algorithm 1 can be proved to be optimal. Notice, however, that finding this representation takes O(n) time, and this time cannot be considered as polynomial. Indeed, any instance of the problem may be given by four numbers: p, q, α, and a. Thus, the length of the input does not exceed log p + log q + log α + log a. Notice that n = p + q. On the other hand, the same schedule S1 can be represented by a function ϕ, where ϕ(j) is equal to the starting time of the setup of job Jj, j = 1, 2, . . . , n. We have two cases to consider, when α ≤ a and when α > a.

(a) α ≤ a. Simple calculations result in the formula

ϕ(j) = { jα + (j − 1)a,                 if j ≤ p,
       { (j − p − 1)α + (j − p − 1)a,   if j > p.

It follows that

Cmax(S1) = { q(α + a),       if q > p,
           { q(α + a) + α,   if q = p,

which implies that S1 is an optimal schedule. To see this, notice that the makespan of S1 is equal to the total workload on machine B if q > p. Otherwise, if q = p, the value q(α + a) + α cannot be reduced because the workload of each machine equals q(α + a) and one of them cannot start before time α.

(b) α > a. In this case we obtain

ϕ(j) = { (2j − 1)α,                     if j ≤ p,
       { 2(j − p − 1)α,                 if p < j ≤ 2p + 1,
       { (j − 1)α + (j − 2p − 1)a,      if j > 2p + 1.

It follows that

Cmax(S1) = { nα + (q − p)a,   if q > p,
           { nα + a,          if q = p.

If q > p, then the makespan of S1 meets the lower bound established in Lemma 1, and so S1 is an optimal schedule. Observe that the sum of all setup times plus the smallest processing time is an obvious lower bound on the makespan of any feasible schedule. Therefore, if q = p, the value nα + a cannot be reduced and again S1 is an optimal schedule. Every value of function ϕ can be found in constant time, which implies that this representation of an optimal schedule is polynomial.

PROPOSITION 2: The PD2, S‖Cmax problem is solvable in O(n log n) time, provided that the following conditions,

min{αj | j ∈ {1, 2, . . . , p}} ≥ max{bl | l ∈ {p + 1, p + 2, . . . , n}},
min{βj | j ∈ {p + 1, p + 2, . . . , n}} ≥ max{aj | j ∈ {1, 2, . . . , p}},

hold.


PROOF: If necessary, renumber the jobs so that a1 ≥ a2 ≥ · · · ≥ ap and bp+1 ≥ bp+2 ≥ · · · ≥ bn, and assume that Algorithm 1 scans the jobs according to these sequences. Observe that the problem satisfies the conditions (16), so that schedule S1 found by Algorithm 1 may be described by the permutation (17). If q > p, then in schedule S1 the server is not idle while there are jobs on both machines A and B yet to be completed, since the processing of a job on some machine takes less time than the setup of another job being done in parallel on the other machine. We deduce that

Cmax(S1) = Σ_{j=1}^{p} αj + Σ_{l=p+1}^{n} βl + Σ_{k=2p+1}^{n} bk.    (18)

Notice that this value cannot be reduced because of the numbering of the jobs assigned to machine B. Moreover, the right-hand side of Eq. (18) is equal to β(NB) + b(NB) + Σ_{j=1}^{p} (αj − bp+j), which by Lemma 1 is a lower bound on the optimal makespan. Therefore, the schedule S1 is optimal. Observe that the assumption that a1 ≥ a2 ≥ · · · ≥ ap is used here only for consistency reasons. Algorithm 1 may scan the jobs assigned to machine A in any order and still will find a schedule with the makespan satisfying (18). If q = p, then running Algorithm 1 gives

Cmax(S1) = Σ_{l=1}^{p} αl + Σ_{l=p+1}^{n} βl + ap,

which is the best schedule among those terminated by machine A. Again notice that in fact a special order of jobs on machine B, and, moreover, the full ordering a1 ≥ a2 ≥ · · · ≥ ap are not required; it is sufficient to ensure that ap is the smallest processing time on machine A. Reverse the roles of machine A and machine B and run Algorithm 1 again. Let S1′ denote the schedule found by this second run of Algorithm 1. This time the server starts on machine A. Since p = q, schedule S1′ terminates on machine B, so that

Cmax(S1′) = Σ_{l=1}^{p} αl + Σ_{l=p+1}^{n} βl + b2p.

Notice that for the second run of Algorithm 1 we only need to ensure that b2p is the smallest processing time on machine B; the other assumptions regarding the orders of jobs are immaterial. The better of the two schedules S1 and S1′ yields schedule S∗ with

Cmax(S∗) = Σ_{l=1}^{p} αl + Σ_{l=p+1}^{n} βl + min{aj, bp+j | j = 1, 2, . . . , p}.

This schedule is optimal, since the makespan equals the total setup time plus the smallest processing time. The running time required for finding an optimal schedule does not exceed O(n log n). If q > p, then the above construction, optimality argument and the running time consideration may be applied to the following more general case. Notice that in this case we do not need the conditions (16).


PROPOSITION 3: The PD2, S‖Cmax problem with q > p is solvable in O(n log n) time, provided that the jobs can be numbered in such a way that

α1 ≥ bp+1, α2 ≥ bp+2, . . . , αp ≥ b2p,
α1 ≥ α2 ≥ · · · ≥ αp, bp+1 ≥ bp+2 ≥ · · · ≥ bn,

and min{βk | k = p + 1, . . . , n} ≥ max{aj | j = 1, . . . , p}.

Moreover, the following special case within Proposition 2 is worth stating separately as a less complex algorithm may be used to find an optimal solution.

PROPOSITION 4: The PD2, S|aj = bk = 1, αj ≥ 1, βk ≥ 1|Cmax problem with unit processing times is solvable in O(n) time.

In this case we may apply the algorithm described in the proof of Proposition 2 but omitting the ordering of the processing times. This reduces the required running time to O(n). Observe that this is in contrast to the problem PD2, S|aj = bk = a|Cmax, which has been shown to be NP-hard in the strong sense in Theorem 1. Notice also that in the case of two undedicated parallel machines the problem with unit processing times is NP-hard in the ordinary sense, as proved by Hall et al. in [11].
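The closed-form setup starting times ϕ(j) from the proof of Proposition 1 can be cross-checked against a direct simulation of the server order (17). The sketch below is our own code (the simulation helper and its names are assumptions); jobs 1..p run on machine A, jobs p+1..p+q on machine B, with common setup time α and common processing time a.

```python
# Cross-check the closed-form phi(j) from the proof of Proposition 1
# against a direct simulation of the server order (17).  Code is ours.

def phi(j, p, alpha, a):
    """Closed-form start of the setup of job j (1-based), as in the proof."""
    if alpha <= a:
        return j * alpha + (j - 1) * a if j <= p else (j - p - 1) * (alpha + a)
    if j <= p:
        return (2 * j - 1) * alpha
    if j <= 2 * p + 1:
        return 2 * (j - p - 1) * alpha
    return (j - 1) * alpha + (j - 2 * p - 1) * a

def simulate(p, q, alpha, a):
    """Setup start times produced by serving the jobs in the order (17)."""
    # order (17): J_{p+1}, J_1, J_{p+2}, J_2, ..., J_{2p}, J_p, then the
    # remaining jobs on machine B
    order = [x for k in range(p) for x in (p + 1 + k, 1 + k)]
    order += list(range(2 * p + 1, p + q + 1))
    free = {'A': 0, 'B': 0}
    server_free, starts = 0, {}
    for j in order:
        m = 'A' if j <= p else 'B'
        start = max(free[m], server_free)   # need both machine and server
        starts[j] = start
        server_free = start + alpha
        free[m] = start + alpha + a
    return starts
```

For instance, with p = 2, q = 3 the simulated start times agree with ϕ in both regimes, e.g. (α, a) = (1, 2) for case (a) and (3, 1) for case (b).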

8. SHOP SCHEDULING PROBLEMS

In this section we consider some shop scheduling problems with a single server. Since all relevant classical scheduling problems with three or more machines are NP-hard (see Table 2), we expect to derive new complexity results for two machine shop problems. We demonstrate that most of these problems are NP-hard, except the two machine no-wait flow shop problem, which is shown to be polynomially solvable. Observe that the PD2, S‖Cmax problem can be seen as either the F2, S‖Cmax problem or the O2, S‖Cmax problem, provided that only one operation of each job has nonzero setup and/or processing times. Thus, Theorem 1 implies the following statement.

COROLLARY 1: Both the F2, S‖Cmax problem and the O2, S‖Cmax problem are NP-hard in the strong sense.

We now concentrate on the no-wait problems. First, notice that the O2, S|no-wait|Cmax problem is a generalization of the O2|no-wait|Cmax problem with no server. Since the latter problem is NP-hard in the strong sense (see [18]), the following statement holds.

THEOREM 5: The O2, S|no-wait|Cmax problem is NP-hard in the strong sense.

The remainder of this section is devoted to the F2, S|no-wait|Cmax problem. Recall that the complexity of the flow shop problems with no-wait in process (without a server) essentially depends on the interpretation of the zero processing times. If zero processing time of an operation is understood so that the corresponding operation is missing, then the F2|no-wait|Cmax problem is NP-hard in the strong sense [18]. Therefore, under this interpretation of zero processing (and setup) times, the F2, S|no-wait|Cmax problem has the same complexity status. On the other


hand, if zero processing times are treated as arbitrarily small positive values, then for the F2|no-wait|Cmax problem in any schedule both machines process the jobs in the same sequence and the problem is solvable in O(n log n) time by the Gilmore–Gomory algorithm [8]. We show that, under the latter interpretation of zero times, this algorithm can be extended for solving the F2, S|no-wait|Cmax problem. We start with the algorithm that finds a schedule S for the F2, S|no-wait|Cmax problem associated with a given permutation π = (π(1), π(2), . . . , π(n)) of jobs. In other words, given a permutation π, we show how to determine the start or completion time of each setup and processing operation so that: (i) both machines process the jobs according to the permutation π; (ii) no server conflict occurs; (iii) the processing phase of each job on machine B starts at the time the processing of that job on machine A is completed; and (iv) the makespan of the resulting schedule cannot be reduced. For an arbitrary schedule S, let Cja(S) and Cjb(S) denote the completion times of the processing of job Jj on machines A and B, respectively. Similarly, Cjα(S) and Cjβ(S) denote the completion times of the setup of job Jj on machines A and B, respectively. The starting times for these processing and setup operations are denoted by Rja(S), Rjb(S), Rjα(S), and Rjβ(S), correspondingly. When it is clear which schedule is being considered, we may write Cja, etc., omitting the reference to S. Clearly, the start and completion times of an operation differ by the duration of that operation, e.g., Cja = Rja + aj. Notice that the no-wait condition implies that Cja = Rjb for every job Jj in any feasible schedule. We denote τj = Cja = Rjb and call this time the changeover time for job Jj. Let the jobs be considered in the sequence π = (π(1), π(2), . . . , π(n)). The processing of job Jπ(1) on machine B may not start earlier than the larger of the two completion times on the two machines, so that

τπ(1) ≥ max{απ(1) + aπ(1), απ(1) + βπ(1)}.
For a job Jπ(j), 1 < j ≤ n, consider the time interval [τπ(j−1), τπ(j)]. In this interval machine A performs the setup and the processing of job Jπ(j), while machine B performs the processing of job Jπ(j−1) followed by the setup of job Jπ(j). As above, the processing of job Jπ(j) on machine B may not start earlier than each of two events happens: (i) the processing of that job is completed on machine A; (ii) the setup phase of that job is completed on machine B. This implies that

τπ(j) ≥ τπ(j−1) + max{απ(j) + aπ(j), απ(j) + βπ(j), bπ(j−1) + βπ(j)}.

Since we want to find the schedule with the minimum makespan, we need to guarantee that the values τπ(j) are as small as possible for all j = 1, 2, . . . , n. Defining bπ(0) = τπ(0) = 0, we can write the smallest τπ(j) as

τπ(j) = τπ(j−1) + Wπ(j), j = 1, 2, . . . , n,    (19)

where

Wπ(j) = max{απ(j) + aπ(j), απ(j) + βπ(j), bπ(j−1) + βπ(j)}.    (20)


The following algorithm determines the completion times for all operations in a schedule S that is associated with a permutation π and is feasible for the F2, S|no-wait|Cmax problem.

Algorithm 3
1. Define bπ(0) = τπ(0) = 0 and compute the values τπ(j) and Wπ(j) by formulae (19) and (20).
2. For j from 1 to n compute the following:

Cβπ(j) = Caπ(j) = Rbπ(j) = τπ(j),
Rβπ(j) = Cβπ(j) − βπ(j),
Raπ(j) = Caπ(j) − aπ(j),
Cαπ(j) = min{Rβπ(j), Raπ(j)}.

Stop.

Algorithm 3 is illustrated in Figure 4. Observe the following properties of a schedule found by Algorithm 3. First, all setups are scheduled to be started as late as possible, provided that the values of τπ(j), j = 1, 2, . . . , n, are maintained. Another feature of this schedule is that in each time interval [Rβπ(j), τπ(j)] both machines deal with job Jπ(j) alone: machine B performs the setup, while machine A performs the processing and, possibly, is idle as, e.g., shown in Figure 1(b).

Let us transform schedule S found by Algorithm 3 by removing all time intervals [Rβπ(j), τπ(j)], j = 1, 2, . . . , n. The makespan of the resulting schedule is β(N) less than that of S. Notice that β(N) is a constant that does not depend on the permutation π associated with S. Also observe that the resulting schedule can be viewed as a feasible schedule for the F2|no-wait|Cmax problem of processing some artificial jobs (or at least different from the original ones). An optimal permutation of these artificial jobs can be found by the Gilmore–Gomory algorithm (see, e.g., [8]). Based on that permutation, we can determine a schedule which is optimal for the original F2, S|no-wait|Cmax problem. Algorithm 4 and Theorem 6 below give formal statements of these intuitive arguments.

Algorithm 4
1. Associate a job Jj of the original instance of the F2, S|no-wait|Cmax problem with an artificial job J̄j, with the processing times āj and b̄j on machines A and B, respectively, defined as

āj = αj + max{aj − βj, 0},  b̄j = bj,  j = 1, 2, . . . , n.

Denote the resulting instance of the F2|no-wait|Cmax problem by Problem P̄.
2. Using the Gilmore–Gomory algorithm find a permutation π∗ that determines an optimal schedule S̄∗ in Problem P̄.
3. Run Algorithm 3 to find a schedule S∗ for the original F2, S|no-wait|Cmax problem that is associated with permutation π∗. Stop.

The running time of Algorithm 4 is determined by Step 2 and does not exceed O(n log n).

THEOREM 6: Schedule S∗ found by Algorithm 4 is optimal for the F2, S|no-wait|Cmax problem.
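The reduction behind Algorithm 4 can be illustrated numerically: for any fixed permutation, the makespan computed from (19)–(20) equals the no-wait two-machine makespan of the artificial jobs plus β(N). The sketch below is our own code; it checks this identity on one permutation rather than calling a Gilmore–Gomory implementation, and the job tuples are an assumed encoding.

```python
# Check, for a fixed permutation, the identity Cmax(S) = Cmax(S_bar) + beta(N)
# that underlies Algorithm 4.  Jobs are (alpha, a, beta, b) tuples; code is ours.

def server_makespan(seq):
    """Makespan via the changeover recurrence (19)-(20)."""
    tau, b_prev = 0.0, 0.0
    for alpha, a, beta, b in seq:
        tau += max(alpha + a, alpha + beta, b_prev + beta)
        b_prev = b
    return tau + b_prev

def artificial_makespan(seq):
    """No-wait two-machine makespan of the artificial jobs of Step 1."""
    tau, b_prev = 0.0, 0.0
    for alpha, a, beta, b in seq:
        a_bar = alpha + max(a - beta, 0)   # a_bar_j = alpha_j + max{a_j - beta_j, 0}
        tau += max(a_bar, b_prev)          # no-wait changeover recurrence
        b_prev = b                         # b_bar_j = b_j
    return tau + b_prev

jobs = [(1, 2, 1, 3), (2, 1, 1, 2)]
beta_n = sum(j[2] for j in jobs)           # beta(N), total setup time on B
```

The identity holds term by term because Wπ(j) − βπ(j) = max{απ(j) + aπ(j) − βπ(j), απ(j), bπ(j−1)} = max{āπ(j), b̄π(j−1)}, so each interval of the recurrence shrinks by exactly βπ(j).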

Figure 4. A part of schedule S found by Algorithm 3.

PROOF: Take an arbitrary permutation π = (π(1), π(2), . . . , π(n)) and run Algorithm 3 to find schedule S associated with that permutation. We first show how S is related to a permutation schedule for the associated problem defined in Step 1. In schedule S, each time interval [τπ(j−1), τπ(j)], for 1 ≤ j ≤ n, between two consecutive changeover times is of length Wπ(j) and may be divided into two parts, [τπ(j−1), Rβπ(j)] and [Rβπ(j), τπ(j)]. Consider the activity done by machine A in the time interval [τπ(j−1), Rβπ(j)]. Figure 1 illustrates various cases. Machine A may have some idle time at the beginning of that interval, it then starts and completes the setup of job Jπ(j), and finally, if aπ(j) > βπ(j), it performs a part of the processing of job Jπ(j).


The last two of these activities done by machine A in the interval [τπ(j−1), Rβπ(j)] can be interpreted as the processing of a new job J̄π(j). Start the processing of J̄π(j) on B at time τπ(j); its processing takes bπ(j) time. Now remove all intervals [Rβπ(j), τπ(j)] from the schedule. Define τ̄π(0) = 0 and τ̄π(j) = τ̄π(j−1) + Wπ(j) − βπ(j), for j = 1, . . . , n. As a result, we obtain a schedule S̄ that is feasible for the F2|no-wait|Cmax problem of processing the jobs J̄1, J̄2, . . . , J̄n (Problem P̄). In this no-wait schedule, in each time interval [τ̄π(j−1), τ̄π(j)] machine A processes job J̄π(j) and machine B processes job J̄π(j−1), 1 ≤ j ≤ n. Since schedule S̄ has been obtained from schedule S by reducing the length of each of the n time intervals [τπ(j−1), τπ(j)] by βπ(j), we deduce that

Cmax(S) = Cmax(S̄) + β(N).

Now consider the permutation π∗ derived in Step 2 of Algorithm 4. The associated schedule S̄∗ is optimal for Problem P̄, while S∗ denotes the schedule for the original F2, S|no-wait|Cmax problem found by Step 3 of Algorithm 4. It follows that

Cmax(S) ≥ Cmax(S̄∗) + β(N) = Cmax(S∗),

so that S∗ is an optimal schedule.

9. CONCLUSION

In this paper we have addressed the issues of computational complexity and approximation of various scheduling problems with a common single server. The obtained complexity results are summarized in Table 3 and may be compared to those already known for related problems, presented in Table 2. In Table 3, we write ''NP'' if the corresponding problem is NP-hard in the strong sense; otherwise, we present the running time of an algorithm that solves the problem; ''O(1)'' indicates that for the problem in question finding a time slot for each operation takes constant time. For the latter problem, our algorithm finds a representation of an optimal schedule in time that is polynomial with respect to the length of the problem input, not just in n and m.

Table 3. Complexity of shop scheduling problems with a single server

The Model                                        Complexity    Reference
PD2, S|aj = bk = a|Cmax                          NP            Theorem 1
PD2, S|αj = βk = α|Cmax                          NP            Theorem 2
PD2, S|aj = bk = a, αj = βk = α|Cmax             O(1)          Proposition 1
PD2, S|aj = bk = 1, αj ≥ 1, βk ≥ 1|Cmax          O(n)          Proposition 4
F2, S‖Cmax                                       NP            Corollary 1
F2, S|no-wait|Cmax                               O(n log n)    Theorem 6
O2, S‖Cmax                                       NP            Corollary 1
O2, S|no-wait|Cmax                               NP            Theorem 5

Finally, we turn to open problems raised by this research. At this stage we can say nothing about the complexity of the problem for dedicated parallel machines with both equal setup times and equal processing times, for more than two machines. Moreover, even for the two machine

case the complexity status of the problem with unit setup times is still unknown. It is also an interesting research goal to design an approximation algorithm for the problem with m parallel dedicated machines that delivers a worst-case performance ratio better than 2.

ACKNOWLEDGMENTS

The authors are grateful to the referees for their constructive comments and for drawing to our attention the paper [2]. This research was partly supported by the International Association for the Promotion of Cooperation with Scientists from the Independent States of the Former Soviet Union, INTAS 93257-Ext. The second author was also supported by the International Science and Technology Center, Project B-104-98. The final version of this paper was written when the second author was visiting the University of Greenwich. Support of this visit from The Royal Society and the University of Greenwich is gratefully acknowledged.

REFERENCES

[1] K.R. Baker, Scheduling groups of jobs in the two-machine flow shop, Math Comput Model 13 (1990), 29–36.
[2] J. Blazewicz, P. Dell'Olmo, and M. Drozdowski, Scheduling of server–client applications, Int Trans Oper Res 6 (1999), 345–363.
[3] J. Blazewicz, J.K. Lenstra, and A.H.G. Rinnooy Kan, Scheduling subject to resource constraints, Discrete Appl Math 5 (1983), 11–24.
[4] T.C.E. Cheng, G. Wang, and C. Sriskandarajah, One-operator two-machine flow-shop scheduling with setup and dismounting times, Comput Oper Res 26 (1999), 715–730.
[5] M.R. Garey and D.S. Johnson, Computers and intractability: A guide to the theory of NP-completeness, Freeman, San Francisco, 1979.
[6] M.R. Garey, D.S. Johnson, and R. Sethi, The complexity of flowshop and jobshop scheduling, Math Oper Res 1 (1976), 117–129.
[7] P.C. Gilmore and R.E. Gomory, Sequencing a one-state variable machine: A solvable case of the traveling salesman problem, Oper Res 12 (1964), 655–679.
[8] P.C. Gilmore, E.L. Lawler, and D.B. Shmoys, ''Well solved special cases,'' in The traveling salesman problem: A guided tour of combinatorial optimization, E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys (Editors), Wiley, New York, 1986, pp. 87–143.
[9] T. Gonzalez and S. Sahni, Open shop schedules to minimize finish time, J Assoc Comput Mach 23 (1976), 665–679.
[10] J.N.D. Gupta, V.A. Strusevich, and C.M. Zwaneveld, Two-stage no-wait scheduling models with setup and removal times separated, Comput Oper Res 24 (1997), 1025–1031.
[11] N.G. Hall, C.N. Potts, and C. Sriskandarajah, Parallel machine scheduling with a common server, Working Paper Series WP 94-21, Max M. Fisher College of Business, The Ohio State University, Columbus, OH, 1994.
[12] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Nav Res Logistics Quart 1 (1954), 61–68.
[13] W. Kubiak, C. Sriskandarajah, and K. Zaras, A note on the complexity of open-shop scheduling problems, INFOR 29 (1991), 284–294.
[14] E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys, ''Sequencing and scheduling: Algorithms and complexity,'' in Handbooks in operations research and management science, Vol. 4: Logistics of production and inventory, S.C. Graves, A.H.G. Rinnooy Kan, and P.H. Zipkin (Editors), North-Holland, Amsterdam, 1993, pp. 455–522.
[15] R. Logendran and C. Sriskandarajah, Two-machine group scheduling problem with blocking and anticipatory setups, Eur J Oper Res 69 (1993), 467–481.


[16] I.N. Lushchakova and V.A. Strusevich, Two-stage systems with nonfixed routes and resource constraints, Zh Vychisl Mat Mat Fiz 29 (1989), 1393–1407 (in Russian).
[17] H. Röck, The three-machine no-wait flow-shop problem is NP-complete, J Assoc Comput Mach 31 (1984), 336–345.
[18] S. Sahni and Y. Cho, Complexity of scheduling shop with no wait in process, Math Oper Res 4 (1979), 448–457.
[19] V.A. Strusevich, Two-machine open shop scheduling problem with setup, processing and removal times separated, Comput Oper Res 20 (1993), 597–611.
[20] T. Yoshida and K. Hitomi, Optimal two-stage production scheduling with setup times separated, AIIE Trans 11 (1979), 261–263.
