Acta Informatica 49(1) (2012) 1–14

An Efficient Algorithm for Finding Ideal Schedules

Edward G. Coffman Jr.∗
Department of Electrical Engineering
Columbia University
New York, USA

Dariusz Dereniowski
Department of Algorithms and System Modeling
Gdańsk University of Technology
Gdańsk, Poland

Wiesław Kubiak
Faculty of Business Administration
Memorial University
St. John's, Canada

Abstract. We study the problem of scheduling unit-execution-time (UET) jobs with release dates and precedence constraints on two identical processors. We say that a schedule is ideal if it minimizes both maximum and total completion time simultaneously. We give an instance of the problem where the min-max completion time is exceeded in every preemptive schedule that minimizes total completion time for that instance, even though the precedence constraints form an intree. This proves that ideal schedules do not exist in general when preemptions are allowed. On the other hand, we prove that, when preemptions are not allowed, ideal schedules do exist for general precedence constraints, and we provide an algorithm for finding ideal schedules in O(n^3) time, where n is the number of jobs. In finding such ideal schedules we resolve a conjecture of Baptiste and Timkovsky [1]. Further, our algorithm for finding min-max completion-time schedules requires only O(n^3) time, while the most efficient solution to date has required O(n^9) time.

Keywords: ideal schedules, parallel processors, scheduling algorithms

1 Introduction

The problem of scheduling UET jobs with precedence constraints has repeatedly attracted the attention of scheduling theory researchers, ever since the matching-based algorithm of Fujii, Kasami and Ninomiya [6] and the job-labeling algorithm of Coffman and Graham [2]. For example, Garey and Johnson devised O(n^2) and O(n^2.81) algorithms for minimizing maximum lateness for jobs without release dates [8] and with release dates [9], respectively. Gabow [7] designed an almost linear-time algorithm for the minimum-makespan problem. Recently, Baptiste and Timkovsky [1] switched the focus to the minimization of total completion time and presented a shortest-path optimization algorithm for scheduling jobs with release dates. They also conjectured that there always exist schedules minimizing both maximum completion time and total completion time for jobs with release dates. This is known to hold true for equal release dates both without preemptions, Coffman and Graham [2], and with preemptions, Coffman, Sethuraman, and Timkovsky [3].

It can be argued that, historically, the performance objectives of greatest interest in combinatorial scheduling theory have been the minimum makespan (i.e., maximum job completion time Cmax) and the minimum total completion (or flow) time ΣCi. The advantages of the two measures are self-evident, but they are essentially orthogonal, so it comes as no surprise that schedules minimizing both simultaneously do not always exist. Schedules that do succeed in minimizing both are called ideal schedules, and problems for which ideal schedules exist for every instance are said to be ideal.

This paper returns to the conjecture of Baptiste and Timkovsky and proves that it holds true for the nonpreemptive problem with arbitrary integer release dates. However, the analogous statement fails to hold if preemptions are allowed, even for intree precedence constraints.
Of equal if not greater interest is the fact that the proof converting the Baptiste-Timkovsky conjecture into a theorem leads to an O(n^3)-time algorithm for the minimization of total completion time for jobs with release dates; this is a major improvement over the O(n^9) algorithm in [1].

∗Corresponding author: Ph: 212-854-2152, Fax: 212-932-9421, email: [email protected]

In this paper we consider the scheduling problems P2|pmtn, intree, rj, pj = 1|ΣCi and P2|prec, rj, pj = 1|ΣCi, and the corresponding problems of Cmax minimization; we use the well-known three-field notation introduced by Graham et al. [10]. In both cases the set of jobs is denoted by J. Each Jj ∈ J has an integer release date rj. The execution time of each job Jj ∈ J equals 1, i.e., pj = 1 in the standard notation of scheduling problems. Given a schedule S, Cmax(S) denotes the maximum completion time of S. The total completion time of S is the sum of the completion times of all jobs in S.

The plan for the paper is as follows. We prove by example in Section 2 that the problem P2|pmtn, intree, rj, pj = 1|ΣCi is not ideal. We note in passing that solving this total-completion-time problem in polynomial time remains an open problem. Coffman, Sethuraman, and Timkovsky [3] show that P2|pmtn, prec, pj = 1|ΣCi is an ideal problem. However, this paper focuses on problems with general release dates. In particular, in Section 3 we consider the problem P2|prec, rj, pj = 1|ΣCi with no preemptions allowed, a problem studied earlier by Baptiste and Timkovsky [1]. Section 3.1 proves that, given any instance of this problem, each minimum total completion time schedule is ideal, and consequently P2|prec, rj, pj = 1|ΣCi is ideal. This result resolves a conjecture of Baptiste and Timkovsky in [1]. Then, in Section 3.2, we give an algorithm for finding ideal schedules in O(n^3) time. This algorithm greatly improves on the O(n^9) time complexity of the algorithm for finding min ΣCi schedules using the shortest-path approach presented in [1].
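To fix ideas, the objects just introduced can be sketched in a few lines of code. The slot-list representation below is our own illustrative choice, not notation from the paper: a nonpreemptive two-processor UET schedule is a list of time slots, each holding the at most two jobs run in [i, i+1].

```python
def cmax(slots):
    """Maximum completion time: the last nonempty slot [i, i+1] ends at i+1."""
    return max(i + 1 for i, s in enumerate(slots) if s)

def total_completion_time(slots):
    """Sum of completion times: a job run in slot [i, i+1] completes at i+1."""
    return sum((i + 1) * len(s) for i, s in enumerate(slots))

def feasible(slots, release, prec):
    """Check processor capacity, release dates and precedence constraints;
    `prec` is a set of pairs (a, b) meaning a must complete before b starts."""
    start = {j: i for i, s in enumerate(slots) for j in s}
    return (all(len(s) <= 2 for s in slots)
            and all(start[j] >= release[j] for j in start)
            and all(start[a] + 1 <= start[b] for a, b in prec))

# Toy instance: four unit jobs, a ≺ c, all released at 0.
slots = [{"a", "b"}, {"c", "d"}]
release = {j: 0 for j in "abcd"}
assert feasible(slots, release, {("a", "c")})
assert cmax(slots) == 2 and total_completion_time(slots) == 6
```

Both objectives of an ideal schedule are thus simple functionals of the same slot list.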

2 P2|pmtn, intree, rj, pj = 1|ΣCi is not ideal

In the following we give an example to prove that there exists an instance for which no min-max completion-time preemptive schedule for UET jobs with given release dates and intree precedence constraints simultaneously minimizes the total completion time. Moreover, the maximum completion time of every min ΣCi preemptive schedule exceeds the maximum completion time of the ideal nonpreemptive schedules for this instance.

[Figure 1 appears here. Release dates: a, b, c at 0; h at 1; fi and gi at i+1 for i = 1, ..., k; r at k+2.]

Figure 1: (a) intree precedence relation among UET tasks (release dates are given in brackets); (b) min Cmax preemptive and, at the same time, ideal nonpreemptive schedule S′; (c) a preemptive schedule S″.

Theorem 1 Given the intree in Figure 1(a), let S be a min-max completion time preemptive schedule, and let F be a min ΣCi preemptive schedule for this instance. Then Cmax(S) < Cmax(F). Moreover, the difference between the total completion times of S and F can be made arbitrarily large.

Proof: Figure 1(b) gives a schedule S′ of maximum completion time k+3. Due to the chain of jobs f1, ..., fk, r, there is no schedule, preemptive or not, shorter than k+3. Observe that S′ minimizes total completion time for the nonpreemptive case, because one unit of idle time cannot be avoided within the first two time slots. Thus this schedule is an ideal nonpreemptive schedule. In the following, C(x, S) denotes the completion time of job x in schedule S. Consider the schedule S″ in Figure 1(c), and note that

    Σ_{x∈J} C(x, S″) = Σ_{x∈J} C(x, S′) − k/2.    (1)

We now argue that there exists no preemptive schedule of length k+3 that minimizes total completion time. Suppose there is a preemptive schedule F′ of length k+3 that minimizes total completion time. Then the jobs fi, gi, r, i = 1, ..., k, must be executed in F′ in the same time slots as in S′, which follows by a simple induction on k. Moreover, C(a, F′) + C(b, F′) + C(c, F′) ≥ C(a, S′) + C(b, S′) + C(c, S′) = 4. Finally, the precedence constraints between h and a, b, c imply that the job h cannot complete prior to 2 in F′. Thus, since the execution slots of the jobs fi, gi, r, i = 1, ..., k, remain as in S′, we have C(h, F′) > k+2. (Even if h starts prior to 2 in F′, it is preempted at 2 or earlier and resumes at k+2 or later.) Therefore,

    Σ_{x∈J} C(x, F′) > Σ_{x∈J} C(x, S′) − 1.

By (1), Σ_{x∈J} C(x, F′) − Σ_{x∈J} C(x, S″) > k/2 − 1. Therefore, F′ does not minimize the total completion time for any k ≥ 2, a contradiction. Note that the difference Σ_{x∈J} C(x, F′) − Σ_{x∈J} C(x, S″) > k/2 − 1 can be made arbitrarily large by choosing k sufficiently large. □
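The chain lower bound invoked at the start of the proof ("due to the chain of jobs f1, ..., fk, r") holds for any chain of UET jobs: each job contributes its release date, one unit for itself, and one unit for every job that must follow it. A small sketch (the helper below is ours, not part of the paper):

```python
def chain_lower_bound(releases):
    """Lower bound on Cmax for a chain j1 ≺ j2 ≺ ... ≺ jm of unit jobs:
    ji starts no earlier than releases[i-1], needs one unit itself, and the
    m - i jobs after it need one unit each, even with preemption."""
    m = len(releases)
    return max(r + 1 + (m - i) for i, r in enumerate(releases, start=1))

# The chain f1, ..., fk, r of Figure 1 has release dates 2, 3, ..., k+1, k+2,
# which forces Cmax >= k + 3 for every k, matching the makespan of S'.
for k in range(1, 10):
    releases = [i + 1 for i in range(1, k + 1)] + [k + 2]
    assert chain_lower_bound(releases) == k + 3
```

Since the bound applies to preemptive schedules as well, no schedule for the instance of Figure 1 can be shorter than k+3.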

3 Ideal schedules for P2|prec, rj, pj = 1|ΣCi

In Subsection 3.1 we prove that the problem P2|prec, rj, pj = 1|ΣCi is ideal. The ideas used in proving the existence of ideal schedules are turned into an efficient algorithm for their construction in Subsection 3.2.

First we introduce some more notation used in the remaining part of this paper. The symbol ≺ denotes the precedence constraints among the jobs, i.e., Jj ≺ Jk means that Jj must complete before Jk starts; Jj is a predecessor of Jk and Jk is a successor of Jj. Also, when a job is said to be minimal or maximal, the assertion is relative to the partial order ≺. Furthermore, J′ ∥ J denotes the fact that J′ and J are unrelated, i.e., J′ ⊀ J and J ⊀ J′. Jobs J′ and J are related if either J′ ≺ J or J ≺ J′. In the following we consider schedules on two processors and, as is normally the case, all start and completion events occur at integer times. Given a nonpreemptive schedule S, we use the symbol Si, i ≥ 0, to refer to the set of jobs executed in the time interval [i, i+1]. Bear in mind that for each i ≥ 0, |Si| ∈ {0, 1, 2}. Also, for brevity, let Si,j = Si ∪ · · · ∪ Sj for any 0 ≤ i ≤ j.

Definition 1 We say that a triple (l, J1, J2) is a decomposition of J if l > 0 is an integer and J1, J2 is a partition of J (J1 ∪ J2 = J, J1 ∩ J2 = ∅) such that in each min ΣCi nonpreemptive schedule all jobs in J1 are executed in [0, l] and each job in J2 starts at l or later.

It follows from the above definition that, if (l, J1, J2) is a decomposition, then finding a min ΣCi schedule for J = J1 ∪ J2 reduces to separately finding min ΣCi schedules for J1 to complete by l, and for J2 to start from l.
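The reduction just described can be sketched as follows. The slot representation and the `combine` helper are ours, for illustration only; in a full implementation the release dates of the jobs in J2 would be shifted by −l before solving the second subproblem.

```python
def combine(schedule_J1, schedule_J2, l):
    """Given a decomposition (l, J1, J2), place the schedule for J1 in
    [0, l] (padding unused slots with idle time) and append the schedule
    for J2, re-indexed to start at time l."""
    assert len(schedule_J1) <= l          # J1 must fit in [0, l]
    padded = schedule_J1 + [set()] * (l - len(schedule_J1))
    return padded + schedule_J2

# Toy example: J1 = {a, b, c} done in [0, 2], J2 = {d} starting at l = 2.
merged = combine([{"a", "b"}, {"c"}], [{"d"}], l=2)
assert merged == [{"a", "b"}, {"c"}, {"d"}]
```

Because every min ΣCi schedule respects the split, optimizing the two halves independently loses nothing.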

3.1 An ideal schedule always exists

Let F be a min ΣCi schedule for J. Let l > 0 be the smallest integer such that there is idle time in the interval [l−1, l] in F, i.e., |Fl−1| < 2. Define J1 = F0,l−1, the set of jobs that execute prior to l in F, and J2 = J \ J1, the set of remaining jobs that execute after time l. In the following we prove that (l, J1, J2) is always a decomposition.

We limit ourselves to the case where there is exactly one job, say X, in Fl−1, and there exists a job X′ ∈ J that is not executed in the interval [0, l] in F but has a release date less than l. Otherwise we immediately obtain a decomposition (l, J1, J2). The decomposition is also obvious for l = 1, thus we assume l ≥ 2 from now on in this subsection. Clearly, X ≺ X′, because otherwise we could execute X′ earlier, i.e., together with X in Fl−1, which would reduce the total completion time of F, a contradiction. We continue with the assumption that X′ is an immediate successor of X, and define the set of predecessors of X′ running in the first l time units, J′ = {J ∈ J : J ≺ X′}. We now informally sketch the main idea of the proof, leaving the details to Lemmas 1–12 and Theorem 2.

The proof is by contradiction. We suppose there is a schedule S of J′ ∪ {X′} that completes by l, i.e., Cmax(S) ≤ l. (See Figures 2(a) and 2(b), respectively, for an example of F and S.) By rearranging the jobs in S we convert F into a schedule F″ that completes all the jobs in J1 ∪ {X′} by l, and thus without idle time in [0, l]. This leads to a contradiction, for this schedule reduces the total completion time of F.


Figure 2: (a) F; (b) S; (c) F′; (d) F″; in all cases the arrows indicate the precedence relation.

To obtain F″ we first focus on the jobs in J′ ∪ {X′} (J′ = {X, f0, ..., f10} in Figure 2). Since all these jobs complete by l in S, we can obtain a schedule F′ (see Figure 2(c)) for J′ ∪ {X′} that meets the following conditions:

• There exists an integer k ≥ 0 (k = 3 in Figure 2) such that Fk,l−1 ∩ J′ ⊆ F′k,l−1 (F′k,l−1 ∩ J′ = {X, f3, ..., f10} in Figure 2). This will be shown in Lemma 1 (note that, by the definitions in (2) and (3), the above condition is equivalent to the condition given in Lemma 1).

• At least one processor is occupied in F′ at all times in [k, l], see Lemma 2.

• X′ and some of its predecessors start in F′ earlier than in F, but never earlier than in S, which guarantees that the final schedule F″ respects release dates, see Lemma 4.

• All jobs respect the precedence constraints in F′, see Lemmas 5 and 6.

Next, we use F′ to construct a schedule F″ for all jobs in Fk,l−1 ∪ {X′} (see Figure 2(d)). To that end, the jobs in Fk,l−1 \ J′ fill in the remaining empty slots in [k, l] of F′, see Lemma 9. Since all empty slots are on the same processor, the precedence constraints among jobs independent of X′ can be guaranteed, see Lemma 10. (Note that we cannot, in general, use the schedule S for this purpose due to precedence constraints; e.g., we could not execute the jobs u3 and u4 in time slot [3, 4], where both processors are idle in S, if the jobs were related.) Moreover, the construction ensures that no job independent of X′ starts earlier in F″ than it does in F, which implies that the release dates of independent jobs are also respected.

We now proceed to definitions of the concepts used in the formal proofs. Given F and S, we say that a job J is:

• late if J ∈ J′ and J starts later in F than in S, that is, J ∈ Fi, J ∈ Sj, for some j < i, i, j ∈ {0, ..., l−1},

• early if J ∈ J′ and J starts earlier in F than in S, that is, J ∈ Fi, J ∈ Sj, for some j > i, i, j ∈ {0, ..., l−1},

• just-in-time if J ∈ J′ and J starts at the same time both in F and in S, that is, J ∈ Fi, J ∈ Si, for some i ∈ {0, ..., l−1},

• independent if J ∉ J′ ∪ {X′} and J ∈ F0,l−1.


For each i = 0, ..., l−1 the set Ti (respectively, Ei, Di, Ii) consists of all late (early, just-in-time, independent, resp.) jobs in Fi. According to this classification of the jobs in Fi, it holds that Fi = Ti ∪ Di ∪ Ei ∪ Ii, i = 0, 1, ..., l−1. We assume in the discussion below that S is a schedule not longer than l that minimizes the total earliness of jobs in F relative to S, i.e., S minimizes Σ_{J∈S} max{0, ∆(J)}, where ∆(J) is the starting time of J in S minus its starting time in F. Informally speaking, S and F are used to construct the schedule F′ of the jobs in J′, and the requirement that S minimizes the total earliness will guarantee that, while filling out F′ (to obtain F″) with the independent jobs, none of them needs to be executed earlier than in F, which ensures that the independent jobs do not violate their release dates in F″.

Now we iteratively define a schedule S′ as follows: S′l−1 = {X′}, and once S′i+1, ..., S′l−1 have been constructed, we determine S′i, i ∈ {0, ..., l−2}, in two steps:

Step 1: add to S′i all jobs in Si \ S′i+1,l−1,

Step 2: if |S′i| < 2, then add to S′i as many as possible minimal jobs in (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′) so that |S′i| does not exceed 2. The jobs added to S′i have to be unrelated to the jobs that have been added to S′i in Step 1.

Define Al = {X′} and

    Ai = (Fi,l−1 ∩ J′) \ (S′i,l−1 ∩ J′), i = 0, ..., l−1.    (2)

In other words, Ai consists of those jobs in J′ that are executed prior to i in S′, but at i or later in F. We show in Lemma 1 that Ak = ∅, i.e., Fk,l−1 ∩ J′ ⊆ S′k,l−1 ∩ J′, for some k ≥ 0. We set k to the largest such index:

    k = max{i, 0 ≤ i ≤ l : Ai = ∅}.

Finally, define the schedule F′ in [0, l] as follows:

    F′i = ∅ for i = 0, ..., k−1, and F′i = S′i for i = k, ..., l−1.    (3)
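The two-step construction of S′ can be sketched directly. Everything below is an illustrative implementation with our own data structures (schedules as lists of job sets, precedence as a set of pairs); ties among the minimal candidates in Step 2 are broken lexicographically, a choice the construction itself leaves unspecified.

```python
def build_s_prime(S, F, Jp, Xp, prec, l):
    """S'_{l-1} = {X'}; then, for i = l-2 down to 0, Step 1 copies the
    not-yet-placed jobs of S_i, and Step 2 fills the slot up to two jobs
    with minimal not-yet-placed jobs of F_{i,l-1} ∩ J' that are unrelated
    to the Step 1 jobs."""
    Sp = [set() for _ in range(l)]
    Sp[l - 1] = {Xp}
    for i in range(l - 2, -1, -1):
        placed = set().union(*Sp[i + 1:])
        Sp[i] = set(S[i]) - placed                       # Step 1
        step1 = set(Sp[i])
        cand = {j for slot in F[i:] for j in slot if j in Jp} - placed - step1
        for j in sorted(cand):                           # Step 2
            if len(Sp[i]) == 2:
                break
            minimal = all((a, j) not in prec for a in cand)
            unrelated = all((a, j) not in prec and (j, a) not in prec
                            for a in step1)
            if minimal and unrelated:
                Sp[i].add(j)
    return Sp

# Toy run: p ≺ x and e ≺ x, F runs e late (slot 2) while S runs it at 0;
# Step 2 pulls e forward into S'_1.
S = [{"p", "e"}, {"x"}, set()]          # a schedule of J' ∪ {X'} ending by l
F = [{"p"}, set(), {"e"}]               # F restricted to [0, l)
Sp = build_s_prime(S, F, {"p", "e"}, "x", {("p", "x"), ("e", "x")}, 3)
assert Sp == [{"p"}, {"e"}, {"x"}]
```

With this S′ in hand, F′ is obtained by emptying the slots before k, exactly as in (3).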

We illustrate the definitions of Ai, F′i and S′i by the example in Figure 3. Figure 3(a) depicts a schedule F


restricted to interval [0, l], where the jobs X, X1, ..., X10 make up J′, and the jobs U1, ..., U6 are all independent and released at 0. The precedence and release date constraints for the jobs in J′ are shown in Figure 3(a) along with F, where the pointer notation refers to job release dates (the pointer starts at a release date on the left-hand side and reaches the job on the right-hand side that corresponds to this release date). In particular, the release date of X3 is 4; X5 and X1 are, respectively, the predecessor and successor of X3 and have the respective release dates 2 and 6. Note that l = 9. Figures 3(b) and 3(c) present S and F′, respectively. The only early job is X7 ∈ E4 and the total earliness of S is 2. The just-in-time jobs are X4 ∈ D5 and X6 ∈ D4. All the remaining jobs in J′ are late. Table 1 illustrates the iterative process of obtaining the schedules S′ and F′. Note that k = 3 in this example.

Figure 3: (a) a schedule F restricted to [0, l] given together with release dates and precedence constraints for the jobs X, X1, ..., X10 in J′ (the arrows define the precedence constraints; a release-date pointer starts at a release date and ends at the corresponding job); (b) S; (c) F′.

    i | Si     | Si \ S′i+1,l−1 | Step 2 | S′i    | F′i    | Ai
    0 | X10    | ∅              | ∅      | ∅      | ∅      | ∅
    1 | X9     | ∅              | X10    | X10    | ∅      | ∅
    2 | X5, X8 | ∅              | X9     | X9     | ∅      | ∅
    3 | ∅      | ∅              | X5, X8 | X5, X8 | X5, X8 | ∅
    4 | X3, X6 | X3, X6         | ∅      | X3, X6 | X3, X6 | X5
    5 | X2, X4 | X2, X4         | X3     | X2, X4 | X2, X4 | X3, X5
    6 | X1, X7 | X1, X7         | ∅      | X1, X7 | X1, X7 | X2, X3
    7 | X      | X              | ∅      | X      | X      | X1
    8 | X′     | X′             | ∅      | X′     | X′     | X
    9 |        |                |        |        |        | X′

Table 1: The notation corresponding to Figure 3 (the column marked 'Step 2' contains all the minimal unrelated jobs that can be added to S′i in Step 2).

We now return to Lemma 1 announced earlier.

Lemma 1 There exists k ≥ 0 such that Ak = ∅.

Proof: By contradiction. Suppose Ak ≠ ∅ for each k = 0, ..., l. Let J ∈ A0. Clearly, J ≠ X′, because X′ ∈ S′l−1. (Note that we have already argued that l ≥ 2.) Also, J ∈ J′, because, by construction, Ak ⊆ J′ for each k = 0, ..., l−1. There is an i ∈ {0, ..., l−1} such that J ∈ Si. Consider the iteration in which S′i is constructed. If J ∈ S′i+1,l−1, then clearly, by (2), J ∉ A0. If, on the other hand, J ∉ S′i+1,l−1, then J is added to S′i in Step 1, which again results in J ∉ A0. This contradicts the assumption that J ∈ A0 and completes the proof. □

We assume in the following that k is the largest integer that satisfies Lemma 1. The following lemmas characterize the schedule F′ in the time interval [k, l]. Recall that F′i = S′i for i = k, ..., l−1.

Lemma 2 |F′i| ≥ 1 for i = k, ..., l−1.

Proof: By induction on i = l−1, ..., k. The lemma holds for i = l−1 since F′l−1 = S′l−1 = {X′}. Now assume that it holds for i+1 for some i ∈ {k, ..., l−2}. We show that it then holds for i as well. If Si \ S′i+1,l−1 ≠ ∅, then, by Step 1 and by (3), Si \ S′i+1,l−1 ⊆ S′i = F′i. Otherwise, by the choice of k we have Ai+1 ≠ ∅. By (2),

    Ai+1 = (Fi+1,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′) ⊆ (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′).
Thus, (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′) is non-empty in Step 2, so it contains at least one minimal job, and thus at least one of the minimal jobs is added to S′i in Step 2. Therefore, in both cases we have F′i ≠ ∅, which proves the lemma for all i = k, ..., l−1. □

Define for brevity

    E = Ek ∪ · · · ∪ El−1,    T = Tk ∪ · · · ∪ Tl−1,    D = Dk ∪ · · · ∪ Dl−1.

Then we obtain

Lemma 3 T ∪ D ∪ E ⊆ F′k,l−1.

Proof: Let J ∈ T ∪ D ∪ E. By definition, T ∪ D ∪ E ⊆ Fk,l−1 ∩ J′. Thus, J ∈ Fk,l−1 ∩ J′. Note that J ∈ S′k,l−1, because otherwise J ∈ (Fk,l−1 ∩ J′) \ (S′k,l−1 ∩ J′), which implies J ∈ Ak by (2) and gives a contradiction with Lemma 1. Therefore, J ∈ S′k,l−1. Hence, by (3), J ∈ F′k,l−1. □

Lemma 4 An early job J ∈ E is scheduled in Si and F′i for some i ∈ {k, ..., l−1}. A just-in-time job J ∈ D is scheduled in Si, F′i and Fi for some i ∈ {k, ..., l−1}. A late job J ∈ T is scheduled in Ss, F′i and Ff, where s ≤ i ≤ f, for some f, s, i ∈ {k, ..., l−1}.

Proof: Follows from the construction of S′, (3) and Lemma 3. □

Lemma 4 ensures that the schedule F′k,l−1 respects release dates, since both S and F do.

Lemma 5 All jobs in E respect the precedence constraints in F′.

Proof: Consider an early job J ∈ Ff for some f ∈ {k, ..., l−1}. By definition and by Lemma 4, J ∈ F′i, where f < i and J ∈ Si. For any other job J′ ∈ F′k,l−1 we have the following two cases:

Case 1. J′ ∈ E. Since J ∈ E, Lemma 4 implies that J and J′ respect the precedence constraints, because S respects the precedence constraints.

Case 2. J′ ∈ D ∪ T. Let J′ ∈ F′i′. By Lemma 4, J′ ∈ Ff′, where f′ ≥ i′. If i′ > i, then the claim follows from the inequality f < i < i′ ≤ f′. If i′ = i, then J, J′ ∈ F′i; then, however, the construction of S′ and (3) imply that J and J′ are unrelated. Finally, suppose that i′ < i and J ≺ J′. Since J′ ∈ Sj, where j ≤ i′ (for J′ is late or just-in-time), and J ∈ Si, this leads to a contradiction, for S respects the precedence constraints. □

Lemma 6 All jobs in D ∪ T respect the precedence constraints in F′.

Proof: Let J ∈ D ∪ T. By Lemma 4, J ∈ F′i, J ∈ Ff and J ∈ Ss, where s ≤ i ≤ f. Consider J′ ∈ F′k,l−1. If J′ ∈ E, then, by Lemma 5, J and J′ respect the precedence constraints. Thus, let J′ ∈ D ∪ T. From Lemma 4 we have J′ ∈ F′i′, J′ ∈ Ff′ and J′ ∈ Ss′, where s′ ≤ i′ ≤ f′. Argue by contradiction, and suppose that J and J′ violate the precedence constraints, i.e., J ≺ J′ and i′ ≤ i (the case J′ ≺ J and i ≤ i′ is symmetric). Then, clearly, s < s′ and f < f′, for both F and S respect the precedence constraints. Thus, s < s′ ≤ i′ ≤ i ≤ f < f′. Now, consider the construction of S′i. By (3), J, J′ ∉ S′i+1,l−1. Since s < i, J is not added to S′i in Step 1. Thus, J is added to S′i in Step 2. However, J, J′ ∈ (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′) and therefore J is not minimal in (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′), since J ≺ J′, a contradiction. □

Let us define the set of strictly early jobs to the right of i in F′,

    Zi = F0,i−1 ∩ F′i,l−1 ∩ J′

for i = k, ..., l−1. The following two additional characterizations are a consequence of the assumption that S minimizes total earliness.

Lemma 7 Zk = ∅.

Proof: By contradiction. Suppose Zk ≠ ∅. The total earliness of S restricted to the jobs in Y = F0,k−1 ∩ J′ is greater than zero, because Zk ⊆ Y. Now change S to S̃ so that the jobs in Y are scheduled in S̃ in [0, k] exactly as they are scheduled in F in [0, k], while the jobs in J′ \ Y are scheduled in S̃ exactly as they are scheduled in S′ in [k, l]. Lemmas 4, 5 and 6 and the feasibility of S imply that the new schedule S̃ is feasible. Moreover, each job in J′ is scheduled in S̃, because Ak = ∅. Finally, the total earliness of the jobs in Y equals zero in S̃, while the earliness of the jobs in J′ \ Y remains the same as in S by Lemma 4. Thus, S̃ reduces the total earliness of S, which leads to a contradiction. □

Lemma 8 If |F′i| = 1, then Zi = ∅, for each i = k, ..., l−1.

Proof: If i = k, then the lemma follows directly from Lemma 7, thus i > k in the following. By contradiction. Suppose |F′i| = 1 and Zi ≠ ∅ for some i ∈ {k+1, ..., l−1}. Let J be a maximal job in Zi. By definition, J ∈ F0,i−1 and J ∈ F′j for some j ∈ {i, ..., l−1}. Since J is early, J ∈ Sj. Moreover, by the choice of k,

    (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′) ⊇ (Fi,l−1 ∩ J′) \ (S′i,l−1 ∩ J′) = Ai ≠ ∅.

Therefore, if j = i, then J ∈ Si \ S′i+1,l−1, which implies that either |Si \ S′i+1,l−1| = 2 or a job in (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′) is added to S′i in Step 2 of the construction of S′i. By Lemma 4, all successors of J are in S′i+1,l−1 ∩ J′, and since J ∈ F0,i−1 all predecessors of J are in F0,i−1; hence each job in (Fi,l−1 ∩ J′) \ (S′i+1,l−1 ∩ J′) is unrelated to J. Therefore, in both cases |F′i| = |S′i| = 2.

Suppose j > i. If there exists a job J″ ∈ F′i,j such that J″ ≺ J, then clearly J″ ∈ F0,i−1 and J″ ∈ F′i,l−1, which implies that J″ ∈ Zi.
This leads to a contradiction, since J is a maximal job in Zi. Therefore, no predecessor of J ∈ F′j is scheduled in F′i,j. Consequently, we can delete J from F′j and add it to F′i, which results in an F′ with smaller total earliness than S when restricted to the jobs in J′, since i < j. This follows from the observation that if J″ is any other early job in E, then, by Lemma 4, J″ is scheduled at the same moment in both S and F′. Moreover, by Lemma 4, jobs in D ∪ T do not affect the total earliness of F′. This leads to a contradiction, since S minimizes total earliness. □

It remains to schedule the independent jobs Ii for i = k, ..., l−1. To that end we build a schedule F″ in [k, l] as follows. Initially let I′j = Ij for each j = k, ..., l−1. Start with i = k. If |F′i| = 1, then let j = min{j′ ∈ {k, ..., i} : I′j′ ≠ ∅}. Take a job J ∈ I′j and add it to F′i to make a two-job F″i. Delete J from I′j and proceed to i+1. If |F′i| = 2, then make F″i = F′i and proceed to i+1. Finish when you reach l−1. Observe that the independent jobs in F″k,l−1 do not start earlier than in Fk,l−1, and thus they respect their release dates. We have the following two lemmas characterizing F″.

Lemma 9 |F″i| = 2 for i = k, ..., l−1.

Proof: By Lemma 2, we have |F′i| ≥ 1 for each i = k, ..., l−1. Thus, it suffices to prove that if |F′i| = 1, then {j ∈ {k, ..., i} : I′j ≠ ∅} is non-empty. By contradiction. Suppose |F′i| = 1 and {j ∈ {k, ..., i} : I′j ≠ ∅} = ∅, and suppose that i is the earliest moment for which this holds. We have Fk,l−1 ∩ J′ ⊆ S′k,l−1 ∩ J′ by the definition of k. Moreover, by Lemma 8, no job in Fk,i−1 ∩ J′ is in F′i,l−1. Therefore, Fk,i−1 ∩ J′ ⊆ F′k,i−1. However, by assumption, |Fk,i−1 ∩ J′| = |F′k,i−1|. Thus, Fk,i−1 ∩ J′ = F′k,i−1, and consequently Fi,l−1 ∩ J′ ⊆ S′i,l−1, which means Ai = ∅, a contradiction provided that i > k. Thus, it remains to consider i = k. Then our assumption can be restated as Ik = I′k = ∅. That is, there are no independent jobs in Fk. Since |F′k| = 1, at least one of the jobs in Fk, say J, is early, that is, by Lemma 4, it is executed at j in S and in F′ for some j > k.
Clearly, J is maximal in Fk,l−1. Thus, all its predecessors, if any, belong to F0,k−1. However, by Lemma 7, Zk = ∅. Therefore, none of these predecessors is in F′k,l−1, and thus J is maximal in F′k,l−1. Construct a new schedule S as follows: take the old S in [0, k] (with the jobs in F′k,l−1 removed) and F′ in [k, l], and in this new S start J at k. This schedule is feasible, since |F′k| = 1 by assumption, and J is maximal in F′k,l−1. However, by starting J at k in the new S, the total earliness of the initial S is reduced by j − k > 0, a contradiction. Therefore, the lemma holds. □

Lemma 10 No two independent jobs are done in parallel in F″i for i = k, ..., l−1. All independent jobs satisfy the precedence constraints.

Proof: The first part of the lemma follows directly from Lemma 2 and from the construction of F″. The second part is trivial for intrees; however, it needs to be proved for general precedence constraints. The proof is by contradiction. Suppose that J″ ≺ J, with J ∈ F″i and J″ ∈ F″j, where j ≥ i. First note that, by Lemmas 5 and 6, one of these two jobs must be independent and the other must belong to J′. Clearly, J″ cannot be an independent job; otherwise, J″ ≺ J would imply that J is independent as well. Thus, J″ ∈ J′. Since no independent job is executed in F″ earlier than in F, we have J ∈ Ft, where i ≥ t. Thus, J″ ≺ J guarantees that J″ ∈ Ft′, where t′ < t ≤ i in F. This leads to a contradiction, since |F′i| = 1 by the construction of F″ (J is independent), and thus, by Lemma 8, Zi = ∅. □

We are now ready to show that F″ in [k, l−1] can be extended to a schedule for J with total completion time less than that of F.

Lemma 11 F″k, ..., F″l−1 can be extended to a feasible schedule F″ with total completion time less than the total completion time of F.

Proof: Define F″i = Fi for each i = 0, ..., k−1. Thus, F″ is feasible when restricted to [0, k]. The schedule F″k, ..., F″l−1 respects the precedence constraints by Lemmas 5, 6 and 10. Since Ak = ∅ and Zk = ∅, the precedence constraints between two jobs J ∈ F″0,k−1 and J′ ∈ F″k,l−1 are not violated in F″. By Lemma 9, all jobs in J1 ∪ {X′} are scheduled in [0, l] in F″. To make F″ a schedule for J, let F″i = Fi \ {X′} for each i = l, ..., Cmax(F)−1. It is easy to see that the total completion time of the jobs in J1 ∪ {X′} is strictly less in F″ than in F. Moreover, the completion time of each job in J2 \ {X′} is the same in F″ as it is in F. Since J = J1 ∪ J2, we have that F″ is a feasible schedule for J. □

Finally, the following lemma characterizes the schedule F.


Lemma 12 If F is a minimum total completion time schedule for J and the earliest idle time in F occurs in [l−1, l], l < Cmax(F), then (l, J1, J2) is a decomposition.

Proof: Let m = Cmax(F). Assume that l is the smallest integer such that |Fl−1| < 2. By assumption, l < m. If Fl−1 = ∅, then the release date of each job in Fl,m−1 is at least l, which results in a decomposition. Let Fl−1 = {X}. If there is no X′ ∈ J such that X ≺ X′, then again each job in Fl,m−1 has a release date of at least l; otherwise we could schedule a job in parallel with X, which would reduce the total completion time of F. Therefore, there is X′ ∈ J such that X ≺ X′. If there exists a schedule S for J′ ∪ {X′} with Cmax(S) ≤ l, then by Lemma 11 we could obtain a schedule F″ with total completion time strictly less than that of F, which contradicts the assumption on F. Consequently, we must have Cmax(S) > l, which means that X′ cannot be executed in [0, l] in any feasible schedule, because by definition all jobs in J′ are predecessors of X′. Hence, all successors of X have to be scheduled at l or later in any feasible schedule. Finally, if a job X″ ∈ Fl,m−1 is not a successor of X, then its release date is at least l. This proves that no job in Fl,m−1 can be scheduled in [0, l] in any feasible schedule, which again results in a decomposition. □

We are now ready to prove the conjecture of Baptiste and Timkovsky.

Theorem 2 (Conjecture of Baptiste and Timkovsky [1]) All optimal schedules for P2|prec, rj, pj = 1|ΣCi are ideal.

Proof: By induction on the cardinality of J. For |J| = 1, the theorem trivially holds. Assume that the theorem holds for any set of jobs with fewer than n jobs and consider a set J with |J| = n. Let F be a minimum total completion time schedule for J. If |Fi| = 2 for i = 0, ..., Cmax(F)−2, then the theorem clearly holds for |J| = n. Thus, consider the smallest integer l such that |Fl−1| < 2 with l < Cmax(F) − 1. By Lemma 12, there exists a decomposition (l, J1, J2), where J1 = F0,l−1 and J2 = Fl,m−1 ≠ ∅, with m = Cmax(F). By the induction hypothesis, both F restricted to J1 and F restricted to J2 minimize both total completion time and maximum completion time at the same time. Finally, each job in J2, by the definition of decomposition, starts at l or later in any feasible schedule. This proves that the theorem holds for n. □

This gives the following main result.

Corollary 1 The problem P2|prec, rj, pj = 1|ΣCi is ideal.

3.2 Finding ideal schedules

We start by describing a procedure which takes an integer l, 0 < l ≤ |J|/2, and the set of jobs J as input and decides whether there exists a schedule F for J without idle time in [0, l]. (Note that l > |J|/2 immediately implies that no such schedule F exists, since there are only |J| jobs to be scheduled in [0, l] on two parallel processors.) Let r = max{rj : Jj ∈ J}. First, we define a set J̃ of 2(r − l) + |J| − 2l pairwise unrelated jobs,

J̃ = {J′_j : j = 1, . . . , 2(r − l) + |J| − 2l}.

Each job in J̃ has its release date equal to l, and it is unrelated to any job in J. Second, let

Jl = J ∪ J̃.

We have the following lemma.

Lemma 13 There exists a schedule for Jl no longer than r + |J| − 2l if and only if there exists a schedule for J with no idle time in [0, l].

Proof: Let F be a schedule for Jl with Cmax(F) ≤ r + |J| − 2l. All jobs in J̃ are scheduled in [l, r + |J| − 2l]. Thus, at most 2(r + |J| − 2l) − 2l jobs can be scheduled in [l, r + |J| − 2l], and exactly |J̃| = 2(r − l) + |J| − 2l of them must come from J̃. Therefore, at most |J| − 2l jobs from J are scheduled in [l, r + |J| − 2l] in F. Consequently, at least 2l jobs from J are scheduled in [0, l] in F. Thus, the restriction of F to the jobs in J gives the desired schedule with no idle time in [0, l].

To prove the reverse implication, note that if there exists a schedule for J with no idle time in [0, l], then there exists such a schedule in which all jobs in J end by r + |J| − 2l; this follows immediately from the definition of r. The jobs in J̃ can then fill the idle times of this schedule in the interval [l, r + |J| − 2l], because they are pairwise unrelated, which proves the claim. □

We now describe algorithm FD (Find Decomposition), which takes a set of jobs J as input and finds the smallest integer l > 0 such that there must be idle time in [0, l] in every feasible schedule for J. The algorithm then returns a decomposition at l, (l, J1, J2). FD is as follows:

Procedure FD (Find Decomposition)
Input: a set of UET jobs J with release dates and precedence constraints.
Output: a decomposition (l, J1, J2) of J.

Step 1. Let l = 1 and r = max{rj : Jj ∈ J}.

Step 2. Find a schedule F^l for Jl no longer than r + |J| − 2l, that is, with no idle time in [0, l] (see Lemma 13), if one exists. This is achieved by the algorithm in [8], which is actually given for P2|prec, pj = 1, dj|Cmax, i.e., for the corresponding problem with deadlines instead of release dates. However, since we know in advance the maximum completion time m = r + |J| − 2l of the schedule for Jl that we want to find, the release date rj of a job in Jl can be turned into the corresponding deadline dj = m − rj and the precedence constraints between jobs 'reversed'. Then, the algorithm in [8] can be applied to the resulting instance to find a schedule no longer than r + |J| − 2l, if one exists. The schedule F^l for Jl can then be obtained by simply reversing the order of execution of jobs in that schedule. If F^l exists and 2l = |J|, which implies that all jobs from J are scheduled in [0, l], then return (l, J, ∅) and terminate the algorithm. If F^l exists and 2l < |J|, then increase l by 1 and repeat Step 2. If F^l does not exist, i.e., all schedules for Jl are longer than r + |J| − 2l, then go to Step 3.

Step 3. Let J″ = ∅ for l = 1, and let J″ = F^{l−1}_{0,l−2} for l > 1. (Note that J″ is a subset of J that can be scheduled with no idle time in [0, l − 1].) If all jobs in J \ J″ are released at l or later, then return the decomposition (l, J″, J \ J″) and terminate the algorithm. Otherwise, proceed to Step 4.

Step 4. Let x ∈ J \ J″ be a job released by l − 1 with no predecessors in J \ J″. Return the decomposition (l, J″ ∪ {x}, J \ (J″ ∪ {x})) and terminate the algorithm.

Note that J″ ⊆ J, because all jobs in J̃ are executed at l or later due to their release dates.

Lemma 14 Given a set of jobs J with release dates and precedence constraints, the procedure FD always finds a decomposition of J.

Proof: If J can be scheduled without idle time in [0, |J|/2], then clearly |J| must be even, and FD returns the decomposition (|J|/2, J, ∅) in Step 2. Thus, let l∗ be the smallest integer such that there must be idle time in [0, l∗] in any feasible schedule for J. By Lemma 13, for each l = 1, . . . , l∗ − 1 the algorithm finds a feasible schedule F^l for Jl no longer than r + |J| − 2l in Step 2. This step terminates when l = l∗, and the algorithm has ensured that there is a subset J″ of J that can be scheduled with no idle time in [0, l∗ − 1]. Then, Step 3 is executed. In Step 3, if all jobs in J \ J″ are released at l∗ or later, then clearly (l∗, J″, J \ J″) is a decomposition. Finally, in Step 4, there exists a schedule F′ for J″ ∪ {x} whose only unit idle time is in [l∗ − 1, l∗]. This schedule can easily be extended to a schedule F for Jl∗ that completes at r + |J| − 2l∗ + 1, with the second idle time in F occurring in [Cmax(F) − 1, Cmax(F)]. The schedule F minimizes the total completion time of Jl∗, because the two unit idle intervals must occur in any schedule, and in F they are scheduled at the latest possible moments, i.e., in the intervals [l∗ − 1, l∗] and [Cmax(F) − 1, Cmax(F)]. By Lemma 12, (l∗, J″ ∪ {x}, Jl∗ \ (J″ ∪ {x})) is a decomposition for Jl∗, which implies that (l∗, J″ ∪ {x}, J \ (J″ ∪ {x})) is a decomposition for J. □

Step 2 of FD relies on an algorithm for P2|prec, pj = 1, rj|Cmax. An algorithm for this problem running in time O(n^2) has been given by Garey and Johnson in [8]. Its O(n^2) running time is achieved only for precedence constraints given in transitively closed form. This form can be computed in time O(n^{log_2 7}) [4] or O(n^3) [5, 11].
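The control flow of FD can be sketched in Python as follows. This is an illustrative sketch only: `fill_prefix` is a brute-force (exponential-time) stand-in for the Garey–Johnson-based test of Step 2, and the names `find_decomposition`, `preds`, and `release` are our own, not the paper's; the paper's algorithm obtains J″ from the schedule F^{l−1} produced in the last successful iteration of Step 2, whereas the sketch recomputes a prefix.

```python
from itertools import combinations

def fill_prefix(jobs, preds, release, l):
    """Brute-force stand-in for Step 2: return a set of 2*l jobs that can
    keep both processors busy throughout [0, l] (respecting release dates
    and precedence), or None if no such prefix exists.  Since there are no
    deadlines, any prefix with no idle time extends to a feasible schedule."""
    def rec(t, done):
        if t == l:
            return done
        ready = [j for j in jobs if j not in done and release[j] <= t
                 and all(p in done for p in preds.get(j, ()))]
        for pair in combinations(ready, 2):   # pick 2 jobs for slot [t, t+1)
            result = rec(t + 1, done | set(pair))
            if result is not None:
                return result
        return None
    return rec(0, set())

def find_decomposition(jobs, preds, release):
    """Sketch of procedure FD: find the smallest l > 0 such that every
    feasible schedule has idle time in [0, l], and return (l, J1, J2)."""
    l = 1
    while fill_prefix(jobs, preds, release, l) is not None:
        if 2 * l == len(jobs):           # Step 2: everything fits in [0, l]
            return l, set(jobs), set()
        l += 1
    # Step 3: J'' is a subset of J schedulable with no idle time in [0, l-1].
    j_pp = fill_prefix(jobs, preds, release, l - 1) if l > 1 else set()
    rest = set(jobs) - j_pp
    if all(release[j] >= l for j in rest):
        return l, j_pp, rest
    # Step 4: move across one job released by l-1 whose predecessors
    # all lie in J'' (i.e., it has no predecessors in J \ J'').
    x = next(j for j in rest if release[j] <= l - 1
             and all(p in j_pp for p in preds.get(j, ())))
    return l, j_pp | {x}, rest - {x}
```

For instance, with three unit jobs 0, 1, 2, job 0 preceding both others and all released at time 0, every schedule idles in [0, 1], and the sketch returns the decomposition (1, {0}, {1, 2}).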


The transitive closure will be computed once, prior to the execution of FD.¹ Also, note that we may assume without loss of generality that r = O(n). This gives the following.

Lemma 15 The running time of FD is O(n^2 l∗), provided that the precedence constraints are given in transitively closed form, where l∗ is the minimum integer such that there is idle time in [0, l∗] in each feasible schedule for J. □

We are now ready to describe algorithm FIS (Find Ideal Schedule), which finds an ideal schedule for J.

Procedure FIS (Find Ideal Schedule)
Input: a set of UET jobs J with release dates and precedence constraints.
Output: an ideal schedule for P2|prec, rj, pj = 1|∑Ci.

Step 1. Find the transitive closure of ≺. Initially, F is an empty schedule; let m := 0 (m is the maximum completion time of F).

Step 2. If J = ∅, then return F. Otherwise, proceed to Step 3.

Step 3. Find a decomposition (l, J1, J2) by invoking the procedure FD(J). Proceed to Step 4.

Step 4. Schedule the jobs in J1 in [m, m + l] in F and let m := m + l. Proceed to Step 5.

Step 5. Decrease the release date of each job in J2 by l and let J := J2. Go to Step 2.

Theorem 3 There exists an O(n^3)-time algorithm for finding ideal two-processor schedules for UET jobs with arbitrary release dates and arbitrary precedence constraints.

Proof: The most time-consuming part of the computation is Step 3 of FIS. Lemma 15 gives that the running time of FIS is O(n^3); indeed, the values of l returned by the successive calls to FD sum to at most the final makespan, which is O(n), so the total cost of all calls is O(n^3). By the previous comments, computing the transitive closure of ≺ does not increase the complexity. Let F be the schedule for J computed by FIS. By Lemma 14 and the definition of decomposition, both the maximum and the total completion time of F are minimized for J, which proves that F is ideal. □
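The outer loop of FIS (Steps 2–5) can be sketched as follows. This is a sketch under stated assumptions: the decomposition routine is passed in as the parameter `decompose` (any function returning a decomposition (l, J1, J2) of its input, e.g. an implementation of procedure FD), and the sketch records only which block of l time units each J1 occupies; the concrete two-processor slot assignment within a block would come from the Step-2 schedule computed inside FD.

```python
def find_ideal_schedule(jobs, preds, release, decompose):
    """Sketch of procedure FIS: repeatedly peel off a decomposition
    (l, J1, J2), place the block J1 in [m, m + l], shift the release
    dates of J2 back by l, and iterate on J2."""
    blocks = []   # (start, end, jobs placed in that block of F)
    m = 0         # current maximum completion time of the partial schedule
    while jobs:                                   # Step 2
        l, j1, j2 = decompose(jobs, preds, release)   # Step 3
        blocks.append((m, m + l, set(j1)))        # Step 4: J1 fills [m, m+l]
        m += l
        release = {j: max(release[j] - l, 0) for j in j2}   # Step 5
        jobs = [j for j in jobs if j in j2]
    return m, blocks
```

The recursion of the correctness argument becomes a simple loop here: by the definition of decomposition, no job of J2 can start before l, so gluing the blocks end to end loses nothing in either objective.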

4 Final remarks

In summary, the advances in scheduling theory presented in this paper are threefold:

• A proof that P2|prec, rj, pj = 1|∑Ci, and hence P2|prec, rj, pj = 1|Cmax, are ideal problems: each instance has a schedule that minimizes both the maximum completion time and the total completion time.

• To the extent that ideal schedules are desirable, the introduction of greater scheduling flexibility in the form of preemptions is a curse as well as a blessing: we have shown by example that P2|pmtn, prec, rj, pj = 1|∑Ci is not ideal even if the precedence relation is simplified to intrees. The pursuit of good, if not optimal, Cmax and ∑Ci performance can then turn to bi-criterion scheduling, whereby a min-∑Ci schedule is found only within the class of min-Cmax schedules, or vice versa: a min-Cmax schedule is found only within the class of min-∑Ci schedules.

• The algorithm proposed in this paper for finding ideal schedules yields a novel algorithm for solving P2|prec, rj, pj = 1|∑Ci; its running time of O(n^3) improves substantially over the O(n^9) running time of the most efficient algorithm known heretofore.

Acknowledgment

The authors are grateful to two anonymous referees for constructive comments that helped to improve the presentation. This research has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Grant OPG0105675 and the National Science Centre (NCN) of Poland Grant N519 643340. Moreover, Dariusz Dereniowski has been supported by the Polish Ministry of Science and Higher Education (MNiSW) grant N N516 196437.

¹We observe for the sake of completeness that Garey and Johnson in [9] give an algorithm for the more general problem P2|prec, pj = 1, rj, dj|Cmax, with both release dates and deadlines. Their algorithm runs in O(n^3) time.


References

[1] P. Baptiste and V.G. Timkovsky. Shortest path to nonpreemptive schedules of unit-time jobs on two identical parallel machines with minimum total completion time. Mathematical Methods of Operations Research, 60(1):145–153, 2004.

[2] E.G. Coffman Jr. and R.L. Graham. Optimal scheduling for two-processor systems. Acta Informatica, 1:200–213, 1972.

[3] E.G. Coffman Jr., J. Sethuraman, and V.G. Timkovsky. Ideal preemptive schedules on two processors. Acta Informatica, 39(8):597–612, 2003.

[4] M.J. Fischer and A.R. Meyer. Boolean matrix multiplication and transitive closure. In Proceedings of the 12th Annual Symposium on Switching and Automata Theory (SWAT 1971), pages 129–131, Washington, DC, USA, 1971. IEEE Computer Society.

[5] R.W. Floyd. Algorithm 97: Shortest path. Communications of the ACM, 5(6):345, 1962.

[6] M. Fujii, T. Kasami, and K. Ninomiya. Optimal sequencing of two equivalent processors. SIAM Journal on Applied Mathematics, 17:784–789; Erratum: 20:141, 1971.

[7] H.N. Gabow. An almost-linear algorithm for two-processor scheduling. Journal of the ACM, 29:766–780, 1982.

[8] M.R. Garey and D.S. Johnson. Scheduling tasks with nonuniform deadlines on two processors. Journal of the ACM, 23(3):461–467, 1976.

[9] M.R. Garey and D.S. Johnson. Two-processor scheduling with start-times and deadlines. SIAM Journal on Computing, 6(3):416–426, 1977.

[10] R.L. Graham, E.L. Lawler, J.K. Lenstra, and A.H.G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. Annals of Discrete Mathematics, 5:287–326, 1979.

[11] S. Warshall. A theorem on Boolean matrices. Journal of the ACM, 9(1):11–12, 1962.
