Bicriteria Scheduling for Parallel Jobs

Dror G. Feitelson and Ahuva Mu’alem
School of Computer Science and Engineering
The Hebrew University, Jerusalem 91904, Israel
email: {feit,ahuva}@cs.huji.ac.il

Abstract

We study off-line and on-line algorithms for parallel settings which simultaneously yield constant approximation factors for both the total weighted completion time and the makespan objectives. By introducing preemptions, we extend the list scheduling algorithm of Garey and Graham in such a way that a parallel job's completion time is dictated solely by its predecessors in the input instance, as is the case with Graham's classical list algorithm for the identical parallel machine model. This suggests the applicability of many ordering techniques which have previously been adapted to the single machine and identical parallel machine settings.

Introduction

Recently, scheduling so as to approximate more than one criterion has received considerable attention [ARSY99, CPSSSW96, Sch96, SW97]. We present approximation techniques and new results for several scheduling models of parallel jobs, in order to minimize both the total (weighted) completion time and the makespan objectives. In these problems each job may demand more than one processor for execution, and hence is called a parallel job. We consider m identical processors and n independent parallel jobs to be scheduled. The jobs are independent in the sense that no precedence constraints are imposed. Each job j has a positive processing time p_j, and a number of processors needed for execution m_j, 1 <= m_j <= m. A processor can process at most one job at a time. Exactly m_j processors are required at the same starting time to run job j; any set of m_j processors is proper for the execution of j. A job j may be preempted from time to time and later continue to run on a possibly different set of m_j processors, without affecting its total processing time p_j. A job j may have a release date r_j before which it cannot be processed. Let C_j denote the completion time of job j in a feasible schedule. Our goal is to design bicriteria approximation algorithms that simultaneously minimize the total (weighted) completion time ∑_j C_j (or ∑_j w_j C_j) together with the makespan, the largest C_j, denoted by C_max. We consider three models. The weighted model with release dates is denoted P|m_j, pmtn, r_j|∑w_jC_j, C_max.¹

¹ In this notation, suggested by Graham et al. [GLLR79], each scheduling problem is denoted by α|β|γ, where α is either 1 or P, denoting that the scheduling environment contains either one processor or m identical processors; γ indicates the objectives ∑w_jC_j or ∑C_j and/or C_max; and β optionally contains r_j, pmtn, m_j, indicating whether there are non-trivial release date constraints, whether preemption is allowed, and whether more than one processor may be required by a job. Here m_j differs from its original definition by [GLLR79], where it was used to denote the number of operations in a job shop model (they did not refer to parallel jobs).


Table 1: Summary of Schwiegelshohn's deterministic off-line results [Sch96]

	model		approx. factor for ∑(w_j)C_j	approx. factor for C_max
	unweighted	2.37				3.20
	unweighted	2.42				3.00
	weighted	7.11				9.60
	weighted	7.26				9.00

This weighted model with release dates, P|m_j, pmtn, r_j|∑w_jC_j, C_max, is denoted the general model. Similarly, P|m_j, pmtn, r_j|∑C_j, C_max is the unweighted general model. The restricted model has no release dates, and is denoted P|m_j, pmtn|∑w_jC_j, C_max.

Review of Techniques

Our algorithms and their analysis are inspired by the work of Phillips et al. [PSW98]. They introduced algorithms for converting preemptive schedules into nonpreemptive ones, by scheduling the jobs in the order of their completion times in a preemptive schedule. This technique was generalized in many deterministic and randomized off-line and on-line results (e.g., [HSSW97, SS02, CMNS97]). Our main contribution is the extension of this ordering technique to the case of parallel job settings. Our on-line technique gives constant competitive factors for both criteria, and can be considered a simple yet non-trivial generalization of the one-machine preemptive approximation algorithm introduced by Chekuri et al. [CMNS97]. Plugging in our preemptive parallel list algorithm, we get a straightforward generalization of the work of Hall et al. [HSSW97], and achieve constant approximation factors for both criteria in the off-line case. This demonstrates the applicability of our technique within the framework of the ordering techniques, as most existing algorithms for these realistic parallel job settings are rather complicated.

Related Work and Results

The problem P|pmtn, r_j|∑C_j was shown to be NP-hard by Labetoulle et al. [LLLR84]. The problem P|m_j, pmtn|∑C_j is NP-hard [Droz94]. A detailed overview of parallel job scheduling problems is given in [Droz96]. For parallel jobs with no preemptions and trivial release dates, Schwiegelshohn et al. [SLWTY98] introduced the SMART off-line algorithms. They achieved an 8-approximation for the model P|m_j|∑C_j and a 10.45-approximation for the weighted model P|m_j|∑w_jC_j.
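The ordering technique of Phillips et al. reviewed above is easiest to see on one machine, where SRPT is an optimal preemptive rule for ∑C_j with release dates. The following sketch is our own illustration of the conversion (function names and the instance are assumptions, not the paper's code): compute preemptive completion times, then schedule nonpreemptively in that order.

```python
import heapq

def srpt_completions(jobs):
    """Preemptive completion times on one machine under SRPT.
    jobs: list of (r_j, p_j) pairs."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    heap, t, i = [], 0.0, 0          # heap of (remaining time, job index)
    comp = [0.0] * len(jobs)
    while heap or i < len(order):
        if not heap:                 # idle until the next release
            t = max(t, jobs[order[i]][0])
        while i < len(order) and jobs[order[i]][0] <= t:
            j = order[i]
            heapq.heappush(heap, (jobs[j][1], j))
            i += 1
        rem, j = heapq.heappop(heap)
        # run j until it finishes or the next job is released
        step = rem if i == len(order) else min(rem, jobs[order[i]][0] - t)
        t += step
        if step < rem:
            heapq.heappush(heap, (rem - step, j))
        else:
            comp[j] = t
    return comp

def one_machine_convert(jobs):
    """Nonpreemptive schedule: run jobs in the order of their preemptive
    completion times (the Phillips et al. conversion idea)."""
    comp = srpt_completions(jobs)
    t, nonpre = 0.0, [0.0] * len(jobs)
    for j in sorted(range(len(jobs)), key=comp.__getitem__):
        t = max(t, jobs[j][0]) + jobs[j][1]
        nonpre[j] = t
    return nonpre
```

For the instance [(0, 4), (1, 1)], SRPT preempts the long job and gives preemptive completions [5.0, 2.0]; the converted nonpreemptive schedule runs the short job first and gives [6.0, 2.0].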
Schwiegelshohn [Sch96] presented several versions of the off-line preemptive deterministic algorithm PSRS, which can be fine-tuned to minimize either the total completion time or the makespan (see Table 1; for each model there are two possible scheduling algorithms, of which the first better approximates the total completion time and the second better approximates the makespan). Chakrabarti et al. [CPSSSW96] presented a general technique for designing on-line bicriteria algorithms with constant approximation ratios. Their on-line framework is based on partitioning the time horizon into intervals, where at the beginning of each interval they run a dual FPAS packing to choose a promising subset of already released jobs to list schedule in the current interval. For the model of weighted malleable jobs² with no preemption allowed, they construct a deterministic algorithm that gives constant factors for both criteria. Allowing randomization, they construct an algorithm that produces a schedule whose makespan is at most 8.67 times the optimal and whose total weighted completion time is at most a constant factor times the optimal.

¹ Drozdowski [Droz96] uses instead the notation size_j to denote the number of processors needed for the execution of a parallel job.
² A malleable job is one whose execution time is a function of the number of (identical) processors allotted to it.


Table 2: Summary of results

	model			type			approx./competitive ratio for ∑(w_j)C_j		for C_max
	restricted		off-line, deterministic	4						2
	general			off-line, deterministic	5						3
	unweighted general	on-line, deterministic	6						3
	general			on-line, deterministic	12						3

The result also holds with the constants reversed. As our general model is a special case of this weighted malleable model, we show, by suggesting a simple modification with preemptions allowed, how to reduce the approximation ratio of the makespan to 3. The rest of our results are summarized in Table 2.

Organization of this paper

In Section 1 we review a nonpreemptive list scheduling algorithm introduced by Garey and Graham, and its preemptive modification. In Section 2 we consider on-line algorithms for the general and the unweighted general models. In Section 3 we consider off-line algorithms for the general and restricted models.

1 Preemptive List Scheduling Algorithm

First we review a nonpreemptive parallel list scheduling algorithm introduced by Garey and Graham [GG75] for the model P|m_j|C_max. The virtue of adding preemptions to it is that the completion time of each job in the modified algorithm is dictated solely by its predecessors in the input permutation and not by successive jobs, consistently with the classical list algorithm developed by Graham for the identical parallel machine model [Graham66], and in contrast with the original parallel list algorithm [GG75]. This enables us to apply various ordering techniques to the general case of parallel jobs. The main idea of the modified algorithm is that jobs are packed onto shelves in the order defined by the input list. If the next job in the list requires too many processors, it is skipped, and jobs further down the list are considered. Thus the shelf is considered full only when no additional job from the list can fit in. However, each shelf runs only until the first job in the shelf terminates, or until a new job that precedes some scheduled job is released, whichever comes first. When this happens, all jobs are preempted and a new shelf is packed in the same way.

Notations: Let a_j = p_j · m_j denote the total "area" which is occupied by job j. Let r_max = max_j r_j and p_max = max_j p_j denote the largest release date and the largest processing time among the jobs 1, ..., n, respectively. We shall denote the makespan of algorithm A by C_max^A, and the makespan of the optimal algorithm by C_max^*.

The optimum makespan is bounded from below by

	C_max^* >= max{ (1/m) ∑_j p_j m_j , p_max , r_max },

that is, the maximum between the total area divided by the total number of machines, the maximal running time, and the maximal release date. This trivial lower bound holds for both the preemptive and the nonpreemptive settings.
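The three-term lower bound is straightforward to compute; the instance below is hypothetical, chosen only for illustration:

```python
def makespan_lower_bound(jobs, m):
    """Trivial lower bound max{ area/m, p_max, r_max } on the optimal
    makespan; jobs is a list of (p_j, m_j, r_j) triples."""
    area = sum(p * q for p, q, r in jobs)
    p_max = max(p for p, q, r in jobs)
    r_max = max(r for p, q, r in jobs)
    return max(area / m, p_max, r_max)

# hypothetical instance: 4 processors, three jobs
jobs = [(3, 2, 0), (5, 1, 0), (2, 4, 1)]
print(makespan_lower_bound(jobs, 4))  # area/m = 19/4 = 4.75, p_max = 5 -> 5
```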


Algorithm 1 (List-GG) [GG75] Whenever there are sufficiently many idle processors, schedule some job on as many processors as it requires.

Lemma 1 [GG75] The makespan produced by the greedy algorithm List-GG is (tightly) bounded from above by

	C_max^GG <= 2 · max{ (1/m) ∑_j p_j m_j , p_max },

regardless of the rule by which we choose the job to be scheduled. In particular, List-GG is a 2-approximation for the model P|m_j|C_max.
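A minimal event-driven simulation of List-GG (all jobs released at time zero) lets one check the bound of Lemma 1 on small instances. This is our own sketch, not the authors' implementation, and the instance is made up:

```python
import heapq

def list_gg_makespan(jobs, m):
    """Greedy nonpreemptive list scheduling of rigid parallel jobs.
    jobs: list of (p_j, m_j); all release dates are zero."""
    pending = list(range(len(jobs)))
    running = []                       # heap of (finish_time, m_j)
    free, t = m, 0.0
    while pending or running:
        # start, in list order, every pending job that currently fits
        for j in list(pending):
            p, q = jobs[j]
            if q <= free:
                heapq.heappush(running, (t + p, q))
                free -= q
                pending.remove(j)
        # advance time to the next completion and release its processors
        t, q = heapq.heappop(running)
        free += q
    return t

jobs = [(3, 2), (3, 2), (3, 2)]        # three jobs, 2 processors each
m = 5
area = sum(p * q for p, q in jobs)
# Lemma 1: makespan at most twice max(area/m, p_max)
assert list_gg_makespan(jobs, m) <= 2 * max(area / m, max(p for p, q in jobs))
```

On this instance only two of the three jobs fit together, so the third starts at time 3 and the makespan is 6, within the bound 2 · max(18/5, 3) = 7.2.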

Now we describe our modified parallel list algorithm.

Algorithm 2 (List-GG-PR) A job list L is given, ordered by some permutation π. Each job in the list has m_j <= m.
Base step: Set i = 1. Set t_1 = 0.
Iterative step: Construct shelf i: schedule the first job j in the list L such that r_j <= t_i. Greedily select the next jobs to be scheduled from the remaining jobs in L, by scanning through L in order and selecting the next job on the list whose processor demand is not greater than the number of currently free processors at time t_i, and whose release date is not greater than t_i. Now run the jobs in shelf i. Preempt shelf i from running at the first point in time at which the shortest job in shelf i terminates, or at which some job that is placed in the permutation before a certain job in the current shelf is released, whichever is earlier. Update the remaining processing times of the jobs in the current shelf. If some job has completed its execution, remove it from L. Set i = i + 1. Set t_i to be the current time, or the earliest release date of a job remaining in L, whichever is larger. Repeat until the list L is empty.
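The iterative shelf construction can be simulated directly. The sketch below (our own code and instance, with hypothetical names) returns the completion time of each job under the shelf rule described above:

```python
def list_gg_pr(jobs, m):
    """Shelf-based preemptive list scheduling (List-GG-PR sketch).
    jobs: list of (p_j, m_j, r_j) in the order of the permutation pi."""
    rem = [p for p, q, r in jobs]         # remaining processing times
    comp = [None] * len(jobs)
    alive = list(range(len(jobs)))        # positions still on the list
    t = 0.0
    while alive:
        t = max(t, min(jobs[j][2] for j in alive))   # wait for a release
        shelf, free = [], m
        for j in alive:                   # pack the shelf in list order
            p, q, r = jobs[j]
            if r <= t and q <= free:
                shelf.append(j)
                free -= q
        # run until the shortest shelf job finishes, or a job preceding
        # some shelf job in the list is released, whichever is earlier
        dur = min(rem[j] for j in shelf)
        last = max(shelf)
        ahead = [jobs[j][2] - t for j in alive
                 if j < last and jobs[j][2] > t]
        if ahead:
            dur = min(dur, min(ahead))
        t += dur
        for j in shelf:
            rem[j] -= dur
            if rem[j] <= 1e-9:
                comp[j] = t
        alive = [j for j in alive if comp[j] is None]
    return comp

print(list_gg_pr([(3, 2, 0), (5, 2, 0), (4, 1, 0)], 5))  # [3.0, 5.0, 4.0]
```

In this made-up instance all three jobs fit on the first shelf; the shelf is preempted when job 1 finishes at time 3, and the survivors are repacked.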





Consider the following example of 5 processors and 4 jobs, all released at time zero. The completion times produced by List-GG and by List-GG-PR differ: in List-GG-PR, job 4 is preempted right after job 1 finishes.



Observe that the greedy style of List-GG-PR causes a preemption to affect only jobs which run at the tail of the current shelf. The completion times produced by List-GG-PR on an induced ordered list consisting of any prefix of π are identical to the respective completion times produced by List-GG-PR on the entire list π. In the above example, if we drop job 4 then both algorithms produce the same schedule for jobs 1, 2, and 3. Note that these completion times are different from the completion times of List-GG for the original instance; however, they are identical to the completion times of List-GG-PR for the original instance.

 


For a given job list L ordered by π, consider the job fragment list L' created by running List-GG-PR on L, splitting each job j into sub-jobs j_1, ..., j_k in the natural way dictated by the shelves and the preemptions that occurred. Note that the processing times of the fragments of job j sum to p_j.