Multiprocessors Scheduling for Imprecise Computations in a Hard Real-Time Environment

Ashok Khemka, K.V. Subrahmanyam
Tata Institute of Fundamental Research
Bombay 400 005, INDIA
[email protected]

R.K. Shyamasundar
Computer Science Group
Tata Institute of Fundamental Research
Bombay 400 005, INDIA
[email protected]

Abstract

This paper discusses the problem of scheduling multiprocessors in a hard real-time environment allowing imprecise computations. When results of the desired quality cannot be produced in time, intermediate, imprecise results of acceptable quality are accepted. In the imprecise computation model, a task may be terminated any time after it has produced an acceptable result. Such tasks are logically decomposed into two parts: a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result; the optional parts need never be completed. The quality of the result of each job is measured in terms of the average error in the results over several consecutive periods. Given n real-time periodic jobs, each with a period and a processing requirement per period, we examine the problem of determining whether there exists a preemptive schedule on m identical or uniform machines that completes the mandatory portion of each job within its period. A combination of network flow techniques and a convex programming formulation is used to construct a minimum-error schedule whenever a feasible schedule exists. The error due to the uncomputed portion of a task is assumed to be a real-valued convex function of that uncomputed portion.

1 Introduction

In a hard real-time system, a timing fault is said to occur when a real-time task fails to complete before its deadline. A new approach, called the imprecise computation approach, has been proposed as a means to avoid these timing faults of real-time tasks [1, 2, 3, 9]. This approach makes available results of poorer, but acceptable, quality when the result of the desired quality may not be available in time. We assume that the real-time processes are monotone, i.e., the error function is a non-increasing function of time: as more time is spent on a task, the accuracy of the intermediate result is non-decreasing. The result upon completion of both parts of a real-time task is said to be precise. In a system that supports imprecise computations, run-time support is provided to record the intermediate results produced by each real-time process at appropriate instances of the job's execution. There have been a number of results on scheduling uniprocessor systems for imprecise computations [1, 2, 3, 9]. In this paper, we are concerned with the problem of scheduling periodic tasks on multiprocessors under the imprecise computation approach.

A monotone process that may be terminated any time after it has produced an acceptable result is modelled as a task that is logically decomposed into a mandatory part followed by an optional part. The mandatory part must be completed to produce an acceptable result; in a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines and improves the result produced by the mandatory part. The error in the result is further reduced as the optional part is allowed to execute longer. In this way, our workload differs from the traditional, deterministic models [6]. There is another type of workload model of imprecise computation [3], where errors in different periods of a task have cumulative effects, making it necessary to complete the optional part in one period among several consecutive periods. Hence, it becomes necessary to generate precise results in some periods. Examples of this model include tracking and control. The schedulability of this type of workload is discussed in [3]. For our purposes, we consider only the average errors of jobs over several consecutive periods. Examples of this type of model include image enhancement and speech processing. The optional portions of tasks need never be completed. A predictable strategy is needed to schedule the mandatory parts of tasks, guaranteeing that all deadlines are met, and a less conservative strategy to schedule the optional parts, making the best possible use of processor time.

In this paper, we formulate the problem of scheduling such a set of n periodic jobs on m (m > 1) identical or uniform processors as a min-cost-max-flow problem. We further show how to reduce the edge complexity of the network by organizing the nodes corresponding to the time intervals in a binary search tree. When the error is a linear function of the uncomputed optional portions of the tasks, we construct a schedule from the flow values obtained by solving a min-cost-max-flow problem. For real-valued convex error functions, a convex programming formulation of the network is presented. A solution to this optimization problem can then be used to construct a feasible schedule.

2 Workload Model

Consider a set of preemptable periodic tasks {1, 2, ..., n}, with period T_i for task i, having the following characteristics:

- Each version of a periodic task must complete before the arrival of the next version of the task. This implies that the deadline D_i of each task i is the same as its period T_i.
- At any instant, at most one version of a task remains active.
- All tasks are in phase and their arrival times are zero. Therefore, the kth version of a task i is active during the time interval [(k-1)·T_i, k·T_i].
- The timing requirements are rational.

Let D = LCM{T_i} (the least common multiple of the periods) and T = GCD{T_i} (their greatest common divisor). Let the total computation time of task i be C_i. We further assume that tasks are mutually independent. The minimum execution time m_i is the amount of processor time required by task T_i to produce an acceptable result. Each task T_i is logically decomposed into:

1. A mandatory part M_i with execution time m_i. The start time and deadline of M_i are the same as the start time and deadline of T_i.
2. An optional part O_i with execution time o_i = C_i - m_i. The start time of O_i is the finish time of M_i and the deadline of O_i is the deadline of T_i.
3. A task i is further characterized by a parameter w_i, which describes the relative urgency or importance of the task.

The jth version of T_i, M_i, and O_i is referred to as T_{i,j}, M_{i,j}, and O_{i,j}, with computational requirements C_{i,j}, m_{i,j}, and o_{i,j} respectively. Further, we assume that there are m parallel processors. Processors are either identical, with equal processing speeds, or uniform, with processing speed s_i for processor i. Each task requires at most one processor at a time and each processor executes at most one task at a time. The processors are capable of executing any task. Tasks are independent and can be preempted at no cost. Using the network flow approach, [4] shows how to determine whether a preemptive schedule exists on m identical or uniform machines for n tasks with release times, computation times, and deadlines. A schedule which minimizes the mean weighted execution time loss is described in [5] using the min-cost-max-flow approach. In this paper, we extend this technique to the case of periodic tasks and also reduce the edge complexity of the constructed network by a factor of O(log(D/T)/(D/T)). For generalised convex error functions, we reduce the network flow formulation to a convex programming problem to solve for the flow values. We also show how to construct an actual schedule for each of the intervals of length T from the flow values of the revised network.

This workload model of imprecise computations differs from traditional scheduling models because the total processor time assigned to a task in a valid schedule can be less than its execution time. Hence, in this model, a valid schedule is one in which the assigned time of every task is at least equal to its minimum execution time. If the assigned time equals its total computation time, the error due to the task is said to be zero. If the assigned time is less than its execution time, the error is a non-increasing function of the assigned time. A schedule in which the assigned time of every task is equal to its execution time is called a precise schedule. Only precise schedules are valid schedules in the traditional sense. The term feasible schedule of the task set refers to a schedule in which every task meets its deadline; in other words, the total processor time assigned to every version of task i is at least m_i, for all tasks i.
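To make the workload parameters concrete, the short sketch below is our own illustration (not part of the paper; it needs Python 3.9+ for math.lcm and assumes integral clock-tick timing, as in the Appendix). It records a task set and derives D, T, and the number of versions of each task in [0, D]; all names are ours.

    from dataclasses import dataclass
    from functools import reduce
    from math import gcd, lcm

    @dataclass
    class PeriodicTask:
        period: int      # T_i (= deadline D_i), in clock ticks
        total: int       # C_i, total computation time per period
        mandatory: int   # m_i, minimum execution time per period
        weight: float    # w_i, relative importance of the task

        @property
        def optional(self) -> int:
            # o_i = C_i - m_i, the optional part's execution time
            return self.total - self.mandatory

    tasks = [PeriodicTask(period=4, total=3, mandatory=2, weight=1.0),
             PeriodicTask(period=6, total=4, mandatory=1, weight=2.0)]

    D = reduce(lcm, (t.period for t in tasks))   # hyperperiod, LCM of the T_i
    T = reduce(gcd, (t.period for t in tasks))   # block length, GCD of the T_i
    versions = [D // t.period for t in tasks]    # D/T_i versions of task i in [0, D]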

3 Reduction to Network Flow Problem

First, let us construct the basic network model for each task along the lines of [4, 5]. There is a T-node for each task and two MO-nodes corresponding to its mandatory and optional subparts. The I-nodes stand for the disjoint time intervals. An arc connects an MO-node to an I-node if the task corresponding to the node is available in the corresponding interval. The structure of the network for each task is shown in Figure 1a. To model the periodic task case, for each task i we have to consider D/T_i tasks, corresponding to that many versions of task i in an interval of length D. Our time interval of interest in the periodic case is [0, D]. Thus, there are a total of D/T I-nodes, each corresponding to an interval of length T. If the maximum flow in the network of Figure 1a without the MO-nodes corresponding to the optional portions of tasks is less than the total mandatory demand ∑_{i=1}^{n} ∑_{j=1}^{D/T_i} m_{i,j}, then no feasible flow exists. If there is a feasible flow, a feasible schedule is constructed from the flow values. To minimise the mean weighted execution time loss, we could instead adopt a min-cost-max-flow approach as in [5]. For generalised real-valued convex error functions, we formulate the problem as that of minimising a convex function subject to linear constraints obtained from the underlying network. A solution to the minimisation problem can be used to construct a feasible schedule. In the underlying network, O(V) = O(n · D/T) and O(E) = O(n · D/T). We can reduce O(E) to O(n log(D/T)) by organizing the interval nodes in a binary search tree, as shown in Figure 1b. This reduces the complexity of the network and hence the number of linear constraints in the minimisation problem. We are of the opinion that the number of edges can be further reduced; however, we have not been able to do so.
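The feasibility test above can be run with any off-the-shelf max-flow routine. The following sketch is our own illustration of the flat formulation of Figure 1a for identical processors, without the tree organization of Figure 1b; it assumes the networkx package and the hypothetical PeriodicTask list of the sketch in Section 2.

    import networkx as nx

    def mandatory_feasible(tasks, m, D, T):
        """Feasibility test for the mandatory parts on m identical processors,
        using the flat version-to-interval network (no tree nodes): the mandatory
        parts can all be scheduled iff the max flow equals the total demand."""
        G = nx.DiGraph()
        demand = 0
        for i, t in enumerate(tasks):
            for j in range(D // t.period):              # versions of task i in [0, D]
                v = ("M", i, j)
                G.add_edge("S1", v, capacity=t.mandatory)
                demand += t.mandatory
                first, last = j * t.period // T, (j + 1) * t.period // T
                for k in range(first, last):            # blocks inside this version's window
                    # no task may use more than T units of any single block
                    G.add_edge(v, ("I", k), capacity=T)
        for k in range(D // T):
            G.add_edge(("I", k), "S2", capacity=m * T)  # m processors, T units each
        value, _ = nx.maximum_flow(G, "S1", "S2")
        return value >= demand

    # e.g. with the task list of the previous sketch:
    # mandatory_feasible(tasks, m=1, D=12, T=2)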

3.1 Schedule Construction: Identical Machines

Let us denote the interval [(i-1)·T, i·T] as interval i. We now show how to construct a schedule from the flow values of the binary search tree organization. We consider the intervals at the lowermost level first and then move up the tree level by level.

Scheduling Algorithm
1. l ← 0 (here, l denotes the level of the intervals in the binary tree).
2. For each interval at level l, if there are tasks with some flow in the interval, apply algorithm SA1 (given in the Appendix, taken from [7]) to that set of tasks for the respective interval.
3. l ← l + 1 (consider the next level of the tree).
4. If l > level of the root, then stop; else go to step 2.

Consider step 2 of the algorithm for the non-leaf nodes of the tree in detail. Intervals corresponding to the leaf nodes of the binary search tree are referred to as simple intervals, and intervals corresponding to the non-leaf nodes are referred to as compound intervals. It is easy to see that a compound interval is a combination of consecutive simple intervals. For a simple interval, it is straightforward to apply algorithm SA1. A simple interval k can be described by a pair of values (i_k, j_k), which correspond to the least-indexed available processor and the earliest available time unit on that processor. Let the compound interval for which a schedule is to be built be [i·T, j·T], where i < j. A schedule for a compound interval is obtained from its sequence of simple intervals as follows:

1. Find the simple interval with the least current i_k value (i.e., the least-indexed available processor). If there is more than one such interval, take the least-indexed interval.
2. Allocate tasks on processor i_k in that interval only. If the task's requirement is not met, apply step 1 again.
3. Go to the next task of that interval.

Lemma 1: There are at most (n + m - 1) · D/T preemptions in an interval of length D.
Proof: Since in any simple interval each task is considered just once, there can be at most (n + m - 1) preemptions per simple interval, hence the lemma. □
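The binary tree of interval nodes that the algorithm above walks level by level can be realized as a segment tree over the D/T simple intervals: a task version whose window spans a contiguous run of simple intervals is attached only to the O(log(D/T)) tree nodes that exactly cover that run, which is the source of the edge reduction claimed in Section 3 and in Lemma 2 below. A minimal sketch of that decomposition (our own naming and 0-based, half-open interval indexing; not code from the paper):

    def covering_nodes(lo, hi, node_lo, node_hi, node_id, out):
        """Append (node_id, span) for every tree node whose span tiles the window
        [lo, hi); node `node_id` spans the simple intervals [node_lo, node_hi)."""
        if hi <= node_lo or node_hi <= lo:
            return                                       # node disjoint from the window
        if lo <= node_lo and node_hi <= hi:
            out.append((node_id, node_lo, node_hi))      # fully covered: one arc suffices
            return
        mid = (node_lo + node_hi) // 2
        covering_nodes(lo, hi, node_lo, mid, 2 * node_id, out)      # left child
        covering_nodes(lo, hi, mid, node_hi, 2 * node_id + 1, out)  # right child

    # With 8 simple intervals, a version active over the window [2, 7) is attached
    # to only 3 tree nodes (spanning [2,4), [4,6) and [6,7)) instead of 5 leaves.
    arcs = []
    covering_nodes(2, 7, 0, 8, 1, arcs)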

3.2 Schedule Construction: Uniform Processors

Index the m processors as 1, 2, ..., m with speeds s_1 ≥ s_2 ≥ ... ≥ s_m. A task that runs on machine i for t time units completes s_i · t units of processing. Thus, if t_{ij} is the time job j runs on machine i, it is necessary that

    \sum_{i=1}^{m} s_i \, t_{ij} = p_j

in order for task j to be completed. Processors with such speeds are called uniform.

The solution for each interval in the case of uniform processors is again obtained as a network flow problem. Within each interval the set of available tasks does not change; so, given the amount of processing to be done on each job within the interval (the flow values on the arcs from the job nodes to the interval), scheduling these jobs within the interval becomes an instance of the problem in which all tasks have a common release time and a common deadline. The necessary and sufficient condition for schedulability on m uniform processors was given by Horvath et al. [8]: if q_1 ≥ q_2 ≥ ... ≥ q_k are the amounts of processing to be done on k jobs, then there is a schedule which completes these processing amounts within an interval of length T if and only if

    q_1 \le s_1 T
    q_1 + q_2 \le (s_1 + s_2) T
        \vdots
    q_1 + q_2 + \cdots + q_{m-1} \le (s_1 + \cdots + s_{m-1}) T
    q_1 + q_2 + \cdots + q_k \le (s_1 + \cdots + s_m) T

Thus, it can easily be determined whether the processing amounts within an interval can be scheduled (a code sketch of this test is given at the end of this subsection). Assuming that tasks are preemptible at rational time units, it is easy to build a schedule for each interval from the flow values on the arcs from task nodes to interval nodes using algorithm SA1 described in the Appendix; we only need to change the length of the interval on processor i to s_i · T. Note that, by the above m inequalities, no single task is ever executed on more than one processor simultaneously. As before, to reduce the edge complexity of the whole network we organize the interval nodes into a binary search tree and obtain the actual schedule from the flow values as in the identical machine case.

Lemma 2: The edge complexity reduces by a factor of O((log(D/T))/(D/T)).
Proof: Same as in the identical machine case. Note that we now have m rooted trees instead of just one as in the identical processor case, and we build the schedule for the lowermost level of all trees first, then for the second level, and so on.
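As promised above, the per-interval schedulability test on uniform processors reduces to checking the m inequalities of [8]. A minimal sketch of that check (the function name and the sorting of the inputs are our own):

    def uniform_interval_feasible(amounts, speeds, T):
        """Check whether processing amounts q_1, ..., q_k can be completed within
        one interval of length T on uniform processors with speeds s_1, ..., s_m.
        Inputs may be given in any order; they are sorted non-increasingly here."""
        q = sorted(amounts, reverse=True)
        s = sorted(speeds, reverse=True)
        m = len(s)
        # the j largest demands must fit on the j fastest processors, j = 1..m-1 ...
        for j in range(1, m):
            if sum(q[:j]) > sum(s[:j]) * T:
                return False
        # ... and the total demand must not exceed the total capacity
        return sum(q) <= sum(s) * T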

4 Convex Programming Formulation

We assume that the error functions are real-valued convex functions of the uncomputed portions of the optional parts. Then, the flow values on the arcs of the min-cost-max-flow problem can be obtained by solving the following convex programming optimization problem, derived from the network flow problem:

minimize

    \sum_{i \in O} w_i \, \Phi_i(f_i)

where the \Phi_i(f_i) are convex functions of the flow values f_i on the arcs from the nodes corresponding to the optional portions O_i to the sink S_2, and O is the set of all optional tasks O_i. The set of linear constraints is

    0 \le f_{i,j} \le c_{i,j}, \qquad \forall (i,j) \in E

    \sum_{\{j \mid (i,j) \in E\}} f_{i,j} - \sum_{\{j \mid (j,i) \in E\}} f_{j,i} \ge \sum_{i=1}^{n} \sum_{j=1}^{D/T_i} m_{i,j}, \qquad i = S_1

    \sum_{\{j \mid (i,j) \in E\}} f_{i,j} - \sum_{\{j \mid (j,i) \in E\}} f_{j,i} = 0, \qquad i \notin \{S_1, S_2\}

where E denotes the set of all edges of the directed graph corresponding to the network. The values c_{i,j} are the capacities of the corresponding edges, while S_1 and S_2 are the source and sink nodes respectively. If there is a feasible solution, then there is an optimal solution to the above problem, which can be used to build an actual schedule. Reducing the edge complexity of the network reduces the number of linear constraints in this convex programming formulation.
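For illustration, the program above can be handed directly to an off-the-shelf convex solver. The sketch below is our own encoding, not part of the paper: it assumes the cvxpy package, represents the network as an explicit edge list, and uses a weighted quadratic error w_i · (c_i - f_i)^2 as one concrete convex choice for the \Phi_i.

    import cvxpy as cp
    import numpy as np

    def optional_flows(edges, capacity, optional_arcs, weights, nodes, demand):
        """edges: list of (u, v) arcs; capacity[e]: capacity of arc e;
        optional_arcs: indices of arcs from O_i-nodes to the sink "S2";
        weights[e]: weight w_i of optional arc e; demand: total mandatory work."""
        c = np.asarray(capacity, dtype=float)
        f = cp.Variable(len(edges), nonneg=True)            # one flow variable per arc

        def net_outflow(x):
            out_flow = sum(f[e] for e, (u, _) in enumerate(edges) if u == x)
            in_flow = sum(f[e] for e, (_, v) in enumerate(edges) if v == x)
            return out_flow - in_flow

        constraints = [f <= c]                               # 0 <= f_e <= c_e
        constraints += [net_outflow(x) == 0                  # flow conservation
                        for x in nodes if x not in ("S1", "S2")]
        constraints.append(net_outflow("S1") >= demand)      # all mandatory parts scheduled
        # one concrete convex error: weighted squared uncomputed optional work
        error = sum(weights[e] * cp.square(c[e] - f[e]) for e in optional_arcs)
        problem = cp.Problem(cp.Minimize(error), constraints)
        problem.solve()
        return f.value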

5 Conclusion

In this paper, we have described preliminary results on modelling the imprecise computation scheduling problem for multiprocessors in a hard real-time environment as a network flow problem. When the error is a linear function of the execution time loss, we solve a min-cost-max-flow problem. The complexity of the network is reduced by organising the interval nodes in a binary tree. Further, we construct an actual schedule for each of the time intervals in both the identical and the uniform processor cases. An interesting question is whether the network complexity can be reduced further. For general convex error functions, we reduce the network flow problem to a convex programming problem. It would be interesting to study simple heuristics and approximation techniques for solving this minimisation problem.

References

[1] K.J. Lin, S. Natarajan, and J.W.S. Liu, Imprecise Results: Utilizing Partial Computations in Real-Time Systems, Proc. IEEE Real-Time Systems Symp., 1987.
[2] J.W.S. Liu, K.J. Lin, and S. Natarajan, Scheduling Real-Time, Periodic Jobs Using Imprecise Results, Proc. IEEE Real-Time Systems Symp., 1987.
[3] J.-Y. Chung, J.W.S. Liu, and K.-J. Lin, Scheduling Periodic Jobs That Allow Imprecise Results, IEEE Trans. on Computers, Vol. 39, No. 9, Sept. 1990.
[4] Charles Martel, Preemptive Scheduling with Release Times, Deadlines, and Due Times, J. ACM, Vol. 29, No. 3, July 1982, pp. 812-829.
[5] J. Blazewicz and G. Finkel, Minimizing Mean Weighted Execution Time Loss on Identical and Uniform Processors, Information Processing Letters, 24 (1987), pp. 259-263.
[6] E.G. Coffman, Jr., and R. Graham, Scheduling Theory, New York: Wiley, 1976.
[7] Ashok Khemka and R.K. Shyamasundar, Multiprocessor Scheduling of Periodic Tasks in a Hard Real-Time Environment, Proc. IPPS 92, pp. 76-81, March 1992 (full version to appear in IJHSC).
[8] E.C. Horvath, S. Lam, and R. Sethi, A Level Algorithm for Preemptive Scheduling, J. ACM, Vol. 24, No. 1, Jan. 1977, pp. 32-43.
[9] W. Shih, J.W.S. Liu, and J. Chung, Algorithms for Scheduling Imprecise Computations with Timing Constraints, SIAM J. Computing, Vol. 20, No. 3, pp. 537-552, June 1991.

APPENDIX

Let {τ_1, τ_2, ..., τ_m} be a set of m periodic jobs with computation times {C_1, C_2, ..., C_m} and periods {D_1, D_2, ..., D_m} respectively. Consider the problem of constructing a valid preemptive schedule of the m tasks on n processors such that (1) at any instant, at most one task is executed on any single processor, and (2) no single task is executed on more than one processor at the same instant. Assumption (2) implies that the kth (k ≥ 1) instance of a task τ_i must be computed in full between the time units (k-1)·D_i and k·D_i. Let the computation times and periods of the tasks be expressed as integral multiples of the processor clock tick; thus, a task can be preempted only at integral time units. Let T = GCD{D_1, D_2, ..., D_m}, D = LCM{D_1, D_2, ..., D_m}, and C_i ≤ D_i for each task τ_i. Let the utilization factor U of the set of tasks {τ_1, τ_2, ..., τ_m} be ∑_{i=1}^{m} (C_i / D_i). We call the time intervals [0, T], [T, 2T], [2T, 3T], ... blocks of length T each. Let U ≤ n, for some integer n. The scheduling steps described in [7] are given below:

Algorithm SA1

Let N_i = T · C_i / D_i, and let t_{i,j} be the jth time unit on processor i. The steps of SA1 are:
1. Initialize: i, j, k ← 1 (i denotes the processor, j the time unit, and k the task number).
2. If j + N_k - 1 > T, then i ← i + 1; j ← j + N_k - T (task k is scheduled on processors i and (i + 1), i.e., task k is allotted the time units [t_{i,j}, t_{i,T}] and [t_{i+1,1}, t_{i+1, j+N_k-T-1}]).
   Otherwise, j ← j + N_k (task k is scheduled on processor i only, i.e., task k is allotted the time units [t_{i,j}, t_{i,j+N_k-1}]).
3. Iterate: k ← k + 1; if k ≤ m, go to step 2 (generate the schedule for the next task).
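For completeness, a compact sketch of SA1 as described above (our own code, not taken from [7]): the per-block amounts N_k are packed onto the processors with a wrap-around rule, assuming integral time units, N_k ≤ T for every k, and total demand at most num_procs · T.

    def sa1_block(amounts, T, num_procs):
        """Wrap-around packing of the per-block amounts N_k onto processors of
        capacity T each, as in SA1. Returns, per processor, a list of
        (task, first_unit, last_unit) pieces with inclusive 1-based unit indices."""
        schedule = [[] for _ in range(num_procs)]
        i, j = 0, 1                       # current processor (0-based) and next free unit
        for k, n in enumerate(amounts):
            if n == 0:
                continue
            if j > T:                     # previous task filled processor i exactly
                i, j = i + 1, 1
            if j + n - 1 > T:             # task k wraps onto the next processor
                schedule[i].append((k, j, T))
                schedule[i + 1].append((k, 1, j + n - T - 1))
                i, j = i + 1, j + n - T
            else:                         # task k fits on processor i
                schedule[i].append((k, j, j + n - 1))
                j += n
        return schedule

    # Example: N = [3, 4, 2, 5] with T = 6 on 3 processors.  Task k = 1 (N_1 = 4)
    # is split: units [4, 6] on the first processor and unit 1 on the second;
    # since N_k <= T, the two pieces never overlap in time.
    print(sa1_block([3, 4, 2, 5], T=6, num_procs=3))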