





















Form Approved OMB No. 0704-0188


Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503. 1. AGENCY USE ONLY (Leave Blank)



2. REPORT DATE Sept. 9, 1999

3. REPORT TYPE AND DATES COVERED final, Nov. 1, 1998 to Sept. 30, 1999

4. TITLE AND SUBTITLE 1999 Workshop on Heterogeneous Computing 5. FUNDING NUMBERS N00014-99-1-0117

6. AUTHOR(S) H. J. Siegel


7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907-1285 9. SPONSORING / MONITORING AGENCY NAME(S) AND ADDRESS(ES)

Dr. Andre M. van Tilborg, Director Math, Computer & Information Sciences Division Office of Naval Research Arlington, VA 22217-5660






13. ABSTRACT (Maximum 200 words)



This grant funded the proceedings of the 8th Heterogeneous Computing Workshop (HCW '99), which was held on April 12, 1999. HCW '99 was part of the merged symposium of the 13th International Parallel Processing Symposium and the 10th Symposium on Parallel and Distributed Processing (IPPS/SPDP 1999), which was sponsored by the IEEE Computer Society Technical Committee on Parallel Processing and held in cooperation with ACM SIGARCH. Heterogeneous computing systems range from diverse elements within a single computer to coordinated, geographically distributed machines with different architectures. A heterogeneous computing system provides a variety of capabilities that can be orchestrated to execute multiple tasks with varied computational requirements. Applications in these environments achieve performance by exploiting the affinity of different tasks to different computational platforms or paradigms, while considering the overhead of inter-task communication and the coordination of distinct data sources and/or administrative domains. Topics representative of those in the proceedings include: network profiling, configuration tools, scheduling tools, analytic benchmarking, programming paradigms, problem mapping, processor assignment and scheduling, fault tolerance, programming tools, processor selection criteria, and compiler assistance. 15. NUMBER OF PAGES


14. SUBJECT TERMS heterogeneous computing, distributed computing, high-performance computing 17. SECURITY CLASSIFICATION OF REPORT

UNCLASSIFIED NSN 7540-01-280-5500






UNLIMITED Standard Form 298 (Rev. 2-89) Prescribed by ANSI Std. Z39-18 298-102

PURDUE UNIVERSITY School of Electrical and Computer Engineering 1285 Electrical Engineering Building West Lafayette, Indiana 47907-1285, USA E-mail: [email protected]

Prof. H. J. Siegel Office Phone: 765-494-3444 Office Fax: 765-494-2706 Home Phone: 765-743-3290

September 9, 1999 Defense Technical Information Center 8725 John J. Kingman Road, STE 0944 Ft. Belvoir, Virginia 22060-6218

Enclosed is the final report (that is, form SF-298) for ONR grant number N00014-99-10117, which supported the publication of the enclosed workshop proceedings. ONR's support is greatly appreciated. Yours truly,

H. J. Siegel Professor of Electrical and Computer Engineering cc: Dr. Andre van Tilborg, ONR Grant Administrator, ONR Purdue ECE Business Office


Eighth Heterogeneous Computing Workshop (HCW '99)


Eighth Heterogeneous Computing Workshop (HCW '99) April 12, 1999 San Juan, Puerto Rico Edited by Viktor K. Prasanna, University of Southern California

Cosponsored by IEEE Computer Society's Technical Committee on Parallel Processing U.S. Office of Naval Research

Industrial Affiliate NOEMIX


IEEE COMPUTER SOCIETY Los Alamitos, California • Washington • Brussels • Tokyo

Copyright © 1999 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved

Copyright and Reprint Permissions: Abstracting is permitted with credit to the source. Libraries may photocopy beyond the limits of US copyright law, for private use of patrons, those articles in this volume that carry a code at the bottom of the first page, provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. Other copying, reprint, or republication requests should be addressed to: IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O. Box 133, Piscataway, NJ 08855-1331. The papers in this book comprise the proceedings of the meeting mentioned on the cover and title page. They reflect the authors' opinions and, in the interests of timely dissemination, are published as presented and without change. Their inclusion in this publication does not necessarily constitute endorsement by the editors, the IEEE Computer Society, or the Institute of Electrical and Electronics Engineers, Inc.

IEEE Computer Society Order Number PR00107 ISBN 0-7695-0107-9 0-7695-0108-7 (microfiche) 0-7695-0109-5 (casebound) ISSN 1097-5209

Additional copies may be ordered from: IEEE Computer Society Customer Service Center 10662 Los Vaqueros Circle P.O. Box 3014 Los Alamitos, CA 90720-1314 Tel: + 1-714-821-8380 Fax:+ 1-714-821-4641 E-mail: [email protected]

IEEE Computer Society Asia/Pacific Office Watanabe Bldg., 1-4-2 Minami-Aoyama Minato-ku, Tokyo 107-0062 JAPAN Tel: +81-3-3408-3118 Fax: +81-3-3408-3553 [email protected]

IEEE Service Center 445 Hoes Lane P.O. Box 1331 Piscataway, NJ 08855-1331 Tel: + 1-908-981-1393 Fax: + 1-908-981-9667 [email protected]

Editorial production by Lorretta Palagi Cover art design and production by Alex Torres Printed in the United States of America by Technical Communication Services




Table of Contents

Message from the General Chair
Message from the Program Chair
Message from the Steering Committee Chair
Organizing Committees

Session I: Comparisons of Mapping Heuristics
Chair: Jon Weissman, University of Texas at San Antonio, TX, USA

Task Scheduling Algorithms for Heterogeneous Processors
  Haluk Topcuoglu, Salim Hariri, and Min-You Wu
A Comparison Study of Static Mapping Heuristics for a Class of Meta-tasks on Heterogeneous Computing Systems
  Tracy D. Braun, Howard Jay Siegel, Noah Beck, Ladislau L. Bölöni, Muthucumaru Maheswaran, Albert I. Reuther, James P. Robertson, Mitchell D. Theys, Bin Yao, Debra Hensgen, and Richard F. Freund
Dynamic Matching and Scheduling of a Class of Independent Tasks onto Heterogeneous Computing Systems
  Muthucumaru Maheswaran, Shoukat Ali, Howard Jay Siegel, Debra Hensgen, and Richard F. Freund






Session II: Design Tools
Chair: Ishfaq Ahmad, Hong Kong University of Science and Technology, Hong Kong

An On-Line Performance Visualization Technology
  Aleksandar Bakic, Matt W. Mutka, and Diane T. Rover
Heterogeneous Distributed Virtual Machines in the Harness Metacomputing Framework
  Mauro Migliardi and Vaidy Sunderam
Parallel C++ Programming System on Cluster of Heterogeneous Computers
  Yutaka Ishikawa, Atsushi Hori, Hiroshi Tezuka, Shinji Sumimoto, Toshiyuki Takahashi, and Hiroshi Harada
Are CORBA Services Ready to Support Resource Management Middleware for Heterogeneous Computing?
  Alpay Duman, Debra Hensgen, David St. John, and Taylor Kidd

Session III: Modeling and Analysis
Chair: Steve Chapin, University of Virginia, Charlottesville, VA, USA

Statistical Prediction of Task Execution Times Through Analytic Benchmarking for Scheduling in a Heterogeneous Environment
  Michael A. Iverson, Fusun Özgüner, and Lee C. Potter
Simulation of Task Graph Systems in Heterogeneous Computing Environments
  Noe Lopez-Benitez and Ja-Young Hyon
Communication Modeling of Heterogeneous Networks of Workstations for Performance Characterization of Collective Operations
  Mohammad Banikazemi, Jayanthi Sampathkumar, Sandeep Prabhu, Dhabaleswar K. Panda, and P. Sadayappan





Session IV: Task Assignment and Scheduling
Chair: Fusun Özgüner, The Ohio State University, Columbus, OH, USA

Multiple Cost Optimization for Task Assignment in Heterogeneous Computing Systems Using Learning Automata
  Raju D. Venkataramana and N. Ranganathan
On the Robustness of Metaprogram Schedules
  Ladislau Bölöni and Dan C. Marinescu
A Unified Resource Scheduling Framework for Heterogeneous Computing Environments
  Ammar H. Alhusaini, Viktor K. Prasanna, and C. S. Raghavendra




Session V: Invited Case Studies
Chair: Noe Lopez-Benitez, Texas Tech University, Lubbock, TX, USA

Metacomputing with MILAN
  A. Baratloo, P. Dasgupta, V. Karamcheti, and Z. M. Kedem
An Overview of MSHN: The Management System for Heterogeneous Networks
  Debra A. Hensgen, Taylor Kidd, David St. John, Matthew C. Schnaidt, Howard Jay Siegel, Tracy D. Braun, Muthucumaru Maheswaran, Shoukat Ali, Jong-Kook Kim, Cynthia Irvine, Tim Levin, Richard F. Freund, Matt Kussow, Michael Godfrey, Alpay Duman, Paul Carff, Shirley Kidd, Viktor Prasanna, Prashanth Bhat, and Ammar Alhusaini
QUIC: A Quality of Service Network Interface Layer for Communication in NOWs
  R. West, R. Krishnamurthy, W. K. Norton, K. Schwan, S. Yalamanchili, M. Rosu, and V. Sarat



Adaptive Distributed Applications on Heterogeneous Networks
  Thomas Gross, Peter Steenkiste, and Jaspal Subhlok


Author Index


Message from the General Chair Welcome to the 8th Heterogeneous Computing Workshop. The field of heterogeneous computing is motivated by the diverse requirements of large-scale computational tasks, and the realization that the features of a single architecture are not always ideal for a wide range of task requirements. HCW '99 is the result of the dedication and hard work of a number of people. I thank Richard F. Freund of NOEMIX for founding this series of workshops and for working hard to ensure its continuity and success. Special thanks also go to our industrial supporter, NOEMIX, for providing plaques of recognition to be awarded to individuals who have contributed to the workshop's success over the years. Viktor K. Prasanna of the University of Southern California is this year's Program Chair. I have worked with Viktor on professional activities before, and as always he went above and beyond the call of duty. In addition to the Program Chair's tasks, he also took responsibility for completing tasks that I probably should have taken on in my role as General Chair. So I owe a special thank you to Viktor. With the able assistance of a terrific program committee, he has put together an excellent program and collection of papers in these proceedings. Thanks are also due to the Steering Committee members for their guidance and support, and for the confidence they had in asking me to serve as this year's General Chair. Special thanks go to H. J. Siegel, the Steering Committee Chair, who did a remarkable job of leading that group and conveying ideas to me for enhancing the quality and prestige of the workshop. H. J. worked with Viktor and myself on numerous occasions with regard to planning and decision-making issues. H. J.'s advice, energy, and dedication to this workshop series are truly keys to its overall success. 
The Publicity Chair, Muthucumaru Maheswaran of the University of Manitoba, did an outstanding job of publicizing the workshop through print and on the web. His careful and prompt updating of the workshop's web page was especially useful in keeping the authors informed on guidelines for final submission. HCW '99 is being held in conjunction with IPPS/SPDP '99, the second merger of the International Parallel Processing Symposium (IPPS) and the Symposium on Parallel and Distributed Processing (SPDP). I thank the General Co-Chairs of IPPS/SPDP '99, Jose Rolim and Charles Weems, for their cooperation and assistance, with special thanks to Jose for taking on the responsibility of coordinating and organizing the workshops of IPPS/SPDP '99. This year, the workshop is cosponsored by the IEEE Computer Society and the U.S. Office of Naval Research. These proceedings are published by the IEEE Computer Society Press. Deborah Plummer and Lorretta Palagi, both of the IEEE Computer Society Press, deserve special thanks for their punctuality and professionalism in overseeing the publication of these proceedings. Special thanks go to Lorretta for her efficient handling of the papers included here, and for carefully attending to the many details required to take a proceedings to press. I would also like to thank my secretary, Marcelia Sawyers, for her assistance with my duties related to this workshop. Finally, I would like to thank my wife, Robin, for the loving support and patience she has for me. John K. Antonio Texas Tech University


Message from the Program Chair The papers published in these proceedings represent some of the results from leading researchers in heterogeneous computing (HC). The field of heterogeneous computing has matured over the years. A number of experimental as well as commercial systems continue to be built that integrate hardware, software, and algorithms to realize high-performance systems that satisfy diverse computational needs. The response to the call for participation was excellent. Submissions were sent out to the program committee members for evaluation. In addition to their own reviews, the program committee members sought outside reviews to evaluate the submissions. The final selection of manuscripts was made at USC on November 24, 1998. The contributed papers were grouped into four sessions: Comparisons of Mapping Heuristics, Design Tools, Modeling and Analysis, and Task Assignment and Scheduling. In addition to contributed papers, the program includes a session of Invited Case Studies. I believe the papers represent continuing work as the field matures, and I expect to see revised versions of these papers appear in archival journals. I would like to thank many volunteers for their support. First of all, I want to thank John Antonio, General Chair, H. J. Siegel, Steering Committee Chair, and Richard Freund, who initiated the HCW series, for inviting me to be the program chair. Over the past year, John and H. J. provided me with a number of pointers to resolve meeting-related issues. I want to thank them for their invaluable inputs in composing a strong technical program. It was truly a pleasure working with them. I would like to thank the authors for submitting their work and the program committee members and the reviewers for their efforts in reviewing the manuscripts. I would also like to thank Lorretta Palagi for her patience in working with late camera-ready submissions and for her prompt response to proceedings-related questions. 
Finally, I am thankful to my assistant Henryk Chrostek who handled the submitted manuscripts in a timely manner. Viktor K. Prasanna University of Southern California


Message from the Steering Committee Chair These are the proceedings of the 8th Heterogeneous Computing Workshop, also known as HCW '99. Heterogeneous computing is a very important research area with great practical impact. The topic of heterogeneous computing covers many types of systems. A heterogeneous system may be a set of machines interconnected by a wide-area network and used to support the execution of jobs submitted by a large variety of users to process data that is distributed throughout the system. A heterogeneous system may be a suite of high-performance machines tightly interconnected by a fast dedicated local-area network and used to process a set of production tasks, where the subtasks of each task may execute on different machines in the suite. A heterogeneous system may also be a special-purpose embedded system, such as a set of different types of processors used for automatic target recognition. In the extreme, a heterogeneous system may consist of a single machine that can reconfigure itself to operate in different ways (e.g., in different modes of parallelism). All of these types of heterogeneous systems (as well as others) are appropriate topics for this workshop series. I hope you find the contents of these proceedings informative and interesting, and encourage you to look also at the proceedings of past and future HCWs. Many people have worked very hard to make this workshop happen. Viktor Prasanna, University of Southern California, is this year's Program Chair, and he assembled the great program that is represented by the papers in these proceedings. Viktor did this with the assistance of his Program Committee, which is listed on the next page. John Antonio, Texas Tech University, is the General Chair, and he is responsible for the overall organization and administration of this year's workshop, and he's done an outstanding job. I thank Richard F. 
Freund, NOEMIX, for founding this workshop series, and for asking me to succeed him as Chair of the Steering Committee. This year the workshop is cosponsored by the IEEE Computer Society and the U.S. Office of Naval Research, with additional support from our industrial affiliate NOEMIX. I thank Andre M. van Tilborg, Director of the Math, Computer, & Information Sciences Division of the Office of Naval Research, for arranging funding for the publication of the workshop proceedings (under grant number N00014-99-10117). I thank Richard F. Freund, NOEMIX, for providing the plaque given to Viktor in recognition of his efforts as Program Chair. This workshop is held in conjunction with the Merged International Parallel Processing Symposium & Symposium on Parallel and Distributed Processing (IPPS/SPDP). The HCW series is very appreciative of the constant cooperation and assistance we have received from the IPPS/SPDP organizers. H. J. Siegel School of Electrical and Computer Engineering Purdue University


Organizing Committees

General Chair

John K. Antonio, Texas Tech University

Program Chair

Viktor K. Prasanna, University of Southern California

Steering Committee

Publicity Chair

Program Committee

H. J. Siegel, Purdue University, Chair Francine Berman, University of California at San Diego Jack Dongarra, University of Tennessee and Oak Ridge National Lab Richard F. Freund, NOEMIX Debra Hensgen, Naval Postgraduate School Paul Messina, Caltech Jerry Potter, Kent State University Viktor K. Prasanna, University of Southern California Vaidy Sunderam, Emory University Muthucumaru Maheswaran, University of Manitoba

Gul A. Agha, University of Illinois at Urbana-Champaign Ishfaq Ahmad, The Hong Kong University of Science and Technology Francine Berman, University of California at San Diego Hank Dietz, Purdue University Jack Dongarra, University of Tennessee and Oak Ridge National Lab Ian Foster, Argonne National Laboratory Dennis Gannon, Indiana University Andrew Grimshaw, University of Virginia Babak Hamidzadeh, University of British Columbia Salim Hariri, University of Arizona Debra Hensgen, Naval Postgraduate School Carl Kesselman, ISI/University of Southern California Noe Lopez-Benitez, Texas Tech University Muthucumaru Maheswaran, University of Manitoba Veljko Milutinovic, University of Belgrade Fusun Ozguner, The Ohio State University Beth Plale, Georgia Institute of Technology Cauligi Raghavendra, The Aerospace Corporation Daniel A. Reed, University of Illinois at Urbana-Champaign Jaspal Subhlok, Carnegie Mellon University Rajeev Thakur, Argonne National Laboratory Charles C. Weems, University of Massachusetts Jon B. Weissman, University of Texas at San Antonio

Session I Comparisons of Mapping Heuristics

Chair Jon Weissman University of Texas at San Antonio

Task Scheduling Algorithms for Heterogeneous Processors Haluk Topcuoglu Department of Electrical Engineering and Computer Science Syracuse University, Syracuse, NY 13244-4100 [email protected] Salim Hariri Department of Electrical and Computer Engineering The University of Arizona, Tucson, Arizona 85721-0104 Min-You Wu Department of Electrical and Computer Engineering University of Central Florida

Abstract Scheduling computation tasks on processors is the key issue for high-performance computing. Although a large number of scheduling heuristics have been presented in the literature, most of them target only homogeneous resources. The existing algorithms for heterogeneous domains are not generally efficient because of their high complexity and/or the quality of the results. We present two low-complexity, efficient heuristics, the Heterogeneous Earliest-Finish-Time (HEFT) Algorithm and the Critical-Path-on-a-Processor (CPOP) Algorithm, for scheduling directed acyclic weighted task graphs (DAGs) on a bounded number of heterogeneous processors. We compared the performance of these algorithms against three previously proposed heuristics. The comparison study showed that our algorithms outperform previous approaches in terms of performance (schedule length ratio and speedup) and cost (time complexity).

1. Introduction Efficient scheduling of application tasks is critical to achieving high performance in parallel and distributed systems. The objective of scheduling is to map the tasks onto the processors and order their execution so that task precedence requirements are satisfied and a minimum schedule length (or makespan) is obtained. Since the general DAG scheduling problem is NP-complete, many research efforts have proposed heuristics for

0-7695-0107-9/99 $10.00 © 1999 IEEE

the task scheduling problem [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. Although a wide variety of different approaches are used to solve the DAG scheduling problem, most of them target only homogeneous processors. The scheduling techniques that are suitable for homogeneous domains are limited and may not be suitable for heterogeneous domains. Only a few methods [5, 4, 9, 10] use variable execution times of tasks for heterogeneous environments; however, they are either high-complexity algorithms and/or they do not generally provide good quality of results. In this paper we propose two static DAG scheduling algorithms for heterogeneous environments. They are for a bounded number of processors and are based on list-scheduling heuristics. The Heterogeneous Earliest-Finish-Time (HEFT) Algorithm selects the task with the highest upward rank (defined in Section 2) at each step; the task is then assigned to the most suitable processor, the one that minimizes the earliest finish time, using an insertion-based approach. The Critical-Path-on-a-Processor (CPOP) Algorithm schedules critical-path nodes onto a single processor that minimizes the critical path length. For the other nodes, the task selection phase of the algorithm is based on a summation of downward and upward ranks; the processor selection phase is based on the earliest execution finish time, as in the HEFT Algorithm. The simulation study in Section 5 shows that our algorithms considerably outperform previous approaches in terms of performance

(schedule length ratio and speedup) and cost (time complexity). The remainder of this paper is organized as follows. The next section gives the background of the scheduling problem, including some definitions and parameters used in the algorithms. In Section 3 we present the proposed scheduling algorithms for heterogeneous domains. Section 4 contains a brief review of the related scheduling algorithms used in our comparison, and in Section 5 the performance of our algorithms is compared with that of related work, using task graphs of some real applications and randomly generated task graphs. Section 6 includes the conclusion and future work.

2. Problem Definition A parallel/distributed application is decomposed into multiple tasks with data dependencies among them. In our model an application is represented by a directed acyclic graph (DAG) that consists of a tuple G = (V, E, P, W, data, rate), where V is the set of v nodes/tasks, E is the set of e edges between the nodes, and P is the set of processors available in the system. (In this paper the terms task and node are used interchangeably.) Each edge (i, j) ∈ E represents the task-dependency constraint such that task n_i should complete its execution before task n_j can be started. W is a v × p computation cost matrix, where v is the number of tasks and p is the number of processors in the system. Each w_{i,j} gives the estimated execution time to complete task n_i on processor p_j. The average execution costs of tasks are used in the task priority equations. The average execution cost of a node n_i is defined as w̄_i = Σ_j w_{i,j} / p. data is a v × v matrix of data transfer sizes (in bytes) between the tasks. The data transfer rates (in bytes/second) between processors are stored in a p × p matrix, rate. The communication cost of the edge (i, j), which is for data transfer from task n_i (scheduled on p_m) to task n_j (scheduled on p_n), is defined by c_{i,j} = data(n_i, n_j) / rate(p_m, p_n). When both n_i and n_j are scheduled on the same processor (p_m = p_n), c_{i,j} becomes zero, since the intra-processor communication cost is negligible compared with the inter-processor communication cost. The average communication cost of an edge is defined by c̄_{i,j} = data(n_i, n_j) / ratē, where ratē is the average transfer rate between the processors in the domain.
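The cost model above can be sketched in a few lines of Python. This is a minimal illustration with our own function and variable names, not code from the paper:

```python
def avg_execution_cost(W, i):
    """w_bar_i = sum_j W[i][j] / p: mean execution time of task i over all processors."""
    return sum(W[i]) / len(W[i])

def comm_cost(data, rate, i, j, pm, pn):
    """c_{i,j} = data(i, j) / rate(pm, pn); zero when both tasks share a processor."""
    if pm == pn:
        return 0.0  # intra-processor communication is treated as negligible
    return data[i][j] / rate[pm][pn]

# 3 tasks x 2 processors: W[i][j] is the time of task i on processor j
W = [[4.0, 6.0], [3.0, 5.0], [8.0, 2.0]]
data = [[0, 10, 0], [0, 0, 20], [0, 0, 0]]   # bytes transferred per edge
rate = [[0, 5.0], [5.0, 0]]                  # bytes/second between processors

print(avg_execution_cost(W, 0))              # 5.0
print(comm_cost(data, rate, 0, 1, 0, 1))     # 2.0
print(comm_cost(data, rate, 0, 1, 0, 0))     # 0.0
```

Note that heterogeneity shows up directly in W: task 2 costs 8.0 on processor 0 but only 2.0 on processor 1, which is exactly the affinity the scheduling heuristics exploit.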

The EST(n_i, p_j) and EFT(n_i, p_j) are the earliest execution start time and the earliest execution finish time of node n_i on processor p_j, respectively. They are defined by

EST(n_i, p_j) = max{ T_Available[j], max_{n_m ∈ pred(n_i)} (EFT(n_m, p_k) + c_{m,i}) }   (1)

EFT(n_i, p_j) = w_{i,j} + EST(n_i, p_j)   (2)

where pred(n_i) is the set of immediate predecessors of task n_i, and T_Available[j] is the earliest time at which processor p_j is available for task execution. The inner max block in the EST equation returns the ready time, i.e., the time when all data needed by n_i has arrived at the host p_j. The assignment decisions are stored in a two-dimensional matrix, list. The jth row of the list matrix holds the sequence of nodes (in order of execution start time) already scheduled on p_j. The objective function of the scheduling problem is to determine an assignment of tasks of a given application to processors such that the schedule length (or makespan) is minimized while satisfying all precedence constraints. After all nodes in the DAG are scheduled, the schedule length is the earliest finish time of the exit node n_e, EFT(n_e, p_j), where exit node n_e is scheduled on processor p_j. (If a graph has multiple exit nodes, they are connected with zero-weight edges to a pseudo exit node that has zero computation cost. Similarly, a pseudo start node is added to graphs with multiple start nodes.) The critical path (CP) of a DAG is the longest path from the entry node to the exit node in the graph. The length of this path, |CP|, is the sum of the computation costs of the nodes and the inter-node communication costs along the path. The |CP| value of a DAG is a lower bound on the schedule length. In our algorithms we rank tasks upward and downward to set the scheduling priorities. The upward rank of a task n_i is recursively defined by

rank_u(n_i) = w̄_i + max_{n_j ∈ succ(n_i)} ( c̄_{i,j} + rank_u(n_j) )   (3)


where succ(n_i) is the set of immediate successors of task n_i. Since it is computed recursively by traversing the task graph upward, starting from the exit node, it is referred to as an upward rank. Basically, rank_u(n_i) is the length of the critical path (i.e., the longest path)

from n_i to the exit node, including the computation cost of the node itself. In some previous algorithms the ranks of the nodes are computed using computation costs only, which is referred to as the static upward rank. Similarly, the downward rank of a task n_i is recursively defined by

rank_d(n_i) = max_{n_j ∈ pred(n_i)} ( rank_d(n_j) + w̄_j + c̄_{j,i} )   (4)
The downward ranks are computed recursively by traversing the task graph downward, starting from the start node. Basically, rank_d(n_i) is the longest distance from the start node to node n_i, excluding the computation cost of the node itself. In some previous algorithms the level attribute is used to set the priorities of the tasks. The level of a task is the maximum number of edges along any path from the start node to the task; the start node has a level of zero.
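Both rank attributes can be computed with straightforward memoized recursion over the DAG. The following sketch is our own illustrative code, using the average costs w̄ and c̄ defined in Section 2:

```python
def upward_rank(i, succ, w_bar, c_bar, memo=None):
    """rank_u(n_i) = w_bar_i + max over successors j of (c_bar_{i,j} + rank_u(n_j))."""
    if memo is None:
        memo = {}
    if i not in memo:
        best = max((c_bar[(i, j)] + upward_rank(j, succ, w_bar, c_bar, memo)
                    for j in succ.get(i, [])), default=0.0)  # exit node: 0.0
        memo[i] = w_bar[i] + best
    return memo[i]

def downward_rank(i, pred, w_bar, c_bar, memo=None):
    """rank_d(n_i) = max over predecessors j of (rank_d(n_j) + w_bar_j + c_bar_{j,i})."""
    if memo is None:
        memo = {}
    if i not in memo:
        memo[i] = max((downward_rank(j, pred, w_bar, c_bar, memo) + w_bar[j] + c_bar[(j, i)]
                       for j in pred.get(i, [])), default=0.0)  # start node: 0.0
        return memo[i]
    return memo[i]

# Diamond-shaped DAG: 0 -> {1, 2} -> 3
succ = {0: [1, 2], 1: [3], 2: [3]}
pred = {1: [0], 2: [0], 3: [1, 2]}
w_bar = {0: 2.0, 1: 3.0, 2: 4.0, 3: 1.0}
c_bar = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 3.0, (2, 3): 1.0}

print(upward_rank(0, succ, w_bar, c_bar))    # 10.0, which is also |CP|
print(downward_rank(3, pred, w_bar, c_bar))  # 9.0
```

On this example every node lies on a critical path, so rank_d(n_i) + rank_u(n_i) = |CP| = 10.0 for each node, which is exactly the test CPOP uses to identify critical path nodes.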

3. Proposed Algorithms In this section we present our scheduling algorithms: the Heterogeneous Earliest-Finish-Time (HEFT) Algorithm and the Critical-Path-on-a-Processor (CPOP) Algorithm. 3.1. The HEFT Algorithm The Heterogeneous Earliest-Finish-Time (HEFT) Algorithm, as shown in Figure 1, is a DAG scheduling algorithm that supports a bounded number of heterogeneous processing elements (PEs). To set the priority of a task n_i, the HEFT algorithm uses the upward rank value of the task, rank_u (Equation 3), which is the length of the longest path from n_i to the exit node. The rank_u calculation is based on mean computation and communication costs. The task list is generated by sorting the nodes in decreasing order of their rank_u values. In our implementation ties are broken randomly; i.e., if two nodes to be scheduled have equal rank_u values, one of them is selected randomly. The HEFT algorithm uses the earliest finish time value, EFT, to select the processor for each task. In non-insertion-based scheduling algorithms, the earliest available time of a processor p_j, the T_Available[j] term in Equation 1, is the execution completion time of the last assigned node on p_j. The HEFT Algorithm,

which is insertion-based, considers possible insertion of each task into the earliest idle time slot between two already-scheduled tasks on the given processor. Formally, node n_i can be scheduled on processor p_j in the slot after the kth assigned node when the following inequality holds for the minimum value of k:

EST(list_{j,k+1}, p_j) − EFT(list_{j,k}, p_j) ≥ w_{i,j}

where list_{j,k} is the kth node (in start-time order) already assigned to processor p_j. Then T_Available[j] is equal to EFT(list_{j,k}, p_j). The time complexity of the HEFT Algorithm is O(v² × p).
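The insertion-based slot search can be sketched as follows (illustrative code, ours): given the start-time-sorted (start, finish) intervals already on p_j and the task's ready time, find the earliest start that fits a task of length w_{i,j}:

```python
def insertion_based_est(ready_time, schedule, w_ij):
    """Earliest start time for a task of length w_ij on a processor whose current
    schedule is a start-time-sorted list of (start, finish) pairs. Each idle gap
    between consecutive tasks is tried before appending after the last task."""
    prev_finish = 0.0
    for start, finish in schedule:
        candidate = max(ready_time, prev_finish)
        if start - candidate >= w_ij:    # the idle gap can hold the task
            return candidate
        prev_finish = finish
    return max(ready_time, prev_finish)  # no gap fits: run after the last task

busy = [(0.0, 3.0), (6.0, 9.0)]
print(insertion_based_est(0.0, busy, 2.0))  # 3.0  (fits in the [3, 6) gap)
print(insertion_based_est(5.0, busy, 2.0))  # 9.0  (gap too small once ready at 5)
```

The per-processor scan is linear in the number of tasks already scheduled there, which is what keeps the overall HEFT complexity at O(v² × p).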

3.2. The CPOP Algorithm The Critical-Path-on-a-Processor (CPOP) Algorithm, shown in Figure 2, is another heuristic for scheduling tasks on a bounded number of heterogeneous processors. The rank_u and rank_d attributes of nodes are computed using mean computation and communication costs. The critical path nodes (CPN) are determined at Steps 5-6. The critical-path-processor (CPP) is the one that minimizes the length of the critical path (Step 7). The CPOP Algorithm uses rank_d(n_i) + rank_u(n_i) to assign the node priority. The processor selection phase has two options: if the current node is on the critical path, it is assigned to the critical-path-processor (CPP); otherwise, it is assigned to the processor that minimizes the execution completion time. The latter option is insertion-based (as in the HEFT Algorithm). At each iteration we maintain a priority queue containing all free nodes and select the node that maximizes rank_d(n_i) + rank_u(n_i). A binary heap was used to implement the priority queue, which has time complexity O(log v) for insertion and deletion of a node and O(1) for retrieving the node with the highest priority (the root node of the heap). The time complexity of the algorithm is O(v² × p) for v nodes and p processors.
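The free-node priority queue can be realized with Python's binary-heap module `heapq`. Since `heapq` is a min-heap, priorities are negated so the node with the largest rank_d + rank_u pops first; this is an illustrative sketch, not the authors' code:

```python
import heapq

class MaxPriorityQueue:
    """Binary heap keyed on rank_d + rank_u; O(log v) push/pop, O(1) peek."""
    def __init__(self):
        self._heap = []

    def push(self, node, priority):
        # negate the priority: heapq orders smallest-first
        heapq.heappush(self._heap, (-priority, node))

    def pop_highest(self):
        return heapq.heappop(self._heap)[1]

q = MaxPriorityQueue()
q.push("n1", 5.0)  # priority = rank_d + rank_u of the node
q.push("n2", 9.0)
q.push("n3", 7.0)
print(q.pop_highest())  # n2
print(q.pop_highest())  # n3
```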

4. Related Work Only a few of the proposed task scheduling heuristics support variable computation and communication costs for heterogeneous domains: the Dynamic Level Scheduling (DLS) Algorithm [4], the Levelized-Min Time (LMT) Algorithm [9], and the Mapping Heuristic (MH) Algorithm [5]. Although there are genetic-algorithm-based research efforts [6, 10, 14], most of them are slow and usually do not perform as well as the list-scheduling algorithms.

1. Compute rank_u for all nodes by traversing the graph upward, starting from the exit node.
2. Sort the nodes in a list by nonincreasing order of rank_u values.
3. while there are unscheduled nodes in the list do begin
4.   Select the first task n_i in the list and remove it.
5.   Assign the task n_i to the processor p_j that minimizes the EFT value of n_i.
6. end

Figure 1. The HEFT Algorithm
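The loop of Figure 1 can be condensed into a short sketch. This is our own simplified rendering: for brevity it appends each task after the last one on a processor rather than performing the insertion-based slot search, and all names are illustrative:

```python
def heft_schedule(order, W, preds, comm):
    """order: tasks sorted by nonincreasing rank_u; W[n][p]: cost of task n on
    processor p; preds[n]: predecessor list; comm[(m, n)]: inter-processor
    communication cost of edge (m, n). Returns finish times and placements."""
    num_procs = len(W[0])
    proc_free = [0.0] * num_procs   # earliest free time of each processor
    aft, where = {}, {}             # actual finish time, chosen processor
    for n in order:
        best = None
        for pj in range(num_procs):
            # ready time: all predecessor data must have arrived at pj
            ready = max((aft[m] + (0.0 if where[m] == pj else comm[(m, n)])
                         for m in preds.get(n, [])), default=0.0)
            eft = max(proc_free[pj], ready) + W[n][pj]
            if best is None or eft < best[0]:
                best = (eft, pj)
        aft[n], where[n] = best
        proc_free[best[1]] = best[0]
    return aft, where

# Two-task chain 0 -> 1 on two processors
W = [[2.0, 3.0], [4.0, 1.0]]
preds = {1: [0]}
comm = {(0, 1): 5.0}
aft, where = heft_schedule([0, 1], W, preds, comm)
print(aft[1])  # 6.0
```

The small example shows the trade-off HEFT weighs at each step: task 1 is faster on processor 1 (1.0 vs. 4.0), but moving it there would cost 5.0 in communication, so finishing at 6.0 on processor 0 beats finishing at 8.0 on processor 1.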

1. Compute rank_u for all nodes by traversing the graph upward, starting from the exit node.
2. Compute rank_d for all nodes by traversing the graph downward, starting from the start node.
3. |CP| = rank_u(n_s), where n_s is the start node.
4. For each node n_i do
5.   If (rank_d(n_i) + rank_u(n_i) = |CP|) then
6.     n_i is a critical path node (CPN).
7. Select the critical-path-processor that minimizes Σ_{n_i ∈ CPN} w_{i,j}.

Figure 2. The CPOP Algorithm

[Figure 7. (a) Average SLRs of Scheduling Algorithms at Various Graph Sizes for the Gaussian Elimination Graph. (b) The Efficiency Comparison of the Algorithms.]

Fast Fourier Transformation The recursive, one-dimensional FFT Algorithm [15, 16] and its task graph are given in Figure 9. A is an array of size n, which holds the coefficients of the polynomial; array Y is the output of the algorithm. The algorithm consists of two parts: recursive calls (lines 3-4) and the butterfly operation (lines 6-7). The task graph in Figure 9(b) can be divided into two parts: the nodes above the dashed line are the recursive call nodes (RCNs), and the ones below the line are the butterfly operation nodes (BONs). For an input vector of size n, there are 2 × n − 1 RCNs and


n × log₂ n BONs. (We assume that n = 2^m for some integer m.) Each path from the start node to any of the exit nodes in an FFT task graph is a critical path, because the computation costs of nodes in any level are equal, and the communication costs of edges between two consecutive levels are equal.
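The node counts follow directly: the recursion tree over an input of size n = 2^m contributes 2n − 1 recursive-call nodes (a full binary tree with n leaves), and the combining phase contributes n butterfly-operation nodes per each of the log₂ n levels. A quick check (illustrative code):

```python
import math

def fft_task_graph_sizes(n):
    """Return (#RCNs, #BONs) for an FFT task graph on an input of size n = 2^m."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    return 2 * n - 1, n * int(math.log2(n))

print(fft_task_graph_sizes(8))  # (15, 24)
```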

FFT(A, ω)
1. n = length(A)
2. if (n = 1) return (A)
3. Y^(0) = FFT((A[0], A[2], ..., A[n−2]), ω²)
4. Y^(1) = FFT((A[1], A[3], ..., A[n−1]), ω²)