
Abstract

In this work, a two-step approach is adopted for scheduling tasks with synchronous inter-task communication. To that end, an efficient algorithm, called GLB-Synch, is introduced for mapping clusters and ordering tasks on processors. The algorithm uses information obtained during the clustering step to select a cluster for mapping onto the least loaded processor. A performance study of the GLB-Synch algorithm has been conducted by simulation. The multi-step scheduling setup is based on a previously developed algorithm for clustering DAGs with synchronous communication, called NLC-SynchCom, and uses synthesized DAGs. We show by analysis and experimentation that the GLB-Synch algorithm retains the same low complexity cost as the clustering step. The performance results highlight the adverse impact of synchronization on speedup scalability.
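
To make the mapping step concrete, the sketch below illustrates the greedy least-loaded-processor rule described in the abstract. It is a minimal illustration in Python, not the published GLB-Synch algorithm: the cluster names, the cluster_loads input (each cluster's total computation cost, assumed to be carried over from the clustering step), and the heaviest-cluster-first placement order are assumptions made for the example.

    import heapq

    def map_clusters_least_loaded(cluster_loads, num_procs):
        """Greedy sketch: assign each cluster to the currently least
        loaded processor. Illustrative only; the actual GLB-Synch
        algorithm also orders tasks on each processor, which is
        omitted here."""
        # Min-heap of (accumulated load, processor id); the root is
        # always the least loaded processor.
        heap = [(0.0, p) for p in range(num_procs)]
        heapq.heapify(heap)
        mapping = {}
        # Place heavy clusters first so lighter ones can even out the load.
        for cluster in sorted(cluster_loads, key=cluster_loads.get, reverse=True):
            load, proc = heapq.heappop(heap)   # least loaded processor
            mapping[cluster] = proc
            heapq.heappush(heap, (load + cluster_loads[cluster], proc))
        return mapping

    # Example: five clusters with workloads from the clustering step,
    # mapped onto two processors.
    print(map_clusters_least_loaded(
        {"C0": 8.0, "C1": 5.0, "C2": 4.0, "C3": 2.0, "C4": 1.0}, 2))

With a heap, each placement costs O(log P), so mapping M clusters takes O((M + P) log P) time; a cheap selection rule of this kind is consistent with the abstract's point that the mapping step retains the low complexity of the clustering step.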

 

Keywords

Multi-step scheduling; Clustering; Mapping and ordering; Synchronous message passing; Distributed-memory systems

Article Details

How to Cite
Arafeh, B. (2005). A Multi-Step Approach for Scheduling Tasks with Synchronization on Clusters of Computers. The Journal of Engineering Research [TJER], 2(1), 77–89. https://doi.org/10.24200/tjer.vol2iss1pp77-89

References

  1. Arafeh, B., 2003, "Non-linear Clustering Algorithm for Scheduling Parallel Programs with Synchronous Communication on NOWs," Int. J. of Computers and Their Applications, Vol. 10(2), pp. 103-114.
  2. Foster, I. and Kesselman, C., 1997, "Globus: A Metacomputing Infrastructure Toolkit," The Int. J. of Supercomputing Applications and High Performance Computing, Vol. 11(2), pp. 115-128.
  3. Gerasoulis, A. and Yang, T., 1993, "On the Granularity and Clustering of Directed Acyclic Task Graphs," IEEE TPDS, Vol. 4(6), pp. 686-701.
  4. Hwang, K., 1993, "Advanced Computer Architecture: Parallelism, Scalability, Programmability," McGraw-Hill, New York.
  5. Kadamuddi, D. and Tsai, J., 2000, "Clustering Algorithm for Parallelizing Software Systems in Multiprocessors," IEEE Transactions on Software Engineering, Vol. 26(4), pp. 340-361.
  6. Kasahara Lab., [http://www.kasahara.elec.waseda.ac.jp/] (viewed in July 2004), Japan.
  7. Kim, S. J. and Browne, J. C., 1988, "A General Approach to Mapping of Parallel Computation upon Multiprocessor Architectures," Proceedings of the Int. Conf. on Parallel Processing, Vol. II, pp. 1-8.
  8. Kwok, Y. K. and Ahmad, I., 1999, "Static Scheduling Algorithms for Allocating Directed Task Graphs to Multiprocessors," ACM Computing Surveys, Vol. 31(4), pp. 406-471.
  9. Kwok, Y. K. and Ahmad, I., 1996, "Dynamic Critical-Path Scheduling: An Effective Technique for Allocating Task Graphs onto Multiprocessors," IEEE TPDS, Vol. 7(5), pp. 506-521.
  10. Lee, H., Kim, J., Hong, S. and Lee, S., 2003, "Task Scheduling Using a Block Dependency DAG for Block-Oriented Sparse Cholesky Factorization," Parallel Computing, Vol. 29(1), pp. 135-159.
  11. Liou, J. C. and Palis, M. A., 1997, "A Comparison of General Approaches to Multiprocessor Scheduling," Proceedings of the 11th Int. Parallel Processing Symposium, pp. 152-156.
  12. Papadimitriou, C. and Yannakakis, M., 1990, "Towards an Architecture-Independent Analysis of Parallel Algorithms," SIAM J. on Computing, Vol. 19(2), pp. 322-328.
  13. Radulescu, A., 2001, "Compile-Time Scheduling for Distributed-Memory Systems," Ph.D. Thesis, Faculty of Information Technology and Systems, Delft University of Technology, Delft, The Netherlands.
  14. Sarkar, V., 1989, "Partitioning and Scheduling Programs for Execution on Multiprocessors," Cambridge, MA: MIT Press.
  15. Shirazi, B., Wang, M. and Pathak, G., 1990, "Analysis and Evaluation of Heuristic Methods for Static Scheduling," J. of Parallel and Distributed Computing, Vol. 10, pp. 222-232.
  16. Wu, M. Y. and Gajski, D. D., 1990, "Hypertool: A Programming Aid for Message-Passing Systems," IEEE TPDS, Vol. 1(3), pp. 330-343.
  17. Yang, T. and Gerasoulis, A., 1994, "DSC: Scheduling Parallel Tasks on an Unbounded Number of Processors," IEEE TPDS, Vol. 5(9), pp. 951-967.
  18. Yang, T. and Gerasoulis, A., 1993, "List Scheduling with and without Communication Delays," Parallel Computing, Vol. 19, pp. 1321-1344.