Journal article, ACM Transactions on Architecture and Code Optimization, 2014

Topology-Aware and Dependence-Aware Scheduling and Memory Allocation for Task-Parallel Languages

Abstract

We present a joint scheduling and memory allocation algorithm for the efficient execution of task-parallel programs on non-uniform memory architecture (NUMA) systems. Task and data placement decisions are based on a static description of the memory hierarchy and on runtime information about inter-task communication. Existing locality-aware scheduling strategies for fine-grained tasks have strong limitations: they are specific to some class of machines or applications, they do not handle task dependences, they require manual program annotations, or they rely on fragile profiling schemes. By contrast, our solution makes no assumptions about the structure of programs or the layout of data in memory. Experimental results, based on the OpenStream language, show that the locality of main-memory accesses in scientific applications can be increased significantly on a 64-core machine, resulting in a speedup of up to 1.63× compared to a state-of-the-art work-stealing scheduler.
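To make the idea of dependence-aware placement concrete, the sketch below shows one simple heuristic of the kind such a runtime might use: run a task on the NUMA node that already holds the largest share of its input data, and allocate its outputs there as well. This is an illustrative assumption, not the paper's algorithm; the node count, the `bytes_on_node` array, and the `preferred_node` helper are hypothetical names introduced here for exposition.

```c
#include <stdio.h>
#include <stddef.h>

#define NUM_NODES 4  /* hypothetical machine with 4 NUMA nodes */

/* Pick the NUMA node that already holds the largest share of a task's
 * input data, so most of the task's reads are served from local memory.
 * In a real runtime, bytes_on_node[] would be derived from the sizes and
 * placement of the task's input dependences. */
static int preferred_node(const size_t bytes_on_node[NUM_NODES])
{
    int best = 0;
    for (int n = 1; n < NUM_NODES; ++n)
        if (bytes_on_node[n] > bytes_on_node[best])
            best = n;
    return best;
}

int main(void)
{
    /* Example: 1 MiB of input data resides on node 2, less elsewhere. */
    size_t bytes_on_node[NUM_NODES] = { 4096, 65536, 1048576, 0 };
    int node = preferred_node(bytes_on_node);

    /* A joint policy would both run the task on a core of this node and
     * allocate its output buffers in that node's local memory. */
    printf("schedule task on NUMA node %d\n", node);
    return 0;
}
```

The point of coupling the two decisions is that the task's outputs become the inputs of its dependent tasks, so allocating them on the executing node keeps locality along the dependence chain.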

Dates and versions

hal-01136491 , version 1 (27-03-2015)

Identifiers

Cite

Andi Drebes, Karine Heydemann, Nathalie Drach, Antoniu Pop, Albert Cohen. Topology-Aware and Dependence-Aware Scheduling and Memory Allocation for Task-Parallel Languages. ACM Transactions on Architecture and Code Optimization, 2014, 11 (3), pp.30. ⟨10.1145/2641764⟩. ⟨hal-01136491⟩