TY - GEN
T1 - Elastic tasks
T2 - 21st International Conference on Parallel and Distributed Computing, Euro-Par 2015
AU - Sbîrlea, Alina
AU - Agrawal, Kunal
AU - Sarkar, Vivek
N1 - Publisher Copyright:
© Springer-Verlag Berlin Heidelberg 2015.
PY - 2015
Y1 - 2015
N2 - In this paper, we introduce elastic tasks, a new high-level parallel programming primitive that unifies task parallelism and SPMD parallelism in a common adaptive scheduling framework. Elastic tasks are internally parallel tasks that can run on a single worker or expand to take over multiple workers. An elastic task can be an ordinary task or an SPMD region that must be executed by one or more workers simultaneously, in a tightly coupled manner. This paper demonstrates the following benefits of elastic tasks: (1) they offer theoretical guarantees: in a work-stealing environment, computations complete in expected time O(W/P + S + E lg P), where E = number of elastic tasks, W = work, S = span, and P = number of cores; (2) they offer performance benefits in practice by co-scheduling tightly coupled parallel/SPMD subcomputations within a single elastic task; and (3) they can adapt at runtime to the state of the application and the workload of the machine. We also introduce ElastiJ, a runtime system that includes work-sharing and work-stealing scheduling algorithms to support computations with regular and elastic tasks. This scheduler dynamically decides the allocation for each elastic task in a decentralized manner, and provides close to asymptotically optimal running times for computations with elastic tasks.
AB - In this paper, we introduce elastic tasks, a new high-level parallel programming primitive that unifies task parallelism and SPMD parallelism in a common adaptive scheduling framework. Elastic tasks are internally parallel tasks that can run on a single worker or expand to take over multiple workers. An elastic task can be an ordinary task or an SPMD region that must be executed by one or more workers simultaneously, in a tightly coupled manner. This paper demonstrates the following benefits of elastic tasks: (1) they offer theoretical guarantees: in a work-stealing environment, computations complete in expected time O(W/P + S + E lg P), where E = number of elastic tasks, W = work, S = span, and P = number of cores; (2) they offer performance benefits in practice by co-scheduling tightly coupled parallel/SPMD subcomputations within a single elastic task; and (3) they can adapt at runtime to the state of the application and the workload of the machine. We also introduce ElastiJ, a runtime system that includes work-sharing and work-stealing scheduling algorithms to support computations with regular and elastic tasks. This scheduler dynamically decides the allocation for each elastic task in a decentralized manner, and provides close to asymptotically optimal running times for computations with elastic tasks.
UR - https://www.scopus.com/pages/publications/84944089026
U2 - 10.1007/978-3-662-48096-0_38
DO - 10.1007/978-3-662-48096-0_38
M3 - Conference contribution
AN - SCOPUS:84944089026
SN - 9783662480953
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 491
EP - 503
BT - Euro-Par 2015
A2 - Träff, Jesper Larsson
A2 - Hunold, Sascha
A2 - Versaci, Francesco
PB - Springer-Verlag
Y2 - 24 August 2015 through 28 August 2015
ER -