
JSSPP 2016 : 20th Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP)


Conference Series : Job Scheduling Strategies for Parallel Processing
When May 27, 2016
Where Chicago, USA
Submission Deadline Feb 21, 2016
Notification Due Mar 13, 2016
Final Version Due Apr 12, 2016
Categories    parallel   scheduling   high performance

Call For Papers

20th Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP)

In Conjunction with IPDPS 2016
Chicago IL
27 May 2016

The JSSPP workshop addresses all scheduling aspects of parallel processing.

Large parallel systems have been in production for more than 20 years, creating the need for scheduling on such systems. This workshop was created in 1995 to provide a forum for the research and engineering community working in the area. Initially, parallel systems were very static: machines were built in fixed configurations, which would be replaced wholesale every few years. Much of the workload consisted of parallel scientific jobs that were themselves static, running on a fixed number of nodes. Systems were primarily managed via batch queues, and the user experience was far from interactive; jobs could wait in queues for days or even weeks.

A little over 10 years ago, the emergence of large-scale, interactive web applications began to drive the development of a new class of systems and schedulers. These systems would run “services” that, unlike scientific jobs, would essentially never terminate, leading to systems and schedulers with vastly different properties. Moreover, these applications created an enormous demand for computing resources, resulting in a commercial market of competing providers. At the same time, increasing demands for power and interactivity have driven scientific platforms in a similar direction, blurring the lines between these platforms.

Nowadays, parallel processing is much more dynamic and connected. Many workloads are interactive and use varying amounts of resources over time. Complex parallel infrastructures can now be built on the fly, using resources from different sources, offered at different prices and quality-of-service levels. Capacity planning has become more proactive, with resources acquired continuously to stay ahead of demand. The interaction model between job and resource manager is shifting to one of negotiation, in which they agree on resources, price, and quality of service. These are just a few examples of the open issues facing our field.

JSSPP solicits papers that address any of the challenges in parallel scheduling, including:

Design and evaluation of new scheduling approaches.
Performance evaluation of scheduling approaches, including methodology, benchmarks, and metrics.
Workloads, including characterization, classification, and modeling.
Consideration of additional constraints in scheduling systems, like job priorities, price, accounting, load estimation, and quality of service guarantees.
Impact of scheduling strategies on application performance, user friendliness, cost efficiency, and energy efficiency.
Scaling and composition of very large scheduling systems.
Cloud provider issues: capacity planning, service level assurance, reliability.
Interaction between schedulers at different levels, from the processor level up to whole single- or even multi-owner systems.
Experience reports from production systems.
Experience reports from large scale compute campaigns.
From its very beginning, JSSPP has strived to balance practice and theory in its program. This combination provides a rich environment for technical debate about scheduling approaches among both academic researchers and industry participants. JSSPP is a high-visibility workshop that has repeatedly ranked in the top 10% of CiteSeer's venue impact list.

Submission Dates and Guidelines

DEADLINE: 21 February 2016
NOTIFICATION: 13 March 2016
Papers should be no longer than 20 single-spaced pages, 10pt font, including figures and references. All papers in scope will be reviewed by at least three members of the program committee. All submissions must follow the LNCS format, see the instructions at Springer's web site.

Files must be submitted electronically in PDF format and must be formatted for 8.5x11 inch paper. Papers must be submitted via EDAS.

Workshop organizers

Walfredo Cirne, Google
Narayan Desai, Ericsson
Program Committee

Henri Casanova, University of Hawaii at Manoa
Julita Corbalan, Technical University of Catalonia
Dick Epema, Delft University of Technology
Dror Feitelson, The Hebrew University
Liana Fong, IBM T. J. Watson Research Center
Eitan Frachtenberg, Facebook
Alfredo Goldman, University of Sao Paulo
Allan Gottlieb, New York University
Alexandru Iosup, Delft University of Technology
Srikanth Kandula, Microsoft
Rajkumar Kettimuthu, Argonne National Laboratory
Dalibor Klusáček, Masaryk University
Madhukar Korupolu, Google
Zhiling Lan, Illinois Institute of Technology
Bill Nitzberg, Altair Engineering
P-O Östberg, Umeå University
Larry Rudolph, Two Sigma Investments
Uwe Schwiegelshohn, Technical University Dortmund
Leonel Sousa, Universidade Técnica de Lisboa
Mark Squillante, IBM T. J. Watson Research Center
Wei Tang, Google NYC
Ramin Yahyapour, GWDG - University of Göttingen
