JSSPP: Job Scheduling Strategies for Parallel Processing


Event | When | Where | Deadline
JSSPP 2017 (Job Scheduling Strategies for Parallel Processing) | Jun 2, 2017 | Orlando, FL, USA | Feb 12, 2017
JSSPP 2016 (20th Workshop on Job Scheduling Strategies for Parallel Processing) | May 27, 2016 | Chicago, USA | Feb 21, 2016
JSSPP 2015 (Job Scheduling Strategies for Parallel Processing) | May 29, 2015 | Hyderabad, India | Jan 25, 2015
JSSPP 2014 (Job Scheduling Strategies for Parallel Processors) | May 23, 2014 | Phoenix, AZ, USA | Jan 19, 2014
JSSPP 2013 (Job Scheduling Strategies for Parallel Processors 2013) | May 24, 2013 | Boston, MA, US | Feb 20, 2013
JSSPP 2012 | May 25, 2012 | Shanghai, China | Feb 17, 2012
JSSPP 2009 (14th Workshop on Job Scheduling Strategies for Parallel Processing) | May 29, 2009 | Rome, Italy | Feb 13, 2009

Present CFP: 2017

21st Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP 2017)
In Conjunction with IPDPS 2017,
Orlando, FL, June 2, 2017

The JSSPP workshop addresses all scheduling aspects of parallel processing, including cloud, grid/HPC, and "mixed/hybrid" or otherwise specialized systems.

Large parallel systems have been in production for more than 20 years, creating the need for scheduling on such systems. Since 1995, JSSPP has provided a forum for the research and engineering community working in this area. Initially, parallel systems were very static: machines were built in fixed configurations and replaced wholesale every few years. Much of the workload was similarly static, consisting of parallel scientific jobs running on a fixed number of nodes. Systems were managed primarily via batch queues, and the user experience was far from interactive; jobs could wait in queues for days or even weeks.

A little over 10 years ago, the emergence of large-scale, interactive web applications, together with massive virtualization, began to drive the development of a new class of (cloud) systems and schedulers. These systems use virtual machines and/or containers to run "services" that, unlike scientific jobs, essentially never terminate. This created systems and schedulers with vastly different properties. Moreover, the enormous demand for computing resources gave rise to a commercial market of competing providers. At the same time, increasing demands for power and interactivity have driven scientific platforms in a similar direction, blurring the lines between these platforms.

Nowadays, parallel processing is much more dynamic and connected. Many workloads are interactive and use variable amounts of resources over time. Complex parallel infrastructures can now be built on the fly, using resources from different sources, offered at different prices and qualities of service. Capacity planning has become more proactive: resources are acquired continuously, with the goal of staying ahead of demand. The interaction model between job and resource manager is shifting toward negotiation, in which the two parties agree on resources, price, and quality of service. In addition, "hybrid" systems are common, in which a (virtualized) infrastructure hosts a mix of competing workloads/applications, each with its own resource manager, that must somehow be co-scheduled. These are just a few examples of the open issues facing our field.

From its very beginning, JSSPP has strived to balance practice and theory in its program. This combination provides a rich environment for technical debate about scheduling approaches among both academic researchers and participants from industry. JSSPP is a high-visibility workshop that has repeatedly ranked in the top 10% of CiteSeer's venue impact list.

Building on this tradition, starting this year JSSPP also welcomes descriptions of open problems in large-scale scheduling. Lack of real-world data often substantially hampers the research community's ability to engage with scheduling problems in a way that has real-world impact. Our goal in this new venue is to build a bridge between the production and research worlds, facilitating direct collaboration and impact.

Call for Papers
JSSPP solicits papers that address any of the challenges in parallel scheduling, including:

* Design and evaluation of new scheduling approaches.
* Performance evaluation of scheduling approaches, including methodology, benchmarks, and metrics.
* Workloads, including characterization, classification, and modeling.
* Consideration of additional constraints in scheduling systems, like job priorities, price, accounting, load estimation, and quality of service guarantees.
* Impact of scheduling strategies on system utilization, application performance, user friendliness, cost efficiency, and energy efficiency.
* Scaling and composition of very large scheduling systems.
* Cloud provider issues: capacity planning, service level assurance, reliability.
* Interaction between schedulers at different levels, from the processor level up to whole single- or even multi-owner systems.
* Interaction between applications/workloads, e.g., efficient batch job and container/VM co-scheduling within a single system, etc.
* Experience reports from production systems or large scale compute campaigns.

Call for Problems
JSSPP also welcomes descriptions of open problems in large scale scheduling. Effective scheduling approaches are predicated on three things:

* A concise understanding of scheduling goals, and how they relate to one another.
* Details of the workload (job arrival times, sizes, shareability, deadlines, etc.)
* Details of the system being managed (size, break/fix lifecycle, allocation constraints)

Submissions must include a concise description of the key metrics of the system and how they are calculated, as well as anonymized publication of the system's workload and production schedule. Detailed descriptions of operational considerations (maintenance, failure patterns, fault domains) are also important. Ideally, anonymized operational logs would also be published, though we understand this may be more difficult. Scripts that evaluate results and compute the metrics relevant for the system are highly encouraged.
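As a purely illustrative sketch (not a prescribed submission format), the kind of artifact we have in mind pairs a minimal anonymized workload record with a script that computes the reported metrics. The field names, the FCFS policy, and the 10-second bound in the slowdown metric below are our assumptions for the example:

```python
import heapq
from dataclasses import dataclass

@dataclass
class Job:
    submit: float   # arrival time (seconds since trace start)
    runtime: float  # execution time once started
    procs: int      # processors requested

def fcfs_metrics(jobs, total_procs):
    """Simulate strict FCFS on a homogeneous machine and return
    (mean wait time, mean bounded slowdown with a 10 s bound)."""
    jobs = sorted(jobs, key=lambda j: j.submit)
    free = total_procs
    running = []  # min-heap of (finish_time, procs)
    clock = 0.0
    waits, slowdowns = [], []
    for job in jobs:
        clock = max(clock, job.submit)
        # release jobs that have already finished by now
        while running and running[0][0] <= clock:
            _, p = heapq.heappop(running)
            free += p
        # if still short of processors, wait for more jobs to finish
        while free < job.procs:
            finish, p = heapq.heappop(running)
            clock = finish
            free += p
        start = clock
        free -= job.procs
        heapq.heappush(running, (start + job.runtime, job.procs))
        wait = start - job.submit
        waits.append(wait)
        slowdowns.append((wait + job.runtime) / max(job.runtime, 10.0))
    n = len(jobs)
    return sum(waits) / n, sum(slowdowns) / n

# Two 4-processor jobs arriving together on a 4-processor machine:
# the second must wait for the first to finish.
mean_wait, mean_bsld = fcfs_metrics(
    [Job(0, 100, 4), Job(0, 100, 4)], total_procs=4)
```

Publishing the metric computation alongside the trace lets others reproduce the reported numbers exactly and compare alternative schedulers against the production schedule on equal terms.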

We envision that these papers will provide sufficient detail to enable the development of new scheduling approaches that can be robustly compared with the schedules used in production facilities and with other approaches to the same problems.

Workshop organizers:

Walfredo Cirne, Google
Narayan Desai, Google
Dalibor Klusáček, CESNET a.l.e.

Program committee:

Henri Casanova, University of Hawaii at Manoa
Julita Corbalan, Technical University of Catalonia
Dick Epema, Delft University of Technology
Hyeonsang Eom, Seoul National University
Dror Feitelson, The Hebrew University
Liana Fong, IBM T. J. Watson Research Center
Eitan Frachtenberg, Facebook
Alfredo Goldman, University of Sao Paulo
Allan Gottlieb, New York University
Zhiling Lan, Illinois Institute of Technology
Bill Nitzberg, Altair Engineering
P-O Östberg, Umeå University
Larry Rudolph, Two Sigma Investments
Uwe Schwiegelshohn, Technical University Dortmund
Leonel Sousa, Universidade Técnica de Lisboa
Mark Squillante, IBM T. J. Watson Research Center
Wei Tang, Google NYC
Ramin Yahyapour, GWDG - University of Göttingen
Carlo Curino, Microsoft

Related Resources

ICS 2018   the 32nd ACM International Conference on Supercomputing
IPDPS 2018   32nd IEEE International Parallel and Distributed Processing Symposium
ISC HPC 2018   ISC High Performance 2018
OpenSuCo @ ISC HPC 2017   2017 International Workshop on Open Source Supercomputing
ICAPS 2018   The 28th International Conference on Automated Planning and Scheduling
PDP 2018   The 26th Euromicro International Conference on Parallel, Distributed and Network-Based Processing
HPDC 2018   The 27th International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC'18)