Posted by organizer: gfursin

ResCuE-HPC 2018 : 1st Workshop on Reproducible, Customizable and Portable Workflows for HPC


Link: http://rescue-hpc.org
 
When Nov 11, 2018 - Nov 11, 2018
Where Dallas, TX, USA
Submission Deadline Sep 5, 2018
Notification Due Sep 21, 2018
Final Version Due Nov 11, 2018
Categories    workflows   reproducibility   portability   automation
 

Call For Papers

==== Introduction ====

Reproducibility is critical for the scientific process.
Sharing the artifacts, data, and workflows associated with
papers forces authors to turn a more careful eye to their own
work, and it enables other scientists to easily validate and
build on prior work. Over the past five years, many top-tier
parallel computing conferences have established Artifact
Evaluation (AE) initiatives. The community has bought in, and
nearly half of accepted papers now include artifacts.

Unfortunately, recent attempts to reproduce experimental
results show that many challenges still remain. A lack
of common tools, along with increasingly deep stacks
of dependencies, leads to ad-hoc workflows, and evaluators
struggle to install, run, and analyze experiments. These
challenges are not unique to artifact evaluation. Users
of production simulation codes struggle to reproduce complex
workflows, even on the same machine. Benchmark suites are
notoriously complex to configure and work with, and
reproducing their performance can be a daunting task. Indeed,
nearly all shared artifacts still require manual steps and
human intuition, which ultimately makes them difficult
to customize, port, reuse, and build upon.

ResCuE-HPC will bring together HPC researchers and
practitioners to propose and discuss ways to enable
reproducible, portable and customizable experimental workflows
for HPC. We are interested in contributions that describe
the state of the art and pitfalls of reproducibility, as well
as improvements to existing frameworks, benchmarks and
datasets that can be used to run HPC workloads across multiple
software versions and hardware architectures. Ultimately,
we aim to automate artifact evaluation, benchmarking, and
workflows with a common co-design framework, and
collaboratively solve reproducibility issues in HPC.

==== Topics of Interest ====

We invite position papers of up to 4 pages presenting novel
or existing practical solutions to:

- automate and unify artifact evaluation at HPC conferences;

- share artifacts (workloads, benchmarks, data sets, models,
tools), workflows and experiments in a portable, customizable,
and reusable format;

- automatically and natively install and rebuild all software
dependencies required for shared experimental workflows
on different machines and environments;

- automatically report and visualize experimental results,
including interactive articles, to assist the reproducibility
initiatives at SC and other conferences and journals;

- continuously validate experiments from past research and
report/record unexpected behavior (bugs, numerical
instability, variation in empirical results such as execution
time or energy measurements) on new and evolving software
and hardware stacks;

- establish open repositories of common benchmarks, data sets
and tools to accelerate knowledge exchange between HPC
centers;

- enable universal, customizable and multi-objective
auto-tuning and co-design of HPC software and hardware in
a reproducible and reusable way;

- unify statistical analysis and predictive modeling
techniques to improve reproducibility of empirical
experimental results.

We also encourage submissions demonstrating practical
use cases of portable, customizable and reusable HPC workflows
built by connecting existing tools, including but not
limited to Spack, Collective Knowledge, EasyBuild, and the
Common Workflow Language.
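As one concrete illustration of the kind of tool composition above, a Spack environment lets a workflow's software stack be declared once and rebuilt natively on another machine. The sketch below is a hypothetical `spack.yaml` manifest; the package names and versions are placeholders for illustration, not part of this call:

```yaml
# Hypothetical Spack environment manifest (spack.yaml).
# Declares an illustrative software stack so the same dependencies
# can be concretized and reinstalled on a different machine.
spack:
  specs:
    - hdf5@1.10 +mpi    # placeholder application dependency
    - python@3.6        # placeholder toolchain component
  view: true            # expose installed packages under a single prefix
```

Activating the environment and running `spack install` would then rebuild the declared stack on the target system, which is the kind of automated dependency handling the topics above call for.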

==== Format ====

The day will be organized into sessions of 3-4 related papers.
To spark discussion, each author will briefly introduce their
techniques (10-15 minutes), followed by an open panel
discussion with the audience.

There will be no formal proceedings for the first edition
of this workshop! Instead, all ResCuE-HPC authors will be able
to participate in the preparation of a single ResCuE-HPC report.
The report will focus on gradual convergence on a common
experimental methodology, as well as possible formats for
workflow and artifact sharing (meta-information and API).
We plan to make this report available to reproducibility and
artifact evaluation chairs at the leading HPC, ML and systems
conferences, as well as to the ACM task force on
reproducibility, of which we are founding members.

We hope that the ResCuE-HPC workshop will help the community
gradually converge on a common experimental methodology and
possible formats for workflow and artifact sharing
(meta-information and API). This, in turn, should help our community
unify and automate Artifact Evaluation, benchmarking, and
workflows. The resulting ability to quickly prototype research
ideas will dramatically accelerate development of the next
generation of HPC software and hardware by reusing prior work.


==== Important Dates ====

* Paper submission deadline: 5 September 2018
* Author notification: 21 September 2018
* Workshop: 11 November 2018 (morning, Room 1)

Please see the SC18 home page (http://sc18.supercomputing.org)
for registration deadlines and other related information.

==== Submission Guidelines ====

Authors must submit their position papers (max 4 pages)
using double-column, single-spaced letter format
as PDF files.

All papers should be submitted via the SC18 submission website
(see the link at rescue-hpc.org).

Submissions are single-blind and will be peer-reviewed.

==== Workshop Organizers (A-Z) ====

* Grigori Fursin, cTuning foundation/dividiti
* Todd Gamblin, LLNL
* Milos Puzovic, Hartree Centre
* Michela Taufer, University of Delaware

==== Confirmed Keynote ====

* Michael A. Heroux, Sandia National Laboratories

==== Program Committee ====

* Lorena A Barba, George Washington University
* Bruce Childers, University of Pittsburgh
* Kenneth Hoste, Ghent University
* Ivo Jimenez, UC Santa Cruz
* Daniel S Katz, NCSA
* Arnaud Legrand, INRIA / CNRS
* Bernd Mohr, Julich Supercomputing Centre
* David Richards, LLNL
* Victoria Stodden, Stanford University
* Flavio Vella, dividiti
