
P-RECS 2018 : First International Workshop On Practical Reproducible Evaluation of Computer Systems


When Jun 11, 2018 - Jun 11, 2018
Where Tempe, AZ
Submission Deadline Apr 9, 2018
Notification Due Apr 30, 2018
Final Version Due May 6, 2018
Categories    reproducibility   automation   computational science   data science

Call For Papers

# P-RECS '18

First International Workshop on Practical Reproducible Evaluation of
Computer Systems.

June 11, 2018. In conjunction with HPDC'18. In cooperation with
SIGHPC (pending).

Independent evaluation of experimental results in the area of computer
and networking systems is a challenging task. Recreating the
environment where an experiment originally ran is commonly considered
impractical or even impossible. This workshop will focus heavily on
practical, actionable aspects of reproducibility in broad areas of
computational science and data exploration, with special emphasis on
issues in which community collaboration can be essential for adopting
novel methodologies, techniques and frameworks aimed at addressing
some of the challenges we face today. The workshop will bring together
researchers and experts to share experiences and advance the state of
the art in the reproducible evaluation of computer systems, featuring
contributed papers and invited talks.

## Topics

We invite submissions on topics including, but not limited to:

* Experiment dependency management.
* Software citation and persistence.
* Data versioning and preservation.
* Provenance of data-intensive experiments.
* Tools and techniques for incorporating provenance into publications.
* Automated experiment execution and validation.
* Experiment portability for code, performance, and related metrics.
* Experiment discoverability for re-use.
* Cost-benefit analysis frameworks for reproducibility.
* Usability and adaptability of reproducibility frameworks into already-established domain-specific tools.
* Long-term artifact archiving for future reproducibility.
* Frameworks for sociological constructs to incentivize paradigm shifts.
* Policies around publication of articles/software.
* Blinding and selecting artifacts for review while maintaining history.
* Reproducibility-aware computational infrastructure.

## Submission

Submit via EasyChair. We look for two categories of submissions:

* **Position papers**. This category is for papers whose goal is to
propose solutions (or scope the work that needs to be done) to
address some of the issues outlined above. We hope that a research
agenda comes out of this and that we can create a community that
meets yearly to report on our status in addressing these problems.

* **Experience papers**. This category consists of papers reporting
on the authors' experience automating one or more experimentation
pipelines. The committee will look for submissions that reflect on
this experience: what worked? What aspects of experiment automation
and validation are hard in your domain? What can be done to improve
the tooling for your domain? As part of the submission, authors need
to provide a URL to the automation service they use (e.g., TravisCI,
Jenkins) so reviewers can verify that there are one or more
automated pipelines associated with the submission.
### Format

Authors are invited to submit manuscripts in English not exceeding 5
pages of content. The 5-page limit includes figures, tables and
appendices, but does not include references, for which there is no
page limit. Submissions must use the ACM Master Template (please use
the `sigconf` format with default options).

### Proceedings

The proceedings will be archived in both the ACM Digital Library and
IEEE Xplore through SIGHPC.

### Tools

These tools can optionally be used to automate your experiments:
ReproZip and Sciunit, among others.

## Important Dates

* Submissions due: *April 9*, 2018
* Acceptance notification: April 30, 2018
* Camera-ready paper submission: May 6, 2018
* Workshop: June 11, 2018

## Organizers

* Ivo Jimenez, UC Santa Cruz
* Carlos Maltzahn, UC Santa Cruz
* Jay Lofstead, Sandia National Laboratories

## Program Committee

* Divyashri Bhat, UMass Amherst
* Michael Crusoe, CWL Project Lead
* Anja Feldmann, TU Berlin
* Todd Gamblin, LLNL
* Mike Heroux, Sandia National Laboratories
* Torsten Hoefler, ETH Zürich
* Neil Chue Hong, Software Sustainability Institute / University of
Edinburgh, UK
* Dan Katz, NCSA
* Kate Keahey, Argonne National Lab / ChameleonCloud
* Ignacio Laguna, LLNL
* Arnaud Legrand, CNRS / LIG
* Reed Milewicz, Sandia National Laboratories
* Robert Ricci, University of Utah / CloudLab
* Victoria Stodden, UIUC
* Violet R. Syrotiuk, ASU
* Michela Taufer, University of Delaware
* Michael Zink, UMass Amherst

## Contact

Please address workshop questions to the organizers.
