
DLonSC 2020 : The 4th International Workshop on Deep Learning on Supercomputers


Link: https://dlonsc.github.io/
 
When Jun 25, 2020 - Jun 25, 2020
Where Frankfurt, Germany
Submission Deadline May 10, 2020
Notification Due May 24, 2020
Final Version Due Jun 1, 2020
Categories    computer science   machine learning   artificial intelligence
 

Call For Papers

The Deep Learning on Supercomputers workshop will be held with ISC'20 on June 25th, 2020 in Frankfurt, Germany. It is the fourth workshop in the Deep Learning on Supercomputers series. The workshop provides a forum for practitioners working on any and all aspects of DL for scientific research in the High Performance Computing (HPC) context to present their latest research results and their development, deployment, and application experiences. The general theme of this workshop series is the intersection of DL and HPC; the theme of this particular workshop centers on applications of deep learning methods in scientific research: novel uses of deep learning methods, e.g., convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and reinforcement learning (RL), in both natural and social science research, as well as innovative applications of deep learning in traditional numerical simulation. Its scope encompasses application development in scientific scenarios using HPC platforms; DL methods applied to numerical simulation; fundamental algorithms, enhanced procedures, and software development methods to enable scalable training and inference; hardware changes with impact on future supercomputer design; and machine deployment, performance evaluation, and reproducibility practices for DL applications, with an emphasis on scientific usage.

Topics include but are not limited to:

- DL as a novel approach of scientific computing
- Emerging scientific applications driven by DL methods
- Novel interactions between DL and traditional numerical simulation
- Effectiveness and limitations of DL methods in scientific research
- Algorithms and procedures to enhance reproducibility of scientific DL applications
- DL for science workflows
- Data management through the life cycle of scientific DL applications
- General algorithms and procedures for efficient and scalable DL training
- Scalable DL methods to address the challenges of demanding scientific applications
- General algorithms and systems for large scale model serving for scientific use cases
- New software, and enhancements to existing software, for scalable DL
- DL communication optimization at scale
- I/O optimization for DL at scale
- DL performance evaluation and analysis on deployed systems
- Performance modeling and tuning of DL on supercomputers
- DL benchmarks on supercomputers
- Novel hardware designs for more efficient DL
- Processors, accelerators, memory hierarchy, interconnect changes with impact on deep learning in the HPC context

As part of the reproducibility initiative, the workshop requires authors to provide information such as the algorithms, software releases, datasets, and hardware configurations used. For performance evaluation studies, we encourage authors to use well-known benchmarks or applications with openly accessible datasets: for example, MLPerf, or ResNet-50 with the ImageNet-1K dataset.

Important Dates

Technical paper due: May 10th, 2020
Acceptance notification: May 24th, 2020
Camera ready: June 1st, 2020
Workshop date: June 25th, 2020

Paper Submission

Authors are invited to submit unpublished, original work of at least 6 and at most 12 pages of single-column text in LNCS style. All submissions should be in LNCS format; submissions will tentatively be handled through Linklings.

Organizing Committee

Valeriu Codreanu (co-chair), SURFsara, Netherlands
Ian Foster (co-chair), UChicago & ANL, USA
Zhao Zhang (co-chair), TACC, USA
Weijia Xu (proceedings chair), TACC, USA
Ahmed Al-Jarro, Fujitsu Laboratories of Europe, UK
Takuya Akiba, Preferred Networks, Japan
Thomas S. Brettin, ANL, USA
Maxwell Cai, SURFsara, Netherlands
Erich Elsen, DeepMind, USA
Steve Farrell, LBNL, USA
Song Feng, IBM Research, USA
Boris Ginsburg, Nvidia, USA
Torsten Hoefler, ETH, Switzerland
Jessy Li, UT Austin, USA
Zhengchun Liu, ANL, USA
Peter Messmer, Nvidia, USA
Damian Podareanu, SURFsara, Netherlands
Simon Portegies Zwart, Leiden Observatory, Netherlands
Qifan Pu, Google, USA
Arvind Ramanathan, ANL, USA
Vikram Saletore, Intel, USA
Mikhail E. Smorkalov, Huawei, Russia
Rob Schreiber, Cerebras, USA
Dan Stanzione, TACC, USA
Rick Stevens, UChicago & ANL, USA
Wei Tan, Citadel, USA
Jordi Torres, Barcelona Supercomputing Center, Spain
Daniela Ushizima, LBNL, USA
Sofia Vallecorsa, CERN, Switzerland
David Walling, TACC, USA
Markus Weimer, Microsoft, USA
Kathy Yelick, UC Berkeley & LBNL, USA
Huazhe Zhang, Facebook, USA
