
DLonSC 2021 : The 6th International Workshop on Deep Learning on Supercomputers


When Jul 2, 2021 - Jul 2, 2021
Where Frankfurt, Germany
Submission Deadline Apr 17, 2021
Notification Due May 1, 2021
Final Version Due Jun 17, 2021
Categories    computer science   machine learning   artificial intelligence

Call For Papers

The Deep Learning (DL) on Supercomputers workshop provides a forum for practitioners working on all aspects of DL for scientific research in the High Performance Computing (HPC) context to present their latest research results and their development, deployment, and application experiences. The general theme of this workshop series is the intersection of DL and HPC; the theme of this particular workshop centers on applications of deep learning methods in scientific research: novel uses of DL methods, e.g., convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and reinforcement learning (RL), in both natural and social science research, as well as innovative applications of deep learning in traditional numerical simulation.

The workshop's scope encompasses application development in scientific scenarios using HPC platforms; DL methods applied to numerical simulation; fundamental algorithms, enhanced procedures, and software development methods to enable scalable training and inference; hardware changes with impact on future supercomputer design; and machine deployment, performance evaluation, and reproducibility practices for DL applications, with an emphasis on scientific usage. The workshop is centered around published papers: submissions will be peer-reviewed, and accepted papers will be published as part of the Joint Workshop Proceedings by Springer.

Topics include but are not limited to:

DL as a novel approach of scientific computing
Emerging scientific applications driven by DL methods
Novel interactions between DL and traditional numerical simulation
Effectiveness and limitations of DL methods in scientific research
Algorithms and procedures to enhance reproducibility of scientific DL applications
DL for science workflows
Data management through the life cycle of scientific DL applications
General algorithms and procedures for efficient and scalable DL training
Scalable DL methods to address the challenges of demanding scientific applications
General algorithms and systems for large scale model serving for scientific use cases
New software, and enhancements to existing software, for scalable DL
DL communication optimization at scale
I/O optimization for DL at scale
DL performance evaluation and analysis on deployed systems
Performance modeling and tuning of DL on supercomputers
DL benchmarks on supercomputers
Novel hardware designs for more efficient DL
Processor, accelerator, memory hierarchy, and interconnect changes with impact on deep learning in the HPC context
As part of the reproducibility initiative, the workshop requires authors to provide information such as the algorithms, software releases, datasets, and hardware configurations used. For performance evaluation studies, authors are encouraged to use well-known benchmarks or applications with openly accessible datasets, for example, MLPerf or ResNet-50 with the ImageNet-1K dataset.

Important Dates
Technical paper due: April 17th, 2021
Acceptance notification: May 1st, 2021
Camera ready: June 17th, 2021
Workshop date: July 2nd, 2021

Paper Submission
Authors are invited to submit unpublished, original work of at least 6 and at most 12 pages in single-column LNCS style. All submissions should be in LNCS format and submitted (tentatively) via EasyChair.

Organizing Committee
Valeriu Codreanu (co-chair), SURF, Netherlands
Ian Foster (co-chair), UChicago & ANL, USA
Zhao Zhang (co-chair), TACC, USA
Weijia Xu (proceeding chair), TACC, USA
Ahmed Al-Jarro, Fujitsu Laboratories of Europe, UK
Takuya Akiba, Preferred Networks, Japan
Thomas S. Brettin, ANL, USA
Maxwell Cai, SURF, Netherlands
Erich Elsen, DeepMind, USA
Steve Farrell, LBNL, USA
Song Feng, IBM Research, USA
Boris Ginsburg, Nvidia, USA
Torsten Hoefler, ETH, Switzerland
Jessy Li, UT Austin, USA
Zhengchun Liu, ANL, USA
Peter Messmer, Nvidia, USA
Damian Podareanu, SURF, Netherlands
Simon Portegies Zwart, Leiden Observatory, Netherlands
Qifan Pu, Google, USA
Arvind Ramanathan, ANL, USA
Vikram Saletore, Intel, USA
Mikhail E. Smorkalov, Huawei, Russia
Rob Schreiber, Cerebras, USA
Dan Stanzione, TACC, USA
Rick Stevens, UChicago & ANL, USA
Wei Tan, Citadel, USA
Jordi Torres, Barcelona Supercomputing Center, Spain
Daniela Ushizima, LBNL, USA
Sofia Vallecorsa, CERN, Switzerland
David Walling, TACC, USA
Markus Weimer, Microsoft, USA
Kathy Yelick, UC Berkeley & LBNL, USA
Huazhe Zhang, Facebook, USA
