ScaDL 2023 : Scalable Deep Learning over Parallel And Distributed Infrastructures


When May 19, 2023 - May 19, 2023
Where St. Petersburg, Florida, USA
Submission Deadline Feb 14, 2023
Notification Due Feb 26, 2023
Final Version Due Mar 3, 2023

Call For Papers

Dear colleagues,

Apologies for multiple postings.

ScaDL 2023: Scalable Deep Learning over Parallel And Distributed
Infrastructures - An IPDPS 2023 Workshop

Scope of the Workshop:
Recently, Deep Learning (DL) has received tremendous attention in the research
community because of the impressive results obtained for a large number of
machine learning problems. The success of state-of-the-art deep learning
systems relies on training deep neural networks over a massive amount of
training data, which typically requires a large-scale distributed computing
infrastructure to run. In order to run these jobs in a scalable and efficient
manner on cloud infrastructure or dedicated HPC systems, several interesting
research topics specific to DL have emerged. The sheer size and complexity of
deep learning models trained over large amounts of data make them hard to
converge in a reasonable amount of time. This demands advances along multiple
research directions such as model/data parallelism, model/data compression,
distributed optimization algorithms for DL convergence, synchronization
strategies, efficient communication, and specific hardware acceleration.

ScaDL seeks to advance the following research directions:
- Asynchronous and Communication-Efficient SGD: Stochastic gradient descent is
at the core of large-scale machine learning. Parallelizing SGD gradient
computation across multiple nodes increases the data processed per iteration,
but exposes SGD to communication and synchronization delays and to
unpredictable node failures. Thus, there is a critical need to design robust
and scalable distributed SGD methods that achieve fast error convergence in
spite of such system variabilities (see the first sketch after this list).
- High-Performance Computing Aspects: Deep learning is highly compute-intensive.
Algorithms for kernel computations on commonly used accelerators (e.g., GPUs),
efficient techniques for communicating gradients, and fast data loading from
storage are critical for training performance.

- Model and Gradient Compression Techniques: Techniques such as reducing the
number of weights and the size of weight tensors help reduce compute
complexity. Quantization (lower-bit representations) and sparsification allow
for more efficient use of memory and communication bandwidth (see the second
sketch after this list).

- Distributed Trustworthy AI: New techniques are needed to meet the goal of
global trustworthiness (e.g., fairness and adversarial robustness) efficiently
in a distributed DL setting.

- Emerging AI Hardware Accelerators: With the proliferation of new hardware
accelerators for AI, such as in-memory computing (analog AI) and neuromorphic
computing, novel methods and algorithms need to be introduced to adapt to the
underlying properties of the new hardware (e.g., the non-idealities of
phase-change memory (PCM) and cycle-to-cycle statistical variations); see the
third sketch after this list.

- The intersection of Distributed DL and Neural Architecture Search (NAS): NAS
is increasingly being used to automate the synthesis of neural networks.
However, given the huge computational demands of NAS, distributed DL is
critical to make NAS computationally tractable (e.g., differentiable
distributed NAS).
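
The following minimal sketch illustrates the synchronous variant of
data-parallel SGD referenced in the first direction above: simulated workers
compute gradients on disjoint data shards, and averaging the gradients stands
in for a real all-reduce. It is plain Python/NumPy; the toy linear-regression
task, worker count, learning rate, and all names are illustrative assumptions,
not part of the workshop program.

    import numpy as np

    # Toy synchronous data-parallel SGD on linear regression. Each "worker"
    # owns one shard of the data; the per-shard gradients are averaged,
    # standing in for an all-reduce across nodes.
    rng = np.random.default_rng(0)
    n_workers, n_samples, dim = 4, 1024, 8
    X = rng.normal(size=(n_samples, dim))
    w_true = rng.normal(size=dim)
    y = X @ w_true + 0.01 * rng.normal(size=n_samples)
    shards = np.array_split(np.arange(n_samples), n_workers)

    def local_gradient(w, idx):
        # Gradient of the mean squared error on one worker's shard.
        Xi, yi = X[idx], y[idx]
        return Xi.T @ (Xi @ w - yi) / len(idx)

    w, lr = np.zeros(dim), 0.1
    for step in range(200):
        grads = [local_gradient(w, idx) for idx in shards]  # parallel in practice
        w -= lr * np.mean(grads, axis=0)                    # simulated all-reduce
    print("parameter error:", np.linalg.norm(w - w_true))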
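
The second sketch illustrates one common idea from the compression direction
above: top-k gradient sparsification with error feedback, where only the k
largest-magnitude gradient entries are transmitted and the untransmitted
remainder accumulates locally into a residual. The sizes, the random stand-in
for a worker's gradient, and all names are illustrative assumptions.

    import numpy as np

    def topk_compress(g, k):
        # Keep the k largest-magnitude entries of g, zero out the rest.
        idx = np.argpartition(np.abs(g), -k)[-k:]
        sparse = np.zeros_like(g)
        sparse[idx] = g[idx]
        return sparse

    rng = np.random.default_rng(1)
    dim, k = 1000, 50
    g_full = rng.normal(size=dim)       # stand-in for a worker's gradient
    residual = np.zeros(dim)            # local error-feedback buffer

    corrected = g_full + residual       # add back what was dropped last round
    sent = topk_compress(corrected, k)  # what actually crosses the network
    residual = corrected - sent         # remember what was not transmitted
    print("fraction of entries sent:", np.count_nonzero(sent) / dim)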
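
The third sketch relates to the emerging-hardware direction: a toy
noise-injection step in which weights are perturbed with fresh multiplicative
Gaussian noise at each use, a simple stand-in for PCM programming noise and
cycle-to-cycle variation; noise-aware training builds on this kind of
perturbed forward pass. The noise scale and all names are illustrative
assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    def noisy_matmul(x, W, noise_scale=0.05):
        # Each use of W sees a fresh perturbation, mimicking cycle-to-cycle
        # variation in an analog crossbar; the scale is an assumed figure.
        W_noisy = W * (1.0 + noise_scale * rng.normal(size=W.shape))
        return x @ W_noisy

    x = rng.normal(size=(1, 16))
    W = rng.normal(size=(16, 4))
    clean = x @ W
    noisy = noisy_matmul(x, W)
    print("relative output error:",
          np.linalg.norm(noisy - clean) / np.linalg.norm(clean))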

This intersection of distributed/parallel computing and deep learning is
becoming critical, and it demands specific attention to the above topics,
which some of the broader forums may not be able to provide. The aim of this
workshop is to foster collaboration among researchers from the
distributed/parallel computing and deep learning communities to share relevant
topics and results of current approaches lying at the intersection of these
areas.

Areas of Interest
In this workshop, we solicit research papers focused on distributed deep
learning aiming to achieve efficiency and scalability for deep learning jobs
over distributed and parallel systems. Papers focusing both on algorithms as
well as systems are welcome. We invite authors to submit papers on topics
including but not limited to:

- Deep learning on cloud platforms, HPC systems, and edge devices
- Model-parallel and data-parallel techniques
- Asynchronous SGD for Training DNNs
- Communication-Efficient Training of DNNs
- Scalable and distributed graph neural networks, Sampling techniques for
graph neural networks
- Federated deep learning, both horizontal and vertical, and its challenges
- Model/data/gradient compression
- Learning in resource-constrained environments
- Coding Techniques for Straggler Mitigation
- Elasticity for deep learning jobs/spot market enablement
- Hyper-parameter tuning for deep learning jobs
- Hardware Acceleration for Deep Learning including digital and analog
- Scalability of deep learning jobs on large clusters
- Deep learning on heterogeneous infrastructure
- Efficient and Scalable Inference
- Data storage/access in shared networks for deep learning
- Communication-efficient distributed fair and adversarially robust learning
- Distributed learning techniques applied to speed up neural architecture search

Workshop Format:
Due to the continuing impact of COVID-19, ScaDL 2023 will also adopt relevant
IPDPS 2023 policies on virtual participation and presentation. Consequently,
the organizers are currently planning a hybrid (in-person and virtual) event.

Submission Link:
Submissions will be managed through Linklings. The submission link is
available at:
Key Dates
Paper Submission: February 14th, 2023 (EXTENDED)
Acceptance Notification: February 26th, 2023
Camera-ready papers due: March 3rd, 2023
Workshop Date: May 19th, 2023

Author Instructions
ScaDL 2023 accepts submissions in two categories:
- Regular papers: 8-10 pages
- Short papers/Work in progress: 4 pages
The aforementioned lengths include all technical content, references, and
appendices.
We encourage submissions that are original research work, work in progress,
case studies, vision papers, and industrial experience papers.
Papers should be formatted using IEEE conference style, including figures,
tables, and references. The IEEE conference style templates for MS Word and
LaTeX provided by IEEE eXpress Conference Publishing are available for
download. See the latest versions at

General Chairs
Kaoutar El Maghraoui, IBM Research AI, USA
Daniele Lezzi, Barcelona Supercomputing Center, Spain

Program Committee Chairs
Misbah Mubarak, NVIDIA, USA
Alex Gittens, Rensselaer Polytechnic Institute (RPI), USA

Publicity Chairs
Federica Filippini, Politecnico di Milano, Italy
Hadjer Benmeziane, Université Polytechnique des Hauts-de-France, France

Web Chair
Praveen Venkateswaran, IBM Research AI, USA

Steering Committee
Parijat Dube, IBM Research AI, USA
Vinod Muthusamy, IBM Research AI, USA
Ashish Verma, IBM Research AI, USA
Jayaram K. R., IBM Research AI, USA
Yogish Sabharwal, IBM Research AI, India
Danilo Ardagna, Politecnico di Milano, Italy
