Posted by organizer: youngsr

MLHPC 2015 : Machine Learning in High Performance Computing Environments Workshop


When Nov 15, 2015 - Nov 15, 2015
Where Austin, Texas
Submission Deadline Sep 15, 2015
Notification Due Oct 1, 2015
Final Version Due Oct 12, 2015
Categories    machine learning   HPC   supercomputing   deep learning

Call For Papers

This workshop will be held in conjunction with SC15: The International Conference for High Performance Computing, Networking, Storage and Analysis, in Austin, Texas on November 15 - 20. Its intent is to bring together researchers, practitioners, and scientific communities to discuss methods that utilize extreme-scale systems for machine learning. The workshop will focus on the greatest challenges in utilizing HPC for machine learning and on methods for exploiting data parallelism, model parallelism, ensembles, and parameter search. We invite researchers and practitioners to participate, to discuss the challenges of using HPC for machine learning, and to share the wide range of applications that would benefit from HPC-powered machine learning.

In recent years, the models and data available for machine learning (ML) applications have grown dramatically. High performance computing (HPC) offers the opportunity to accelerate performance and deepen understanding of large data sets through machine learning. Current literature and public implementations focus on either cloud-based or small-scale GPU environments. These implementations do not scale well in HPC environments because of inefficient data movement and network communication within the compute cluster, which stem from the significant disparity in the level of parallelism. Additionally, applying machine learning to extreme-scale scientific data is largely unexplored. To leverage HPC for ML applications, serious advances will be required in both algorithms and their scalable, parallel implementations.
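The communication bottleneck described above appears even in the simplest data-parallel training loop, where every optimization step ends with an allreduce of gradients across workers. The following is an illustrative sketch only (not from the CFP): plain Python stands in for MPI workers and the allreduce, and all names and the toy least-squares problem are hypothetical.

```python
# Illustrative sketch of data-parallel gradient descent. In a real HPC
# setting each shard lives on its own rank and allreduce_mean would be an
# MPI_Allreduce; here everything runs in one process for clarity.

def local_gradient(w, shard):
    # Gradient of mean squared error for the model y = w * x on one shard.
    g = 0.0
    for x, y in shard:
        g += 2.0 * (w * x - y) * x
    return g / len(shard)

def allreduce_mean(values):
    # Stand-in for an allreduce: average the per-worker gradients.
    # This is the per-step synchronization whose network cost the
    # workshop text identifies as the scaling bottleneck.
    return sum(values) / len(values)

def data_parallel_sgd(shards, w=0.0, lr=0.01, steps=200):
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # parallel in practice
        w -= lr * allreduce_mean(grads)                 # one sync per step
    return w

# Toy data drawn from y = 3 * x, split round-robin across 4 "workers".
data = [(float(x), 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
w = data_parallel_sgd(shards)  # converges toward w = 3.0
```

Because the allreduce runs once per step, its latency is paid on every update; this is why implementations tuned for a handful of GPUs degrade when the worker count, and hence the synchronization cost, grows by orders of magnitude.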

Topics include, but are not limited to:
-Machine learning models, including deep learning, for extreme scale systems
-Enhancing applicability of machine learning in HPC (e.g., feature engineering, usability)
-Learning large models/optimizing hyperparameters (e.g., deep learning, representation learning)
-Facilitating very large ensembles in extreme scale systems
-Training machine learning models on large datasets and scientific data
-Overcoming the problems inherent to large datasets (e.g., noisy labels, missing data, scalable ingest)
-Applications of machine learning utilizing HPC
-Future research challenges for machine learning at large scale
-Large scale machine learning applications
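Several of the topics above, notably hyperparameter optimization and very large ensembles, are embarrassingly parallel and map naturally onto HPC job farming. A minimal illustrative sketch, with a thread pool standing in for a cluster scheduler; the objective function and all names are hypothetical, not from the CFP:

```python
# Illustrative sketch of parallel hyperparameter search. Each candidate
# configuration is evaluated independently, so the work fans out with no
# inter-task communication -- the opposite extreme from data-parallel SGD.
from concurrent.futures import ThreadPoolExecutor

def validation_loss(lr):
    # Hypothetical stand-in for training a model with learning rate `lr`
    # and scoring it on held-out data; minimized here at lr = 0.1.
    return (lr - 0.1) ** 2

def grid_search(candidates):
    # On a cluster, each evaluation would be its own job or MPI rank.
    with ThreadPoolExecutor() as pool:
        losses = list(pool.map(validation_loss, candidates))
    best = min(range(len(candidates)), key=losses.__getitem__)
    return candidates[best]

best_lr = grid_search([0.001, 0.01, 0.1, 1.0])  # -> 0.1
```

Because the tasks share nothing, this pattern scales to as many nodes as there are candidates, which is why parameter search and ensembles are singled out as early wins for ML on extreme-scale systems.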

Authors are invited to submit full papers of unpublished, original work of not more than 10 pages. Authors are also welcome to submit 5-page papers describing initial research or early first-of-a-kind results, and 2-page poster abstracts. All papers should be formatted using the ACM style. All accepted papers (subject to post-review revisions) will be published in the ACM Digital Library and IEEE Xplore by ACM SIGHPC. Papers should be submitted using EasyChair.

Related Resources

ICML 2017   34th International Conference on Machine Learning
IJCAI 2017   International Joint Conference on Artificial Intelligence
MLDM 2017   Machine Learning and Data Mining in Pattern Recognition
ECML-PKDD 2017   European Conference on Machine Learning and Principles and Practice of Knowledge Discovery
DSAA 2017   The 4th IEEE International Conference on Data Science and Advanced Analytics 2017
PAKDD 2017   The 21st Pacific-Asia Conference on Knowledge Discovery and Data Mining
HPDC 2017   The 26th International ACM Symposium on High-Performance Parallel and Distributed Computing
ICLR 2017   5th International Conference on Learning Representations
MOCO 2017   International Conference on Movement and Computing (MOCO 2017)
HPC 2017   High Performance Computing Symposium