MTAGS: Many-Task Computing on Grids and Supercomputers


Past:   Proceedings on DBLP


 
 


Event | When | Where | Deadline
MTAGS 2016: 9th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers | Nov 14, 2016 | Salt Lake City, UT, USA | Aug 28, 2016
MTAGS 2013: 6th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers | Nov 17, 2013 | Denver, CO, USA | Sep 1, 2013
MTAGS 2012: 5th Workshop on Many-Task Computing on Grids and Supercomputers | Nov 12, 2012 | Salt Lake City, UT, USA | Sep 10, 2012
MTAGS 2011: 4th Workshop on Many-Task Computing on Grids and Supercomputers | Nov 14, 2011 | Seattle, WA, USA | Sep 2, 2011
MTAGS 2010: 3rd ACM Workshop on Many-Task Computing on Grids and Supercomputers | Nov 15, 2010 | New Orleans, LA, USA | Sep 1, 2010 (Aug 25, 2010)
MTAGS 2009: 2nd ACM Workshop on Many-Task Computing on Grids and Supercomputers | Nov 16, 2009 | Portland, OR, USA | Sep 1, 2009 (Aug 1, 2009)
MTAGS 2008: 1st Workshop on Many-Task Computing on Grids and Supercomputers | Nov 17, 2008 | Austin, TX, USA | Aug 15, 2008
 
 

Present CFP: 2016

Overview

The 9th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers (MTAGS) will provide the scientific community a dedicated forum for presenting new research, development, and deployment efforts of large-scale many-task computing (MTC) applications on large-scale clusters, clouds, grids, and supercomputers. MTC, the theme of the workshop, encompasses loosely coupled applications, which are generally composed of many tasks that together achieve some larger application goal. This workshop will cover challenges that can hamper efficiency and utilization in running applications on large-scale systems, such as local resource manager scalability and granularity, efficient utilization of raw hardware, parallel file-system contention and scalability, data management, I/O management, reliability at scale, and application scalability. We welcome paper submissions on theoretical, simulation, and systems topics, with special consideration given to papers addressing the intersection of petascale/exascale challenges with large-scale cloud computing. Papers will be peer-reviewed, and accepted papers will be published in the ACM SIGHPC workshop proceedings. The workshop will be held in conjunction with SC16, the International Conference on High Performance Computing, Networking, Storage and Analysis, in Salt Lake City on November 14th, 2016.
Scope

The advent of computation can be compared, in terms of the breadth and depth of its impact on research and scholarship, to the invention of writing and the development of modern mathematics. Scientific Computing has already begun to change how science is done, enabling scientific breakthroughs through new kinds of experiments that would have been impossible only a decade ago. As computing becomes a pervasive part of the scientific process, there is a great opportunity to make powerful computing techniques, previously reserved for projects with only the largest investments, available to a broad scientific community.

The massive increase in concurrency provided by modern hardware presents a challenge to scientific applications with large existing investments in previously developed software and limited ability to redesign from scratch using the latest programming models. Many-task computing (MTC) studies technologies, simple and advanced, to rapidly compose highly scalable applications from existing sequential codes. MTC encompasses loosely coupled applications, which are generally composed of many tasks (both independent and dependent tasks) to achieve some larger application goal. Growing from the successes of Globus, Condor, and national-scale grid computing infrastructures, MTC techniques have been deployed on many systems from single many-core systems (leveraging GPGPUs and Intel MIC accelerators), to the largest multi-petascale high-performance computing (HPC) systems. The development and deployment of these MTC systems have expanded the utility of the underlying technologies and fed back to improve the performance and usability of the technologies themselves. Similarly, technologies developed for cloud computing (including MapReduce-based models) can provide additional connections and innovations in computing techniques. MTAGS is a unique venue to promote HPC-related concepts to the broader scientific and cloud computing communities.
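The many-task pattern described above can be illustrated with a minimal sketch: many independent tasks, each wrapping an existing sequential code, are composed to achieve a larger goal. This example uses Python's standard `concurrent.futures` pool as a stand-in for a real MTC scheduler; the `simulate` function and its parameter sweep are hypothetical placeholders, not part of any system discussed in this CFP.

```python
# Minimal sketch of the many-task pattern: a parameter sweep over a
# toy sequential function, with a local thread pool standing in for
# the cluster-scale task scheduler an MTC system would provide.
from concurrent.futures import ThreadPoolExecutor


def simulate(param: int) -> int:
    """Stand-in for an existing sequential code run as one task."""
    return param * param


def sweep(params):
    # Each parameter becomes an independent task; gathering the
    # results achieves the larger application goal.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(simulate, params))


if __name__ == "__main__":
    print(sweep(range(5)))  # [0, 1, 4, 9, 16]
```

Real MTC middleware differs mainly in scale and placement: the same task graph is dispatched across thousands of nodes, with data-aware scheduling replacing the naive pool used here.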

We are entering a “big data” era, as advances in networking, instrumentation, simulation technologies, Internet computing, and social networks are producing data at an unprecedented rate. The collection, storage, analysis, and sharing of this data are thus among the greatest challenges of the 21st century. Support for data-intensive computing is critical to advancing modern science, as storage systems have seen the gap between their capacity and their bandwidth widen by more than 10-fold over the last decade. There is an emerging need for advanced techniques to manipulate, visualize, and interpret large datasets. While commonly associated with Hadoop and related systems, technologies from HPC and MTC are also applicable. This provides an opportunity to exchange large-scale data management technologies between scientific applications and industrial techniques, which is another emphasis of MTAGS.

Scientific Computing is the key to many domains' "holy grail" of new knowledge, and comes in many shapes and forms. Exchange of ideas from HPC, MTC, and cloud communities is a critical path to the adoption of advanced techniques to best utilize emerging, highly concurrent systems. Underlying techniques for concurrency and data processing originating in the HPC space must be delivered to the broader community to promote future investment in HPC research programs and, more generally, advance scientific investigations.

The 9th Workshop on Many-Task Computing on Clouds, Grids, and Supercomputers (MTAGS16) will provide the scientific community a dedicated forum for presenting new research, development, and deployment efforts of large-scale many-task computing (MTC) applications on large-scale clusters, grids, supercomputers, and cloud computing infrastructure. This workshop will cover challenges that can hamper efficiency and utilization in running applications on large-scale systems, such as local resource manager scalability and granularity, efficient utilization of raw hardware, parallel/distributed file-system contention and scalability, data management, I/O management, reliability at scale, and application scalability. This workshop encourages interaction and cross-pollination between those developing applications, algorithms, software, hardware, and networking, emphasizing many-task computing for large-scale distributed systems. We believe the workshop will be an excellent place to help the community define the current state of the art, determine future goals, and define architectures and services for future high-end computing infrastructure.

For more information on past workshops, please see MTAGS15, MTAGS14, MTAGS13, MTAGS12, MTAGS11, MTAGS10, MTAGS09, and MTAGS08. We also ran a special issue on Many-Task Computing in the IEEE Transactions on Parallel and Distributed Systems (TPDS), which appeared in June 2011 and can be found at http://datasys.cs.iit.edu/events/TPDS_MTC; the proceedings are online at http://www.computer.org/portal/web/csdl/abs/trans/td/2011/06/ttd201106toc.htm. In addition, we are running a special issue on Many-Task Computing in the Cloud in the IEEE Transactions on Cloud Computing: http://datasys.cs.iit.edu/events/TCC-MTC15/. We, the workshop organizers, have also published two papers that are highly relevant to this workshop: "Toward Loosely Coupled Programming on Petascale Systems", published at SC08, and “Many-Task Computing for Grids and Supercomputers”, published at MTAGS08, both of which have been highly cited, with 136 and 237 citations respectively.
Topics

We invite the submission of original work related to the topics below. Papers should be no more than 6 pages, including all figures and references. We aim to cover topics related to many-task computing on each of the three major distributed-systems paradigms: cloud computing, grid computing, and supercomputing. Topics of interest include:

Compute resource management
o Scheduling
o Job execution frameworks
o Local resource manager extensions
o Performance evaluation of resource managers in use on large-scale systems
o Dynamic resource provisioning
o Techniques to manage extreme concurrency and accelerators
o Challenges and opportunities in running many-task workloads on HPC systems
o Challenges and opportunities in running many-task workloads on cloud computing infrastructure

Storage architectures and implementations
o Distributed file systems
o Parallel file systems
o Distributed metadata management
o Content distribution systems for large data
o Data caching frameworks and techniques
o Data management within and across data centers
o Data-aware scheduling
o Data-intensive computing applications
o Eventual-consistency storage usage and management

Programming models and tools
o MapReduce, its generalizations, and implementations
o Many-task computing middleware and applications
o Parallel programming frameworks
o Ensemble MPI
o Service-oriented science applications

Large-scale workflow systems
o Workflow system performance and scalability analysis
o Scalability of workflow systems
o Workflow infrastructure and e-Science middleware
o Programming paradigms and models

Large-scale many-task applications
o High-throughput computing (HTC) applications
o Data-intensive applications
o Quasi-supercomputing applications, deployments, and experiences
o Application coupling, integration, and composition
o Algorithms for many-task applications: Monte Carlo, parameter sweep/search, uncertainty quantification
o Performance evaluation

Performance evaluation
o Theoretical vs. real systems
o Simulations
o Reliability and fault tolerance of large systems


Important Dates

Full paper due: August 28th, 2016
Acceptance notification: September 30th, 2016
Camera Ready Due: October 14th, 2016
Workshop date: November 14th, 2016

Paper Submission

Authors are invited to submit papers with unpublished, original work of not more than 6 pages of double-column text using single-spaced, 10-point type on 8.5 x 11 inch pages, as per IEEE 8.5 x 11 manuscript guidelines; document templates can be found at http://www.ieee.org/conferences_events/conferences/publishing/templates.html. The final 6-page papers in PDF format must be submitted online at EasyChair (https://easychair.org/conferences/?conf=mtags16) before the deadline. Papers will be peer-reviewed for novelty, scientific merit, and fit with the workshop's scope. Submission implies the willingness of at least one of the authors to register and present the paper.
Organization

General Chairs

Ke Wang, Intel Corporation, USA
Justin Wozniak, Argonne National Laboratory, USA
Ioan Raicu, Illinois Institute of Technology & Argonne National Laboratory, USA

Steering Committee

David Abramson, Monash University, Australia
Ian Foster, University of Chicago & Argonne National Laboratory
Yong Zhao, University of Electronic Science and Technology of China
Jack Dongarra, University of Tennessee, USA
Geoffrey Fox, Indiana University, USA
Manish Parashar, Rutgers University, USA
Marc Snir, Argonne National Laboratory & University of Illinois at Urbana Champaign, USA
Xian-He Sun, Illinois Institute of Technology, USA
Weimin Zheng, Tsinghua University, China

Program Committee

James Hamilton, Amazon Web Services, USA
Kamil Iskra, Argonne National Laboratory
Pete Beckman, Argonne National Laboratory
Rob Ross, Argonne National Laboratory
David O'Hallaron, Carnegie Mellon University, Intel Labs, USA
Hakim Weatherspoon, Cornell University, USA
Alexandru Iosup, Delft University of Technology, Netherlands
Jeff Chase, Duke University, USA
Catalin Dumitrescu, Fermi National Accelerator Laboratory
Jeff Dean, Google, Inc., USA
Alan Gara, IBM Research, USA
Jose Moreira, IBM Research, USA
Zhiling Lan, Illinois Institute of Technology
Marlon Pierce, Indiana University, USA
Geoffrey Fox, Indiana University, USA
Lavanya Ramakrishnan, Lawrence Berkeley National Laboratory
Tevfik Kosar, Louisiana State University, USA
Constantinos Evangelinos, Massachusetts Institute of Technology
Tony Hey, Microsoft Research, USA
Alec Wolman, Microsoft Research, USA
Dan Reed, Microsoft Research, USA
Dennis Gannon, Microsoft Research, USA
Michael Isard, Microsoft Research, USA
Mihai Budiu, Microsoft Research, USA
David Abramson, Monash University, Australia
Kento Aida, NII and Tokyo Institute of Technology
Alok Choudhary, Northwestern University, USA
Peter Dinda, Northwestern University, USA
Arthur Maccabe, Oak Ridge National Laboratory
Manish Parashar, Rutgers University
Valerie Taylor, Texas A&M
Edward Walker, Texas Advanced Computing Center
Matthew Woitaszek, University Corporation for Atmospheric Research
Florin Isaila, Universidad Carlos III de Madrid
Ignacio Llorente, Universidad Complutense de Madrid, Spain
Matei Ripeanu, University of British Columbia, Canada
Kathy Yelick, University of California at Berkeley, USA
Ken Yocum, University of California, San Diego
Rich Wolski, University of California, Santa Barbara, USA
Mike Wilde, University of Chicago & Argonne National Laboratory, USA
Daniel Katz, University of Chicago, USA
Henri Casanova, University of Hawai`i at Manoa
Robert Grossman, University of Illinois at Chicago, USA
Indranil Gupta, University of Illinois at Urbana Champaign
Marc Snir, University of Illinois at Urbana Champaign, USA
Rajkumar Buyya, University of Melbourne
Reagan Moore, University of North Carolina at Chapel Hill, USA
Douglas Thain, University of Notre Dame, USA
Adriana Iamnitchi, University of South Florida, USA
Ewa Deelman, University of Southern California
Ann Chervenak, University of Southern California, USA
Carl Kesselman, University of Southern California, USA
Ed Lazowska, University of Washington, USA
Alain Roy, University of Wisconsin
Miron Livny, University of Wisconsin, Madison, USA
Remzi Arpaci-Dusseau, University of Wisconsin, Madison, USA
Larry Rudolph, VMware, USA
Brian Cooper, Yahoo! Research, USA
Jik-Soo Kim, Korea Institute of Science and Technology Information, Daejeon, Republic of Korea
Scott Klasky, Oak Ridge National Laboratory, USA
Mike Boros, Cray, USA
 
