posted by organizer: djkerbyson

LSPP 2014 : Workshop on Large-Scale Parallel Processing 2014


When May 23, 2014 - May 23, 2014
Where Phoenix
Submission Deadline Jan 21, 2014
Notification Due Feb 20, 2014
Final Version Due Mar 14, 2014
Categories    HPC   performance   large-scale

Call For Papers

Call for papers: Workshop on LARGE-SCALE PARALLEL PROCESSING

to be held in conjunction with
IEEE International Parallel and Distributed Processing Symposium
Phoenix, AZ
May 23rd, 2014

SUBMISSION DEADLINE: January 21st 2014 (Extended)

Selected work presented at the workshop will be published in a
special issue of Parallel Processing Letters in December 2014.

The workshop on Large-Scale Parallel Processing is a forum that
focuses on computer systems that utilize thousands of processors
and beyond. Large-scale systems, referred to by some as
extreme-scale or ultra-scale, have many important research
aspects that need detailed examination in order for their
effective design, deployment, and utilization to take place.
These include handling the substantial increase in the number
of cores on a chip, the ensuing interconnection hierarchy, and
communication and synchronization mechanisms. Increasingly this is becoming an
issue of co-design involving performance, power and reliability
aspects. The workshop aims to bring together researchers from
different communities working on challenging problems in this
area for a dynamic exchange of ideas. Work at early stages of
development as well as work that has been demonstrated in
practice is equally welcome.

Of particular interest are papers that identify and analyze novel
ideas rather than provide incremental advances in the following
areas:
- LARGE-SCALE SYSTEMS : exploiting parallelism at large scale,
the coordination of large numbers of processing elements,
synchronization and communication at large scale, programming
models and productivity, novel systems, the use of processors
in memory (PIMs), parallelism in emerging technologies, and
future trends.

- MULTI-CORE : utilization of increased parallelism on a single
chip (MPP on a chip such as the Cell and GPUs), the possible
integration of these into large-scale systems, and dealing with
the resulting hierarchical connectivity.

- MONITORING, ANALYSIS AND MODELING : tools and techniques for
gathering performance, power, thermal, reliability, and other
data from existing large scale systems, analyzing such data
offline or in real time for system tuning, and modeling of
similar factors in projected system installations.

- ENERGY MANAGEMENT: Techniques, strategies, and experiences
relating to the energy management and optimization of
large-scale systems.

- APPLICATIONS : novel algorithmic and application methods,
experiences in the design and use of applications that scale to
large-scales, overcoming of limitations, performance analysis
and insights gained.

- WAREHOUSE COMPUTING: dealing with the issues in advanced
datacenters that are increasingly moving from co-locating many
servers to having a large number of servers working cohesively,
impact of both software and hardware designs and optimizations
to achieve best cost-performance efficiency.

Results of both theoretical and practical significance will be
considered, as well as work that has demonstrated impact at
small scale that will also affect large-scale systems. Work may
involve algorithms, languages, or various types of models.

Papers should not exceed eight single-spaced pages (including
figures, tables, and references) using a 12-point font on
8.5x11-inch pages. Submissions in PostScript or PDF should be
made using EDAS. Informal enquiries can be made to the workshop
organizers. Submissions will be judged on correctness,
originality, technical strength, significance, presentation
quality, and appropriateness. Submitted papers should not have
appeared in, or be under consideration for, another venue.


Submission deadline: January 21st 2014 (Extended)
Notification of acceptance: February 20th 2014
Camera-Ready Papers due: March 14th 2014


Darren J. Kerbyson Pacific Northwest National Laboratory
Ram Rajamony IBM Austin Research Lab
Charles Weems University of Massachusetts


Johnnie Baker Kent State University
Alex Jones University of Pittsburgh
H.J. Siegel Colorado State University
Guangming Tan ICT, Chinese Academy of Sciences
Lixin Zhang ICT, Chinese Academy of Sciences


Pavan Balaji Argonne National Laboratory, USA
Kevin J. Barker Pacific Northwest National Laboratory
Laura Carrington San Diego Supercomputer Center, USA
I-Hsin Chung IBM T.J. Watson Research Lab, USA
Tim Germann Los Alamos National Laboratory, USA
Georg Hager University of Erlangen, Germany
Simon Hammond Sandia National Laboratory, USA
Martin Herbordt Boston University, USA
Daniel Katz University of Chicago, USA
Celso Mendes University of Illinois Urbana-Champaign, USA
Bernd Mohr Forschungszentrum Juelich, Germany
Phil Roth Oak Ridge National Laboratory, USA
Jose Sancho Barcelona Supercomputing Center, Spain
Gerhard Wellein University of Erlangen, Germany
Pat Worley Oak Ridge National Laboratory, USA
Ulrike Yang Lawrence Livermore National Laboratory

Workshop Webpage:
