CPHPCA 2016 : 2nd Workshop on Complex Problems over High Performance Computing Architectures
Call For Papers
Focus topic “High Performance Computing Programming” in conjunction with CSE 2016.
Paris, France, August 24th – 26th, 2016
The main goal of CPHPCA is to provide a forum to discuss how problems posing important challenges and high computational requirements can be mapped onto current and upcoming high performance architectures. CPHPCA will be held in conjunction with the 18th IEEE International Conference on Computational Science and Engineering (CSE'16).
The importance of high performance computing keeps increasing, and it has become one of the foremost fields of computing research. This growth raises many issues, in the form of new network topologies and technologies (faster data access), new low-power architectures, new programming models, etc. It forces us to adapt our codes, or create new ones, to take advantage of the latest computational features.
This workshop focuses on the challenges of adapting and implementing complex, large-scale problems on platforms composed of a high number of cores, dealing with communication, programming, heterogeneous architectures, load balancing, benchmarking, etc. Today, the difficulty of the problems to be implemented is increasing considerably; large data and computational requirements, dynamic behavior, numerical simulations, and automatic modeling are just a few examples of this kind of problem.
The goal of this workshop is to bridge the gap between the theory of complex problems (computational fluid dynamics, bio-informatics, linear algebra, big data computing, deep-learning, data mining, ...) and high performance computing platforms by proposing new trends/directions in programming.
Authors are invited to submit manuscripts that present original and unpublished research in all areas related to the programming of complex problems via parallel and distributed processing. Works focused on emerging architectures and big computing challenges are especially welcome.
Relevant topics include, but are not limited to:
· New strategies to improve performance
· Code adaptation to take advantage of the latest features
· Numerical modeling for complex problems
· Communication, synchronization, load balancing
· Benchmarking, performance and numerical accuracy analysis
· Scalability of algorithms and data structures
· New programming models
· Auto-tuning computing systems
· High level abstraction tools
Manuel Ujaldón (CUDA Fellow)
Manuel Ujaldon is Professor of Computer Architecture at the University of Malaga (Spain) and a CUDA Fellow at Nvidia. He worked in the 1990s on parallelizing compilers, finishing his PhD in 1996 by developing a data-parallel compiler for sparse-matrix and irregular applications. During this period, he was part of the HPF and MPI Forums, and worked as a post-doc in the Computer Science Department of the University of Maryland (USA). He joined the GPGPU movement early in 2003 using Cg, and wrote the first book in Spanish about programming GPUs for general-purpose computing. He adopted CUDA when it was first released, focusing since then on image processing and biomedical applications. Over the past five years, he has published more than 50 papers in journals and international conferences in these two areas.
Dr. Ujaldon has held an NVIDIA Academic Partnership (2008-2011), an NVIDIA Teaching Center since 2011, and an NVIDIA Research Center since 2012, and was finally named a CUDA Fellow. Over the past four years, he has taught around 60 courses on CUDA programming worldwide sponsored by Nvidia, including more than 10 keynotes and tutorials at ACM/IEEE conferences.
GPGPU: Challenges ahead
After a decade of use as hardware accelerators, GPUs now constitute a solid alternative for high performance computing at an affordable cost. The increasing volumes of data managed by large-scale applications make GPUs very attractive for scientific computing, deploying SIMD parallelism in an unprecedented way to produce impressive speed-up factors. This talk reviews the current achievements of many-core GPUs and the future hardware enhancements on Nvidia's roadmap to leverage exascale computing on heterogeneous CPU-GPU platforms: Maxwell (2015), which unveils unified memory, and Pascal (2017), which introduces stacked DRAM (3D memory). In the final part, we discuss scenarios where speed-ups can be maximized on future GPUs.
Guidelines for submission of contributions to workshops:
CSE 2016 Workshop Proceedings will be published by the IEEE Computer Society through the IEEE Xplore Digital Library. Each paper should not exceed 6 pages, including figures and references, and must follow the IEEE Computer Society Proceedings Manuscript style.
After the conference, selected papers will be invited for a special issue of the journal Scalable Computing: Practice and Experience.
Submission: May 13, 2016
Notification: June 24, 2016
Camera-ready: July 15, 2016
Workshop: To be announced
P. Valero-Lara, University of Manchester, UK
F. L. Pelayo, Univ. of Castilla-La Mancha, Spain
Leonel Sousa, The Technical University of Lisbon, Portugal
Stanislav Sedukhin, The University of Aizu, Japan
José Ignacio Aliaga Estellés, Jaume I Univ., Spain
J. Daniel García, Carlos III Univ., Spain
Violeta Holmes, University of Huddersfield, UK
Ivan Lirkov, Bulgarian Academy of Sciences, Bulgaria
Javier García Blas, Carlos III Univ., Spain
Marcin Paprzycki, Systems Research Institute of the Polish Academy of Sciences, Poland
Yuehai Xu, VMware Inc., USA
Manuel Prieto Matías, Complutense Univ. of Madrid, Spain
Daniel Rubio, High Performance Computing Center Stuttgart (HLRS), Germany
Miguel Cárdenas, Research Centre for Energy, Environment and Technology (CIEMAT), Spain
Omar Abdelkafi, Université de Haute-Alsace, France
Abel Francisco Paz Gallardo, CETA-CIEMAT, Spain
Qusay Fadhel, The Mansoura University, Egypt
José Luis Sánchez García, Univ. of Castilla-La Mancha, Spain
Ricardo J. Barrientos, The University of La Frontera, Chile