posted by organizer: siwasaki

AsHES 2023 : The Thirteenth International Workshop on Accelerators and Hybrid Exascale Systems


Link: https://www.ashes-hpc.org/2023/
 
When: May 15, 2023 - May 19, 2023
Where: St. Petersburg, Florida, USA
Submission Deadline: Jan 26, 2023
Notification Due: Feb 23, 2023
Final Version Due: Mar 7, 2023
Categories: hardware architecture, accelerator, HPC, heterogeneous computing
 

Call For Papers

The Thirteenth International Workshop on Accelerators and Hybrid Exascale Systems (AsHES)
https://www.ashes-hpc.org/2023/

To be held in conjunction with 37th IEEE International Parallel and Distributed Processing Symposium in St. Petersburg, Florida, USA (May 15 - May 19, 2023)

Important Dates (AoE)
========================================
Paper Submission: Jan. 26, 2023 (Extended)
Paper Notification: Feb. 23, 2023
Camera-Ready Submission: Mar. 7, 2023

Workshop Scope and Goals
========================================

The computing landscape has been going through an ever-increasing rate of change and innovation, driven by the relentless need to improve energy efficiency, memory bandwidth, and compute throughput at all levels of the architectural hierarchy. Moreover, the amount of data that today's systems must organize poses new challenges that can no longer be met with classical, homogeneous designs. Improvements in all of these areas have made heterogeneous systems the norm rather than the exception.

Heterogeneous computing leverages a diverse set of compute units (CPU, GPU, FPGA, TPU, DPU, etc.) and memory technologies (HBM, persistent memory, coherent PCIe protocols, etc.), organized into hierarchical systems, to accelerate the execution of a diverse set of applications. Emerging and existing areas such as AI, Big Data, cloud computing, edge computing, real-time systems, and high-performance computing have seen real benefits from heterogeneous computer architectures. In addition, a new wave of accelerators based on dataflow architectures rather than the traditional von Neumann model is sure to bring additional challenges and opportunities.

These new heterogeneous architectures often also require new applications and programming models in order to fully exploit their capabilities. This workshop focuses on understanding the implications of heterogeneous designs at all levels of the computing system stack, including hardware, compiler optimizations, application porting, and the development of programming environments for current and emerging systems in all the above-mentioned areas. It seeks to ground heterogeneous system design research through studies of application kernels and/or whole applications, and to shed light on new tools, libraries, and runtime systems that improve the performance and productivity of applications on heterogeneous systems.

The goal of this workshop is to bring together researchers and practitioners at the forefront of heterogeneous computing to explore the opportunities and challenges in future heterogeneous system design, and thus to help shape the next trends in this area.
Topics of interest include (but are not limited to):

- Applications for hybrid/heterogeneous systems;
- Strategies for programming heterogeneous systems using high-level models such as OpenMP, OpenACC, SYCL, oneAPI, Kokkos, and RAJA, and low-level models such as OpenCL and CUDA;
- Methods and tools to tackle challenges from heterogeneity in AI/ML/DL, BigData, Cloud Computing, Edge-Computing, Real-time Systems, and High-Performance Computing;
- Strategies for application behavior characterization and performance optimization for accelerators;
- Techniques for optimizing kernels for execution on GPGPU, FPGA, TPU, DPU and new emerging heterogeneous platforms;
- Models of application performance on heterogeneous and accelerated HPC systems;
- Compiler optimizations and tuning for heterogeneous systems, including parallelization, loop transformations, locality optimizations, and vectorization;
- Implications of workload characterization in heterogeneous and accelerated architecture design;
- Benchmarking and performance evaluation for heterogeneous systems at all levels of the system stack;
- Tools and techniques to address both performance and correctness to assist application development for accelerators and heterogeneous processors;
- System software techniques to abstract application domain-specific functionalities for accelerators;
- Innovative use of heterogeneous computing in AI for science or optimizations for AI;
- Design and use of domain-specific functionalities on accelerators;
- Hybrid neuromorphic computing systems;
- In-memory architectures;
- Dataflow architectures;


Paper Tracks and Submission Guidelines
========================================

There are two paper tracks available for AsHES '23:
1) Full paper track (8-10 pages, including citations);
2) Short paper track (maximum of 4 pages, including citations), meant to highlight early investigations of innovative ideas.

Questions?
========================================
Please send any queries about the AsHES workshop to ashes@mcs.anl.gov
