Call for Papers for GPGPU-17: The 17th Workshop on General Purpose Processing using GPU
Held in cooperation with PPoPP’25
Half-Day Workshop (March 1 or 2, 2025), Las Vegas, NV, USA
https://mocalabucm.github.io/gpgpu2025/

Overview:
GPUs are delivering more and more of the computing power required by modern society. With the growing popularity of massively parallel devices, users demand better performance, programmability, reliability, and security. The goal of this workshop is to provide a forum to discuss massively parallel applications, environments, platforms, and architectures, as well as infrastructures that facilitate related research. Authors are invited to submit papers of original research in the general area of GPU computing and architectures.

Topics include, but are not limited to:

- GPU Architecture and Hardware
  - Next-generation GPU architectures
  - Energy-efficient GPU designs
  - Scalable multi-GPU systems
  - GPU memory hierarchies and management
- Programming Models and Compilers
  - High-level programming abstractions for GPUs
  - Compiler optimizations for GPU codes
  - Source-to-source translations and tools
  - Debugging and profiling tools for GPUs
- GPU Algorithms and Data Structures
  - Parallel algorithms tailored for GPUs
  - Data structures optimized for GPU memory hierarchies
  - Algorithmic primitives and building blocks
- Performance Optimization Techniques
  - Performance modeling and benchmarking
  - Auto-tuning and performance portability
  - Techniques for reducing communication overheads
- GPU Applications
  - Case studies of real-world GPU applications
  - GPU applications in scientific computing, machine learning, large language models, graphics, and emerging fields (e.g., quantum, neuromorphic, bioinformatics and genomics)
  - Performance comparisons between GPUs and other parallel computing platforms
- Integration of GPUs with Other Technologies
  - GPU and FPGA co-processing
  - Hybrid systems (e.g., CPU-GPU, GPU-TPU integration)
  - Cloud-based GPU computing
- Challenges and Future Trends
  - Reliability and fault tolerance in GPU systems
  - Security and privacy concerns in GPU computing
  - The future of heterogeneity in computing platforms
  - GPU programming and architecture education

Important Dates (tentative; all deadlines 11:59 pm, Anywhere on Earth)
Papers due: December 16, 2024
Notification: January 20, 2025
Final paper due: February 17, 2025

Submission Guidelines
Full paper submissions must be in PDF format for A4 or US letter-size paper. They must not exceed 6 pages (excluding references) in the standard ACM two-column SIGPLAN format (review mode, sigplan template). Authors may choose whether to reveal their identity in the submission. Templates for the ACM format are available for Microsoft Word and LaTeX at https://www.acm.org/publications/proceedings-template.
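
For authors preparing their paper in LaTeX, a minimal skeleton consistent with these guidelines might look like the sketch below. This is only an illustrative assumption based on the standard acmart class from the ACM template page linked above: the sigplan option selects the two-column SIGPLAN style, review enables review mode, and adding anonymous hides author identity for authors who choose not to reveal it. The bibliography file name references.bib is a hypothetical placeholder.

  % Minimal sketch of a submission skeleton, assuming the acmart class from
  % https://www.acm.org/publications/proceedings-template.
  % 'sigplan' = two-column SIGPLAN style, 'review' = review mode;
  % add 'anonymous' to the option list to hide author identity.
  \documentclass[sigplan,review]{acmart}

  \begin{document}

  \title{Paper Title}
  \author{Author Name}
  \affiliation{%
    \institution{Institution}
    \city{City}
    \country{Country}}

  \begin{abstract}
  One-paragraph abstract.
  \end{abstract}

  \maketitle

  \section{Introduction}
  Body text, limited to 6 pages excluding references.

  \bibliographystyle{ACM-Reference-Format}
  \bibliography{references}  % hypothetical .bib file name

  \end{document}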