HiPEAC 2014 : 9th International Conference on High-Performance and Embedded Architectures and Compilers
Conference Series : High Performance Embedded Architectures and Compilers
Call For Papers
Call for Papers to ACM TACO (http://taco.acm.org)
The HiPEAC conference is the premier European forum for experts in computer architecture, programming models, compilers and operating systems for embedded and general-purpose systems. Emphasis is placed on cross-cutting research (embedded/high-performance, architecture/software stack, etc.) and on innovative ideas (new programming models, novel architectural approaches to cope with technology constraints or new technologies, etc.).
The 9th HiPEAC conference will take place in Vienna, Austria from Monday, January 20 to Wednesday, January 22, 2014. Associated workshops, tutorials, special sessions, a large poster session and an exhibition hall will run in parallel with the conference.
Papers will be selected through the review process of ACM TACO, the ACM Transactions on Architecture and Code Optimization. Prospective authors may submit their original papers to TACO at any time; submitting before the paper deadline of June 14, 2013 allows two rounds of reviews before the conference paper track cut-off date of November 15, 2013.
More detailed information about the new publication model, ACM TACO 2.0, is available at: http://www.hipeac.net/conference/vienna/publicationmodel
Topics of interest include, but are not limited to:
Processor, memory, and storage systems architecture
Parallel, multi-core and heterogeneous systems
Architectural support for programming productivity
Power-, performance- and implementation-efficient designs
Reliability and real-time support in processors, compilers and run-time systems
Application-specific processors, accelerators and reconfigurable processors
Architecture and programming environments for GPU-based computing
Simulation and evaluation methodology
Architectural and run-time support for programming languages
Programming models, frameworks and environments for exploiting parallelism
Program characterization and analysis techniques
Dynamic compilation, adaptive execution, and continuous profiling/optimization
Code size/memory footprint optimizations