HotCloudPerf 2020: The 3rd Workshop on Hot Topics in Cloud Computing Performance
Call for Papers for
=== The Third Workshop on Hot Topics in Cloud Computing Performance ===
“Cloud Performance Variability and Its Impact on Application Performance”
VENUE: workshop held in conjunction with ICPE,
April 20 or 21, Edmonton, Canada
IMPORTANT DATES (Anywhere on Earth)
January 25, 2020 Papers due
February 17, 2020 Author Notification
February 25, 2020 Camera-ready deadline, author registration
April 20 or 21, 2020 Workshop Day
WORKSHOP THEME AND BACKGROUND
Cloud computing is emerging as one of the most profound changes in the way we build and use IT. The use of global services in public clouds is increasing, and the lucrative and rapidly growing global cloud market already supports over 1 million IT-related jobs. However, it remains challenging to make the IT services offered by public and private clouds performant, in a broad sense, and efficient. Emerging architectures, techniques, and real-world systems include hybrid deployment, serverless operation, everything as a service, complex workflows, auto-scaling and -tiering, etc. It is unclear to what extent traditional performance engineering, software engineering, and system design and analysis tools can help with understanding and engineering these emerging technologies. The community also needs practical tools and powerful methods to address hot topics in cloud computing performance.
Responding to this need, the HotCloudPerf workshop proposes a meeting venue for academics and practitioners, from experts to trainees, in the field of cloud computing performance. The workshop aims to engage this community, and to lead to the development of new methodological aspects for gaining deeper understanding not only of cloud performance, but also of cloud operation and behavior, through diverse quantitative evaluation tools, including benchmarks, metrics, and workload generators. The workshop focuses on novel cloud properties such as elasticity, performance isolation, dependability, and other non-functional system properties, in addition to classical performance-related metrics such as response time, throughput, scalability, and efficiency.
Each year, the workshop chooses a focus theme to explore; for 2020, the theme is “the performance variability of cloud datacenters and the implications of such phenomena on application performance.” Articles focusing on this topic are particularly encouraged for HotCloudPerf-2020.
The HotCloudPerf workshop is technically sponsored by the Standard Performance Evaluation Corporation (SPEC)’s Research Group (RG), and is organized annually by the RG Cloud Group. HotCloudPerf has emerged from the series of yearly meetings organized by the RG Cloud Group since 2013. The RG Cloud Group takes a broad approach, relevant for both academia and industry, to cloud benchmarking, quantitative evaluation, and experimental analysis.
WORKSHOP SCOPE AND TOPICS
Topics of the focus-theme for 2020, “Cloud Performance Variability and Its Impact on Application Performance”:
1. Serverless computing platforms and microservices in cloud datacenters.
2. Understanding and engineering dependability and non-traditional performance requirements for cloud datacenters.
3. Studying performance variability in cloud datacenters and/or solutions for taming performance variability in clouds.
4. The implications of performance variability on performance reproducibility.
5. Running specialized services with performance guarantees, such as Business Process as a Service and Data as a Service.
6. Extending the cloud datacenter to support the non-functional requirements of converged workloads, including IoT, HPC, and big data.
7. Experience reports and use-cases with understanding and engineering performance in the cloud datacenter.
8. Performance variability and/or guarantees in serverless/FaaS environments.
Long-running topics of the HotCloudPerf workshop include, but are not limited to:
1. Empirical performance studies in cloud computing environments, applications, and systems, including observation, measurement, and surveys.
2. Comparative performance studies and benchmarking of cloud environments, applications, and systems.
3. Performance analysis using modeling and queueing theory for cloud environments, applications, and systems.
4. Simulation-based studies for all aspects of cloud computing performance.
5. Tuning and auto-tuning of systems operating in cloud environments, e.g., auto-scaling of resources and auto-tiering of data, optimized resource deployment.
6. Software patterns and architectures for engineering cloud performance, e.g., serverless.
7. Experience with and analysis of performance of cloud deployment models, including IaaS/PaaS/SaaS/FaaS.
8. End-to-end performance engineering for pipelines and workflows in cloud environments, or of applications with non-trivial SLAs.
9. Tools for monitoring and studying cloud computing performance.
10. General and specific methods and methodologies for understanding and engineering cloud performance.
ARTICLE SUBMISSION GUIDELINES
We solicit the following types of contributions:
1. Talk only: extended abstract limited to 1 page (no formatting restrictions)
2. Full paper: limited to 6 pages (double column, ACM conference format)
3. Short paper: limited to 3 pages (double column, ACM conference format)
Contributions in the 1st category (talk only) may have already been (partially) presented at other events or in publications. Contributions in the 2nd and 3rd categories (technical papers) must represent original and unpublished work that is not currently under review. Full papers may report on original research, lessons learned from realizing an approach, or experiences with transferring a research prototype into practice. Short papers may report on work in progress, describe a tool or demo, or present a vision or position motivating the community to address new challenges.
Articles and talk-only contributions must be submitted via the HotCloudPerf-2020 EasyChair system.
Articles must use the ACM conference format. Each valid submission will receive at least three (3) peer reviews. Presented papers will be published by ACM and included in the ACM Digital Library. Adhering to ACM guidelines for conferences, ICPE requires that at least one author of each accepted paper attend the workshop and present the paper.
WORKSHOP ORGANIZERS
Alexandru Uta (VU Amsterdam, the Netherlands)
Dmitry Duplyakin (U. of Utah, USA)
Cristina Abad (ESPOL, Ecuador)
Nikolas Herbst (U. Würzburg, Germany)
Alexandru Iosup (VU Amsterdam, the Netherlands)
PROGRAM COMMITTEE
Alexandru Uta, VU Amsterdam
Dmitry Duplyakin, U. Utah
Cristina Abad, ESPOL
Nikolas Herbst, U. Würzburg
Alexandru Iosup, VU Amsterdam
Bogdan Ghit, Databricks
Joel Scheuner, Chalmers
André van Hoorn, U. of Stuttgart
Rich Wolski, UCSB
Geoffrey Fox, Indiana University
Lucy Cherkasova, ARM Research
Marta Beltran, Universidad Rey Juan Carlos
Chen Wang, IBM
Wilhelm Hasselbring, U. Kiel
Ahmed Ali-Eldin, UMass