posted by organizer: petermkruse

ToCaMS 2020 : Workshop on Testing of Configurable and Multi-variant Systems


Link: https://www.xivt.org/tocams
 
When: Mar 23, 2020 - Mar 27, 2020
Where: Porto, Portugal
Submission Deadline: Jan 12, 2020
Categories: software testing, software product lines, variability testing, configurable systems
 

Call For Papers

ToCaMS 2020, the first ICST Workshop on Testing of Configurable and Multi-variant Systems, will focus on methods and tools for the automated generation and execution of tests for software-based systems that are highly configurable and customizable. As more and more software-based products and services become available in many different variants, new challenges arise for software quality assurance. In this workshop, both foundational and practical testing problems will be discussed, and possible solutions will be presented from academic and industrial perspectives.
Submission Guidelines

All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:

Research papers describing original research, new results, methods, and tools in testing of variable and configurable systems. Research papers are limited to 12 pages.
Experience papers describing case studies, applications, experiences, and best practices in testing of variable and configurable systems. Experience papers are limited to 4-8 pages.

All accepted papers will be published in the IEEE ICSTW proceedings.
List of Topics

Due to increasing market diversification and customer demand, more and more software-based products and services are customizable or are designed in many different variants. This raises new challenges for software quality assurance:

How should the variability be modelled to ensure that all features are tested?
Is it better to test selected variants at a concrete level, or can the generic software and baseline be tested abstractly?
Can knowledge-based AI techniques be used to identify and prioritize test cases?
How can the quality of a generic test suite be assessed?
What are appropriate coverage criteria for configurable modules?
If it is impossible to test all possible variants, which products and test cases should be selected for test execution?
Can security-testing methods be lifted to an abstract level?
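To make the variant-selection question concrete, one widely used combinatorial strategy is pairwise (2-wise) sampling: instead of testing every configuration, pick a small subset of configurations such that every pair of feature values is exercised at least once. The sketch below uses a greedy selection over a small, purely hypothetical feature model; it is an illustration of the general technique, not a method prescribed by the workshop.

```python
from itertools import combinations, product

# Hypothetical feature model: each feature with its possible values.
features = {
    "os": ["linux", "windows"],
    "db": ["sqlite", "postgres"],
    "encryption": ["on", "off"],
}

def all_pairs(features):
    """Every (feature, value) pair of pairs that a 2-wise suite must cover."""
    pairs = set()
    for (f1, vs1), (f2, vs2) in combinations(features.items(), 2):
        for v1, v2 in product(vs1, vs2):
            pairs.add(((f1, v1), (f2, v2)))
    return pairs

def greedy_pairwise(features):
    """Repeatedly pick the configuration covering the most uncovered pairs."""
    names = list(features)
    candidates = [dict(zip(names, vs)) for vs in product(*features.values())]
    uncovered = all_pairs(features)
    selected = []
    while uncovered:
        best = max(candidates,
                   key=lambda a: sum(all(a[f] == v for f, v in p)
                                     for p in uncovered))
        uncovered -= {p for p in uncovered
                      if all(best[f] == v for f, v in p)}
        selected.append(best)
    return selected

configs = greedy_pairwise(features)
print(f"{len(configs)} of {2 ** 3} configurations cover all value pairs")
```

For this toy model the greedy heuristic needs only a fraction of the 8 full configurations; for realistic feature models with constraints, dedicated covering-array tools are typically used instead.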
In this workshop, these and related questions will be discussed from both a practical and a foundational viewpoint. It should be of interest to researchers from the software testing community as well as testing practitioners from industry who want to learn about the latest methods and tools in the field. Besides new theoretical results, we also welcome problem statements, case studies, experience reports, tool presentations, and survey papers.

Following is a non-exhaustive list of topics to be discussed at the workshop:

Test modelling,
test generation,
test prioritization,
test selection,
test execution,
test evaluation, and
test assessment

for variable and configurable systems.

Organizing committee

Jeremy Bradbury
Peter M. Kruse
Mehrdad Saadatmand
Holger Schlingloff

Venue

The workshop will be held in conjunction with ICST 2020 in Porto, Portugal, Mon 23 - Fri 27 March 2020.
