A-MOST 2020 : Advances in Model-Based Software Testing
Conference Series : Advances in Model-Based Software Testing
Call For Papers
The increasing complexity, criticality and pervasiveness of software result in new challenges for testing. Model-Based Testing (MBT) continues to be an important research area, where new approaches, methods and tools make MBT techniques (for automatic test case generation) more deployable and useful for industry than ever. Following the success of previous editions, the goal of the A-MOST workshop is to bring researchers and practitioners together to discuss the state of the art, current practice and future prospects in MBT. Topics and sub-topics include, but are not limited to:
MODELS
Models for component, integration and system testing
(Hybrid) embedded system models
Models for orchestration and choreography of services
Executable models, simulation and model transformations
Environment and use models
Models for variant-rich and highly configurable systems
Machine-learning based models
PROCESSES, METHODS AND TOOLS
Model-based test generation algorithms
Application of model checking techniques to MBT
Symbolic execution-based techniques
Tracing from requirements models to test models
Performance and predictability of MBT
Test model evolution during the software life-cycle
Risk-based approaches for MBT
Generation of testing infrastructures from models
Combinatorial approaches for MBT
Derivation of test models by reverse engineering and machine learning
EXPERIENCES AND EVALUATION
Estimating dependability (e.g., security, safety, reliability) using MBT
Coverage metrics and measurements for structural and (non-)functional models
Cost of testing, economic impact of MBT
Empirical validation, experiences, case studies using MBT
The role of MBT in automata learning (model inference, model mining)
Generating training data for machine learning
Model-based security testing
Statistical model checking
## Submission Format
### Full and Short Papers
Papers should not exceed 8 pages for full papers or 4 pages for short experience and position papers, excluding references. This is not a strict limit; if you need more space, contact the chairs. Each submitted paper must conform to the IEEE two-column publication format. Papers will be reviewed by at least three members of the program committee. Accepted papers will be published in the IEEE Digital Library.
### Journal First (new)
The aim of journal-first papers in this category is to further enrich the program of A-MOST, as well as to provide a more flexible path to publication and dissemination of original research in model-based testing. The published journal paper must satisfy the following three criteria:
It should be clearly within the scope of the workshop.
It should be recent: accepted and made publicly available in a journal (online or in print) on or after 1 January 2017.
It should not have been presented at, and must not be under consideration for, journal-first tracks of other conferences or workshops.
The 2-page submission should provide a concise summary of the published journal paper.
Journal-first submissions must be marked as such in the submission's title, and must explicitly include full bibliographic details (including a DOI) of the journal publication they are based on. Submissions will be judged on the above criteria, as well as on how well they would complement the workshop's technical program.