AITest 2022 : The IEEE Fourth International Conference On Artificial Intelligence Testing
Call For Papers
Artificial Intelligence (AI) technologies are widely used in computer applications to perform tasks such as
monitoring, forecasting, recommendation, prediction, and statistical reporting. They are deployed in a variety
of systems including driverless vehicles, robot-controlled warehouses, financial forecasting applications,
and security enforcement, and are increasingly integrated with cloud/fog/edge computing, big data analytics,
robotics, Internet-of-Things, mobile computing, smart cities, smart homes, intelligent healthcare, etc. In
spite of this dramatic progress, the quality assurance of existing AI application development processes is
still far from satisfactory, and the demand for demonstrable levels of confidence in such
systems is growing. Software testing is a fundamental, effective and recognized quality assurance method
which has shown its cost-effectiveness to ensure the reliability of many complex software systems.
However, the adaptation of software testing to the peculiarities of AI applications remains largely
unexplored and requires extensive research. On the other hand, the availability of AI
technologies provides an exciting opportunity to improve existing software testing processes, and recent
years have shown that machine learning, data mining, knowledge representation, constraint optimization,
planning, scheduling, multi-agent systems, etc. have real potential to positively impact software testing.
Recent years have seen a rapid growth of interest in testing AI applications as well as in applying AI
techniques to software testing. This conference provides an international forum for researchers and
practitioners to exchange novel research results, to articulate the problems and challenges from practices,
to deepen our understanding of the subject area with new theories, methodologies, techniques, process
models, etc., and to improve practice with new tools and resources.
Topics Of Interest
The conference invites papers reporting original research on AI testing and best practices in industry, as
well as challenges in practice and research. Topics of interest include (but are not limited
to) the following:
Testing AI applications
- Methodologies for testing, verification and validation of AI applications
  o Process models for testing AI applications, and quality assurance activities and procedures
  o Quality models of AI applications and quality attributes of AI applications, such as
    correctness, reliability, safety, security, accuracy, precision, comprehensibility, etc.
  o Whole lifecycle of AI applications, including analysis, design, development, deployment,
    operation and evolution
  o Quality evaluation and validation of the datasets that are used for building the AI applications
- Techniques for testing AI applications
  o Test case design, test data generation, test prioritization, test reduction, etc.
  o Metrics and measurements of the adequacy of testing AI applications
  o Test oracles for checking the correctness of AI applications on test cases
- Tools and environments for automated and semi-automated testing of AI applications,
  supporting various testing activities and the management of testing resources
- Specific concerns of software testing for specific types of AI technologies and AI applications
Applications of AI techniques to software testing
- Machine learning applications to software testing, such as test case generation, test
  effectiveness prediction and optimization, test adequacy improvement, test cost reduction, etc.
- Constraint programming for test case generation and test suite reduction
- Constraint scheduling and optimization for test case prioritization and test execution
- Crowdsourcing and swarm intelligence in software testing
- Genetic algorithms, search-based techniques and heuristics for the optimization of testing
Data quality evaluation for AI applications
- Automatic data validation tools
- Quality assurance for unstructured training data
- Large-scale unstructured data quality certification
- Techniques for testing deep neural network learning, reinforcement learning and graph learning
Regular Papers (8 Pages) And Short Papers (2 Pages)
We welcome submissions of both regular research papers (limited to 8 pages), which describe original and
significant work or report on case studies and empirical research, and short papers (limited to 2 pages) that
describe late-breaking research results or work in progress with timely and innovative ideas.
AI Testing In Practice (8 Pages)
The AI Testing in Practice Track provides a forum for networking, exchanging ideas and innovative or
experimental practices to address SE research that directly impacts the practice of software testing for AI.
Tool Demo Track (4 Pages)
The tool track provides a forum to present and demonstrate innovative tools and/or new benchmarking
datasets in the context of software testing for AI.
All papers must be submitted electronically in PDF format using the IEEE Computer Society Proceedings
format (two columns, single-spaced, 10pt font). Papers must not have been accepted for publication elsewhere, nor be under
submission to another conference or journal. Each paper will be reviewed by at least three members of the
Program Committee, using a single-blind reviewing procedure. At least one author of each accepted paper
must register for the conference and confirm that they will present the paper in person.