
JSS VSI:AI-testing-and-analysis 2024 : [JSS - Elsevier] Special Issue on Automated Testing and Analysis for Dependable AI-enabled Software and Systems


Link: https://www.sciencedirect.com/journal/journal-of-systems-and-software/about/call-for-papers?fbclid=IwAR3PgrP2T65w7ZY2GPSJ3RXVAPxRZQWB2XDcNuUPW6d-16sMGI-74M5V9vA#automated-testing-a
 
When Jan 1, 2024 - Dec 31, 2024
Where NA
Submission Deadline May 30, 2024
Notification Due Jun 30, 2024
Categories    SE4AI   automated testing   ai testing   dependability
 

Call For Papers

====================
Journal of Systems and Software (JSS),
Special Issue on
** Automated Testing and Analysis for Dependable AI-enabled Software and Systems **
====================

** Guest editors **

Matteo Camilli, Politecnico di Milano, Italy

Michael Felderer, German Aerospace Center (DLR) and University of Cologne, Cologne, Germany

Alessandro Marchetto, University of Trento, Italy

Andrea Stocco, Technical University of Munich (TUM) and fortiss GmbH, Germany

** Special Issues Editors **

Laurence Duchien and Raffaela Mirandola

** Editors-in-Chief **

Paris Avgeriou and David Shepherd


** Special issue information **

The advancements in Artificial Intelligence (AI) and its integration into various domains have led to the development of AI-enabled software and systems that offer unprecedented capabilities. Technologies ranging from computer vision to natural language processing, from speech recognition to recommender systems, enhance modern software and systems with the aim of providing innovative services as well as rich, customized experiences to users. Such technologies are also changing software and systems engineering methods and tools, especially quality assurance methods, which require deep restructuring due to the inherent differences between AI and traditional software.

AI-enabled software and systems are often large-scale, data-driven, and more complex than traditional software and systems. They are typically heterogeneous, autonomous, and probabilistic in nature, and they lack a transparent view of their internal mechanics. Furthermore, they are typically optimized and trained for specific tasks and, as such, may fail to generalize their knowledge to the new situations that often emerge in dynamic environments. These systems strongly demand safety, trustworthiness, security, and other dependability attributes: high-quality data and AI components must be safely integrated, verified, maintained, and evolved. Indeed, the potential impact of a failure or service interruption cannot be tolerated in business-critical applications (e.g., chatbots and virtual assistants, facial recognition for authentication and security, industrial robots) or safety-critical applications (e.g., autonomous drones, collaborative robots, self-driving cars, and autonomous vehicles for transportation).

The scientific community is therefore studying new, cost-effective verification and validation techniques tailored to these systems. In particular, automated testing and analysis is a very active area that has produced notable advances toward the promise of dependable AI-enabled software and systems.

This special issue welcomes contributions on approaches, techniques, tools, and experience reports for adopting, creating, and improving automated testing and analysis of AI-enabled software and systems, with a special focus on dependability aspects such as reliability, safety, security, resilience, scalability, usability, trustworthiness, and compliance with standards.

Topics of interest include, but are not limited to:

Verification and validation techniques and tools for AI-enabled software and systems.
Automated testing and analysis approaches, techniques, and tools for AI-enabled software and systems.
Fuzzing and search-based testing for AI-enabled software and systems.
Metamorphic testing for AI-enabled software and systems (see the illustrative sketch after this list).
Techniques and tools to assess the dependability of AI-enabled software and systems, such as reliability, safety, security, resilience, scalability, usability, trustworthiness, and compliance with standards in critical domains.
Fault and vulnerability detection, prediction, and localization techniques and tools for AI-enabled software and systems.
Automated testing and analysis to improve the explainability of AI-enabled software and systems.
Program analysis techniques for AI-enabled software and systems.
Regression testing and continuous integration for AI components.
Automated testing and analysis of generative AI, such as Large Language Models (LLMs), chatbots, and text-to-image AI systems.
Verification and validation techniques and tools for specific domains, such as healthcare, telecommunication, cloud computing, mobile, big data, automotive, industrial manufacturing, robotics, cyber-physical systems, Internet of Things, education, social networks, and context-aware software systems.
Empirical studies, applications, and case studies in verification and validation of AI-enabled software and systems.
Experience reports and best practices in adopting, creating, and improving testing and analysis of AI-enabled software and systems.
Future trends in AI testing and analysis, such as the integration of AI technologies in test case generation and validation of AI-enabled software and systems.
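
To illustrate the kind of technique in scope, the following is a minimal, hypothetical Python sketch of a metamorphic test for an image classifier. The toy classify function and the brightness-invariance relation are assumptions made purely for this example, not part of the call.

import numpy as np

def classify(image: np.ndarray) -> int:
    # Stand-in for the AI component under test (hypothetical toy rule).
    return int(image.mean() > 0.5)

def brightness_relation_holds(image: np.ndarray, delta: float = 0.01) -> bool:
    # Metamorphic relation: a small uniform brightness shift should not
    # change the predicted class. No ground-truth label is required,
    # which is a key appeal of metamorphic testing for AI components.
    source_output = classify(image)
    followup_output = classify(np.clip(image + delta, 0.0, 1.0))
    return source_output == followup_output

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    violations = sum(
        not brightness_relation_holds(rng.random((32, 32)))
        for _ in range(100)
    )
    print(f"metamorphic relation violated on {violations}/100 inputs")

Violations of such a relation point to potential robustness faults without requiring labeled test oracles.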

** Important dates (tentative) **

Submission Open Date: January 1, 2024
Manuscript Submission Deadline: May 30, 2024
Notification to authors (first round): June 30, 2024
Submission of revised papers (second round): July 31, 2024
Completion of the review and revision process (final notification): October 31, 2024


** Manuscript submission information **

The call for this special issue is an open call. All submitted papers will undergo a rigorous peer-review process and should adhere to the general principles of Journal of Systems and Software articles. Submissions must be prepared according to the Guide for Authors. Submitted papers must be original, must not have been previously published, and must not be under consideration for publication elsewhere. If a paper has already been presented at a conference, it must contain at least 30% new material before being submitted to this issue; authors must provide any previously published material relevant to their submission and describe the additions made. Although the call is open, some papers will be invited. The special issue does not publish survey articles, systematic reviews, or mapping studies.

All manuscripts and any supplementary material should be submitted through the Elsevier Editorial System. Follow the submission instructions given on this site. During the submission process, select the article type "VSI:AI-testing-and-analysis" from the "Choose Article Type" pull-down menu.

Submissions will enter the review process as soon as they arrive, without waiting for the submission deadline.
