STaR 2018: Workshop on Designing Resilient Intelligent Systems for Testability and Reliability
Call For Papers
Machine learning is pervasive, and systems are increasingly incorporating AI techniques for intelligent decision making. These intelligent systems are typically distributed, dynamic, and easily modifiable: many different elements across machine and network boundaries come together to form a coherent whole. Any of these elements may be swapped out for one better suited to the intended purpose, or tweaked on the basis of insights gained from analyzing the data gathered thus far. This constant adaptation based on what the systems learn is one of their distinct strengths.
However, this very strength also makes it challenging to reason about the functional correctness of such systems and to test them. Testability and reliability are difficult to achieve because intelligent systems are constantly adapting, whether through elements being swapped out or decision-making algorithms being modified, and so they never remain static long enough for their dynamic and emergent behavior to be captured in test cases. In certain contexts, these systems may produce outputs that are far fewer in number than the large volumes of data they must analyze to produce them. Under such circumstances, if there are errors in the data, any fault in the internal logic that aggregates and analyzes that data to produce outputs is likely to go undetected.
This workshop focuses on designing resilient intelligent systems for testability and reliability. We seek experience reports and articles not only on designing for these qualities but also on tools, methods, practices, and techniques for verifying that such systems deliver their intended functionality. Specific topics include, but are not limited to:
- Designing architectures of intelligent systems to be more testable and reliable
- Using architectures of resilient intelligent systems to produce test artifacts
- Architectural patterns or tactics for making intelligent systems more testable and reliable
- Analytics for making intelligent systems more robust through early fault prediction
We solicit submissions in the following categories:
- Position Papers (2–4 pages): position statements focused on challenges, emerging trends, and research problems
- Full-Length Papers (6–8 pages): original research, empirical studies, or systematic literature reviews
- Industry and Experience Papers (6–8 pages): industrial experience and case studies
Submissions must be original and not under review elsewhere at the time of submission; each will be reviewed by three members of the program committee. Papers should comply with the IEEE format, and authors must honor the double-blind review process. Accepted papers will be published as part of the ICSA 2018 Companion proceedings and will appear in the IEEE Xplore Digital Library.
Each submission will be reviewed on the following aspects: statement of the problem, significance of the research, related work and literature review, methodology, quality of data or findings, results and conclusions, and readability and writing style. Accepted submissions will be presented as short talks or posters at the workshop.
**Important Dates**
• Paper submission: 8 March 2018
• Notification: 29 March 2018
• Camera-ready due: 12 April 2018
**Organizers**
Raghu Sangwan, Penn State University, USA
Phil Laplante, Penn State University, USA
Mohamad Kassab, Penn State University, USA
(Organizers may be contacted at rsangwan(at)psu.edu)
**Program Committee**
Michael Golm, Siemens, USA
Richard Kuhn, National Institute of Standards & Technology, USA
Manuel Mazzara, Innopolis University, Russia
Javier Camara Moreno, Carnegie Mellon University, USA
Henry Muccini, University of L'Aquila, Italy
Colin Neill, Penn State University, USA
Rod Nord, SEI, Carnegie Mellon University, USA
Hans Ros, Siemens, USA
Doug Schmidt, Vanderbilt University, USA
Jeffrey Voas, National Institute of Standards & Technology, USA