posted by organizer: cscientist86

RAI 2022 : KI Workshop on Robust AI for High-Stakes Applications


When Sep 19, 2022 - Sep 20, 2022
Where Online
Submission Deadline Aug 1, 2022
Notification Due Aug 22, 2022
Final Version Due Aug 31, 2022
Categories    artificial intelligence   machine learning   robustness   applications

Call For Papers

Workshop on Robust AI for High-Stakes Applications (RAI), co-located with KI 2022


Robustness is widely understood as the property of a method, algorithm, or system to degrade only gradually in performance as the assumptions about its input are increasingly violated. This makes robustness a crucial property for dependable and trustworthy applications of AI in open-world environments, particularly in high-stakes applications in which human well-being is at risk. However, the usual definition of robustness raises several questions, including:

*What are the performance measures for evaluating the decrease in performance, i.e., which shortcomings are acceptable and which are not?
*How do we identify the degree to which assumptions about input characteristics are not met, in particular if assumptions are hard to specify?

Depending on the respective application area and technique considered, various approaches have been taken to measure or benchmark performance and the abnormality of input characteristics. Sometimes we face unknown requirements on input data, and only later experiments reveal that an approach is not robust (one-pixel attacks on CNN-based object classification being one infamous example).

There has been substantial progress in AI over the past few years, with many successful examples in perception and reasoning. This has encouraged the integration of the resulting technologies into important, high-stakes real-world applications such as autonomous mobile systems (e.g., self-driving cars, autonomous drones, service robots), automated surgical assistants, electrical grid management systems, and control of critical infrastructure, to name a few. However, for such an integration to constitute a beneficial socio-technical system, safety and reliability are key, and robustness is essential to avert potential catastrophic events due to unconsidered phenomena or situations. The aim of this workshop is to bring together researchers from basic and applied AI across all sub-fields of AI to discuss approaches and challenges for developing robust AI. In particular, we envisage a dialogue between the Machine Learning and Symbolic AI communities for the benefit of critical real-world applications. Our aim is to foster exchange between the various AI sub-fields present at KI and to discuss future research directions.


Robustness refers to the capability of coping with unforeseen phenomena or situations. Gearing AI towards robustness has always been an aim for open-world AI, and it becomes a pressing requirement as AI makes its way into the control of high-stakes applications. Robustness is addressed in many sub-fields of AI using various working definitions and measures. This workshop aims to bring together researchers from all sub-fields of AI working on robust methods.

In this workshop, we invite the research community in Artificial Intelligence to submit position statements and technical works related to the theme of Robust AI for High-Stakes Applications, in order to develop a joint understanding of robustness in AI and to foster exchange on robust AI. Topics of interest include:

*Explainable Artificial Intelligence
*Benchmarking and evaluation
*Regularization in Machine Learning
*Robust optimization
*Robust inference algorithms
*Causal model learning
*Neuro-symbolic integration; Logic as a referee
*Anomaly detection
*Open-world planning and decision-making
*AI in socio-technological systems

The list above is by no means exhaustive, as the aim is to foster the debate around all aspects of the suggested theme.


We invite submissions of regular research papers (up to 12 pages in GI-LNI format), position papers (up to 6 pages), or abstracts of recently published papers (3 pages) on the topic of robustness. Accepted papers will be published in the GI-LNI proceedings series after the KI 2022 conference. The workshop is also open to people who would like to attend without submitting a paper, as discussion of the topic will play a major role. During the workshop, perspectives on proposing a special issue of the KI journal on robust AI will be discussed. Workshop submissions and camera-ready versions will be handled via EasyChair.

Important Dates

August 1, 2022: Workshop Paper Due Date
August 22, 2022: Notification of Paper Acceptance
August 31, 2022: Camera-ready papers due

Note: all deadlines are Central European Time (CET), UTC +1, Paris, Brussels, Vienna.

Organizing Committee

Prof. Dr. Ulrich Furbach, University of Koblenz-Landau, Germany / wizAI solutions GmbH
Dr. Alexandra Kirsch, Independent Scientist
Dr. Michael Sioutis, University of Bamberg, Germany
Prof. Dr. Diedrich Wolter, University of Bamberg, Germany


All questions about submissions should be emailed to the workshop co-organizers.
