
VeriLearn 2023 : Verifying Learning AI Systems Workshop @ ECAI'23


Link: https://dtai.cs.kuleuven.be/events/VeriLearn2023
 
When Sep 30, 2023 - Oct 1, 2023
Where Krakow, Poland
Submission Deadline Jun 23, 2023
Categories    AI   safe & trustworthy ai   verification for ai
 

Call For Papers

We would like to announce the call for papers for the Verifying Learning AI Systems Workshop @ ECAI'23

**Workshop's Scope**

While there is no uniformly agreed-upon definition of what constitutes safe or trustworthy AI, it is clear that such systems should exhibit certain properties. For example, systems should be robust to minor perturbations of their inputs, and there should be some transparency about how a system arrives at a prediction or decision. More importantly, it is becoming increasingly common for deployed AI models to have to conform to requirements (e.g., legal) and/or exhibit specific properties (e.g., fairness). That is, it is necessary to verify that a model complies with these requirements. In the software engineering community, verification has long been studied with the goal of assuring that software fully satisfies the expected requirements. Therefore, a key open question in the quest for safe AI is how verification and machine learning can be combined to provide strong guarantees about software that learns and adapts itself on the basis of past experience. Finally, what are the boundaries of what can be verified, and how can and should system design be enhanced by other mechanisms (e.g., statistics on benchmarks, procedural safeguards, accountability) to produce the desired properties?

The goal of the Verifying Learning AI Systems (VeriLearn) workshop is to bring together researchers interested in these questions. The workshop will be held in conjunction with the 26th European Conference on Artificial Intelligence (ECAI 2023), which will take place in Kraków, Poland.

https://dtai.cs.kuleuven.be/events/VeriLearn2023

**Topics of Interest**

This workshop solicits papers on the following non-exhaustive list of topics:
• Representations and languages that facilitate reasoning and verification.
• Applications and extensions of software verification techniques in the context of machine learning.
• Verifying safety in dynamic systems or models.
• Reasoning about learned models to assess, e.g., their adherence to requirements.
• Learning models that are safe by design.
• Assessing the robustness of AI systems.
• Ways to evaluate aspects of AI systems that are relevant from a trust and safety perspective.
• Out of distribution detection and learning with abstention.
• Certification methodologies for AI systems.
• Concepts, approaches, and methods for identifying and dealing with the limits of verifiability.
• Approaches and case studies where verification is important for addressing ethical, privacy and societal concerns about AI.
• Case studies showing illustrative applications where verification is used to tackle issues related to safety and trustworthiness.


**Submission Instructions and Dates**

We solicit two types of papers:

- Long papers may be up to 6 pages of content, plus an unlimited number of references, in the ECAI 2023 formatting style. They should report on novel, unpublished work that may not yet be mature enough for a conference or journal submission.

- Extended abstracts may be up to 2 pages in the ECAI 2023 formatting style and should summarize recent publications fitting the workshop's scope.

Submissions must be anonymous. Papers should be submitted in PDF format at https://cmt3.research.microsoft.com/VeriLearn2023

Paper submission deadline: 20/06/2023, 23:59 CET

**Contact**
Jesse Davis (firstname(dot)lastname@kuleuven.be)

**Organizers**
• Jesse Davis, KU Leuven
• Bettina Berendt, TU Berlin, director of the Weizenbaum Institute for the Networked Society, KU Leuven
• Hendrik Blockeel, KU Leuven
• Luc De Raedt, KU Leuven
• Benoit Frenay, University of Namur
• Fredrik Heintz, Linköping University
• Jean-Francois Raskin, Université Libre de Bruxelles
