
VeriLearn 2023 : Verifying Learning AI Systems Workshop @ ECAI'23


Link: https://dtai.cs.kuleuven.be/events/VeriLearn2023
 
When: Sep 30, 2023 - Oct 1, 2023
Where: Krakow, Poland
Submission Deadline: Jun 23, 2023
Categories: AI, safe & trustworthy AI, verification for AI
 

Call For Papers

We would like to announce the call for papers for the Verifying Learning AI Systems Workshop @ ECAI'23.

**Workshop's Scope**

While there is no uniformly agreed-upon definition of what constitutes safe or trustworthy AI, it is clear that such systems should exhibit certain properties. For example, systems should be robust to minor perturbations of their inputs, and there should be some transparency about how a system arrives at a prediction or decision. More importantly, it is becoming increasingly common for deployed AI models to have to conform to requirements (e.g., legal ones) and/or exhibit specific properties (e.g., fairness). That is, it is necessary to verify that a model complies with these requirements. In the software engineering community, verification has long been studied with the goal of assuring that software fully satisfies the expected requirements. A key open question in the quest for safe AI is therefore how verification and machine learning can be combined to provide strong guarantees about software that learns and adapts itself on the basis of past experience. Finally, what are the boundaries of what can be verified, and how can and should system design be complemented by other mechanisms (e.g., statistics on benchmarks, procedural safeguards, accountability) to produce the desired properties?

The goal of the Verifying Learning AI Systems (VeriLearn) workshop is to bring together researchers interested in these questions. The workshop will be held in conjunction with the 26th European Conference on Artificial Intelligence, which will take place in Krakow, Poland.

https://dtai.cs.kuleuven.be/events/VeriLearn2023

**Topics of Interest**

This workshop solicits papers on the following non-exhaustive list of topics:
• Representations and languages that facilitate reasoning and verification.
• Applications and extensions of software verification techniques in the context of machine learning.
• Verifying safety in dynamic systems or models.
• Reasoning about learned models to assess, e.g., their adherence to requirements.
• Learning models that are safe by design.
• Assessing the robustness of AI systems.
• Ways to evaluate aspects of AI systems that are relevant from a trust and safety perspective.
• Out-of-distribution detection and learning with abstention.
• Certification methodologies for AI systems.
• Concepts, approaches, and methods for identifying and dealing with the limits of verifiability.
• Approaches and case studies where verification is important for addressing ethical, privacy and societal concerns about AI.
• Case studies showing illustrative applications where verification is used to tackle issues related to safety and trustworthiness.


**Submission Instructions and Dates**

We solicit two types of papers:

- Long papers may be up to 6 pages of content, plus an unlimited number of references, in the ECAI 2023 formatting style. They should report on novel, unpublished work that may not yet be mature enough for a conference or journal submission.

- Extended abstracts may be up to 2 pages in the ECAI 2023 formatting style and should summarize recent publications that fit the workshop's scope.

Submissions should be anonymous. Papers are to be submitted in PDF format at https://cmt3.research.microsoft.com/VeriLearn2023

Paper submission deadline: 20/06/2023, 23:59 CET

**Contact**
Jesse Davis (firstname(dot)lastname@kuleuven.be)

**Organizers**
• Jesse Davis, KU Leuven
• Bettina Berendt, TU Berlin, director of the Weizenbaum Institute for the Networked Society, and KU Leuven
• Hendrik Blockeel, KU Leuven
• Luc De Raedt, KU Leuven
• Benoit Frenay, University of Namur
• Fredrik Heintz, Linköping University
• Jean-Francois Raskin, Université Libre de Bruxelles
