DynXAI 2023 : Explainable Artificial Intelligence From Static to Dynamic at ECML PKDD 2023
Call For Papers
Note on the final call: the DEADLINE is EXTENDED until July 3rd (AoE), and we encourage submissions of EXTENDED ABSTRACTS as well as already PUBLISHED work fitting the scope of the workshop.
We are excited to organize a workshop on "Explainable Artificial Intelligence: From Static to Dynamic" (DynXAI) at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2023, Torino, Italy.
The key focus of our workshop is to:
(A) discuss and identify the different application areas of Explainable Artificial Intelligence (XAI) in dynamic learning environments, and
(B) discuss the main challenges and potential solutions, exchanging ideas across different fields of study to pave the way for future research directions.
WORKSHOP WEBSITE: https://sites.google.com/view/dynxai-ecmlpkdd-2023
We are accepting submissions now. Please go over the CALL FOR PAPERS below and submit your best work!
Best papers will be recommended for a Special Issue of Neural Computing and Applications (NCAA) on Incremental Learning.
Paper Submission Deadline: July 03, 2023
Notification of Acceptance: July 13, 2023
Camera-ready deadline: July 31, 2023
Workshop: September 22, 2023
WORKSHOP EMAIL (General queries): email@example.com
The field of Explainable Artificial Intelligence (XAI) aims to provide explanations for black-box machine learning models. While current XAI research primarily focuses on static learning scenarios, many real-world applications require models to change dynamically over time. This poses a challenge for faithful model explanations, as these models can change drastically over time, necessitating substantial changes to the resulting explanations. Hence, novel explanation methods are needed that, like the underlying models, may be updated continuously or change over time.
The goal of this workshop is to bring together scientists from communities where explanations and dynamic, time-dependent machine learning models come together. We want to celebrate recent advances, discuss main challenges and potential solutions, and pave the way for future research directions. Additionally, we aim to strengthen the connection between theory and practice by identifying real-world systems that can benefit from incorporating dynamic explanations.
Thus, we cordially invite scientists who feel addressed by the theme of the workshop to submit their latest works. The workshop solicits work on all aspects of XAI in dynamic machine learning scenarios, including but not limited to:
- online learning
- incremental learning from data streams
- learning under concept drifts
- continual lifelong learning
- progressive data science
- learning from big data
- federated learning
- efficient or real-time explanations
- time-dependent explanations
Authors are invited to submit original work or already published work in the form of papers or extended abstracts of up to 10 pages (excluding references and appendix), formatted according to the LNCS guidelines used by the main conference.
We also encourage submissions of already published work that fits the scope of the workshop. For already published work, please indicate the original publication in your submission.
Accepted papers will be presented in 15-minute talks, each followed by 3 minutes of discussion.
Authors who submit their work to DynXAI 2023 commit to presenting their paper at the workshop in case of acceptance.
All accepted papers (with an opt-out for already published work) will be published as post-proceedings in the Lecture Notes in Computer Science (LNCS) series. A condition for inclusion in the post-proceedings is that at least one of the co-authors presents the paper at the workshop. Pre-proceedings will be available online before the workshop.
Submissions should be uploaded as a single PDF file via CMT (https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023/Track/17/Submission/Create).
ORGANIZERS:
- Barbara Hammer (Bielefeld University)
- Eyke Hüllermeier (LMU, Germany)
- Fabian Fumagalli (Bielefeld University)
- Maximilian Muschalik (LMU, Germany)