posted by user: gciatto

JAAMAS-XAI 2023 : Special Issue on Multi-Agent Systems and Explainable AI @ JAAMAS


When N/A
Where N/A
Submission Deadline Sep 30, 2023
Categories    XAI   MAS   AI   JAAMAS

Call For Papers

# Special Issue: Multi-Agent Systems and Explainable AI

Due to the recent success of machine learning (ML) and deep learning (DL) and the resurgence of other AI techniques, in the next decade academia will focus on engineering intelligent systems, possibly involving data-driven, trainable components.

One of the critical challenges in intelligent system engineering is integrating diverse AI technologies while preserving conceptual integrity. Indeed, tomorrow's intelligent systems shall embody many innovative capabilities -- such as image, speech, and text recognition and generation -- as well as the judgement about whether, when, and how to exploit those capabilities to support human users or to cooperate smoothly with one another.

The most widely accepted solution to this challenge is using agents and Multi-Agent Systems (MAS), possibly embedding i) data-driven solutions to support their intelligent capabilities, as well as ii) complex automated reasoning/planning solutions to support their autonomous decision-making.

However, as multi-agent systems become more and more sophisticated, the capability of humans to understand their behavior -- hence trusting them and accepting their support -- becomes weaker. Therefore, the need for explainable intelligent systems -- systems that retain some degree of autonomy yet are capable of justifying their behavior to their users -- is more compelling than ever.

The AI community faces a growing tension between the success of sub-symbolic techniques (ML/DL) and the public's concern about the role of intelligent systems in society, which has led to the rise of eXplainable Artificial Intelligence (XAI). XAI aims to make AI systems (more) transparent and accountable by providing algorithmic and software tools to ease users' understanding of AI systems.

MAS certainly have a role to play in pushing AI towards higher degrees of transparency, accountability, and explainability. Indeed, as the place where automated decision-making occurs, agents may require further smart capabilities aimed at explaining their behavior. In this sense, MAS themselves could be a subject for the field of XAI---and this is even more true if they wrap ML/DL-based solutions, which are inherently poorly understandable, or complex planning or deliberation strategies. However, MAS may also act as a tool for the field of XAI, as their social ability can be straightforwardly exploited to support any possible explanation process.

Finally, agents are where symbolic and sub-symbolic AI should meet. Arguably, explaining AI systems to humans should involve symbolic information---as symbols are what humans understand. Cognitive agents in intelligent MAS already use symbolic AI models and technologies for rational processes, knowledge representation, expressive communication, and effective coordination. The combination of MAS and symbolic AI has the potential to engineer explainable intelligent systems and has significant implications for fields such as robotics, computer science, and economics.

Overall, the intersection of Multi-Agent Systems and Explainable AI is a timely and intriguing topic that raises essential questions about autonomous systems' transparency, accountability, and fairness and their impact on society. Accordingly, we invite submissions of original research papers on the following topics:

- Explainable conflict resolution techniques/frameworks
- Explainable negotiation protocols and strategies
- Trustworthy voting mechanisms
- Argumentation for XAI
- Explainable user/agent profiling
- Explainable robots
- Explainable planning (XAIP)
- Explainable knowledge representation in MAS
- XAI in Human-Computer Interaction (HCI)
- Interaction protocols for explanations
- Multi-Agent based simulations and XAI
- Inter-agent explanations
- Agent-oriented software-engineering tools for XAI
- XAI and norms in MAS
- Fair XAI
- XAI and Ethical Machines
- XAI and legal entanglements
- User studies and assessment of any of the aforementioned topics
- Multi-agent reinforcement learning and XAI
- Explainable reinforcement learning in MAS

We welcome submissions from academia and industry that address the challenges and opportunities of XAI, from knowledge extraction (interpretability) to knowledge manipulation and sharing at an agent-agent and agent-human level. Finally, we encourage interdisciplinary submissions integrating perspectives from the computer science, software engineering, data science, legal AI, and ethics communities.

For the sake of reproducibility, submissions involving software contributions of any sort should include a URL to a publicly available repository (e.g., on GitHub) containing the source code and, where applicable, the data and experiments.

## Timeline

Submissions deadline: September 30, 2023

We shall try to ensure at least 3 reviews per paper

Single-blind review process

## Guest editors

Giovanni Ciatto

Andrea Omicini

Davide Calvaresi
