
MEandE-LP 2023 : 3rd International Workshop on Machine Ethics and Explainability - The Role of Logic Programming


When: July 9-10, 2023
Where: Imperial College London, UK
Submission Deadline: May 25, 2023
Categories: artificial intelligence, machine ethics, explainability, logic programming

Call For Papers



MEandE-LP 2023

3rd Workshop on Machine Ethics and Explainability - The Role of Logic Programming

Affiliated with the 39th International Conference on Logic Programming (ICLP),

Imperial College London, UK

July 9 - 15, 2023



Machine ethics and explainability are two topics that have garnered significant attention and concern in recent years. This global concern has manifested itself in numerous initiatives at various levels. The two topics are intrinsically related: it is insufficient for an autonomous agent to behave ethically; it must also be able to explain its behavior, which requires both an ethical component and an explanation component. Moreover, explainable behavior is clearly unacceptable if it is not ethical, i.e., if it does not adhere to societal ethical norms.

In many application domains, particularly those involving human lives and requiring ethical decisions, users must understand a system's recommendations in order to justify to others the decisions they make based on them. One of the ultimate goals of explainable AI systems is an effective mapping between explainability and causality. Explainability refers to a system's ability to justify its actions in natural language to the average user; in other words, its capacity to articulate the reasons underlying its decisions.

However, when dealing with high-risk decision-making systems (ethical decisions), is it sufficient to merely explain the system's decisions to human users? Should we extend beyond the boundaries of the predictive model to observe cause and effect within the system?

A vast body of research on explainability attempts to clarify the output of black-box models using various approaches, some of which endeavor to generate logical rules as explanations. Nonetheless, it is worth noting that most methods for generating post-hoc explanations are themselves based on statistical tools, which are subject to uncertainty or error. Many post-hoc explainability techniques approximate deep-learning black-box models with simpler, interpretable models that can be inspected to explain the black-box models' behavior. However, these approximate models are not provably faithful to the original model, as there is always a trade-off between explainability and fidelity.

Conversely, a substantial number of researchers have employed inherently interpretable approaches to develop and implement their ethical autonomous agents. Many of these approaches are based on logic programming, ranging from deontic logics to non-monotonic logics and other formalisms.

Logic Programming (LP) holds significant potential in these two burgeoning research areas, as logic rules are easily understood by humans and support causal reasoning, which is vital for ethical decision-making.

Despite the considerable interest machine ethics has received over the past decade, primarily from ethicists and AI experts, the question "Are artificial moral agents possible?" remains unanswered. Several attempts have been made to implement ethical decision-making in intelligent autonomous agents using various approaches. However, no fully descriptive and universally accepted model of moral judgment and decision-making exists to date, and none of the developed solutions appears entirely convincing in providing trustworthy moral behavior. The same applies to explainability: although there is widespread concern about autonomous agents' explainability, current approaches do not seem satisfactory. Many questions remain open in these two fascinating, rapidly expanding fields.

This workshop aims to convene researchers working on all aspects of machine ethics and explainability, including theoretical work, system implementations, and applications. By co-locating this workshop with ICLP, we also intend to encourage collaboration among researchers from different LP areas. This workshop offers a forum for facilitating discussions on these topics and fostering a productive exchange of ideas.

Topics of interest include, but are not limited to:

- New LP-based approaches to programming machine ethics;

- New LP-based approaches to explainability of black-box models;

- Evaluation and comparison of existing LP-based approaches;

- Approaches to verification of ethical behavior;

- LP applications in machine ethics;

- Integrating LP with methods for machine ethics;

- Integrating LP with methods for explainability;

- Neuro-symbolic AI for ethics/explainability.


The workshop invites two types of submissions:

- original papers, describing unpublished research;

- non-original papers, already published in formal proceedings or journals.

Original papers:

- regular papers must not exceed 14 pages (including references)

- short papers must not exceed 7 pages (including references). Short papers are particularly suitable to present work in progress, extended abstracts, doctoral theses, or general overviews of research projects.

Authors are requested to clearly state whether or not their submission is original, with a footnote on the first page.

Authors are invited to submit their manuscripts in PDF via the EasyChair system.

Manuscripts must be formatted using the 1-column CEUR-ART style (an Overleaf template is available). For more information, please see the CEUR website.


Paper submission deadline: May 25, 2023

Author Notification: June 10, 2023

Camera-ready articles due: June 15, 2023

Workshop: July 9-10, 2023


Authors of all accepted original contributions can opt to publish their work in formal proceedings.

Accepted non-original contributions will be given visibility on the workshop website, including a link to the original publication if already published.

Accepted original papers will be published as CEUR Workshop Proceedings.


Workshop Organizers:


Abeer Dyoub (DISIM, University of L'Aquila, Italy)

Fabio Aurelio D'Asaro (Department of Human Sciences, University of Verona, Italy)

Francesca A. Lisi (DiB, University of Bari "Aldo Moro", Italy)


