
CIM-XAI 2021 : IEEE Computational Intelligence Magazine special issue on Explainable and Trustworthy Artificial Intelligence


When N/A
Where N/A
Submission Deadline Feb 15, 2021
Notification Due Apr 15, 2021
Final Version Due Jul 1, 2021
Categories: artificial intelligence, NLP

Call For Papers

* Apologies for cross-postings *

Dear colleagues,

We are organizing a special issue on "Explainable and Trustworthy Artificial Intelligence", to be published in the IEEE Computational Intelligence Magazine (CIM) in the second half of 2021.

+ Deadline is approaching: 15 Feb 2021!
+ Website:
+ Submission website:

CIM publishes peer-reviewed articles that present emerging novel discoveries, important insights, or tutorial surveys in all areas of computational intelligence design and applications, in keeping with the Field of Interest of the IEEE Computational Intelligence Society (IEEE/CIS).

- Impact Factor: 9.083
- SJR: Q1 (Artificial Intelligence)
- Website:

You are kindly invited to submit a paper for this special issue. In addition, we would highly appreciate your help in disseminating this Call for Papers among your colleagues.

## Aims and Scope

Computational Intelligence (CI) encompasses the theory, design, application, and development of biologically and linguistically motivated computational paradigms, emphasizing fuzzy systems, neural networks, connectionist systems, genetic algorithms, evolutionary programming, and the hybrid intelligent systems in which these paradigms are combined. These techniques and their hybridizations work cooperatively, taking advantage of the main strengths of each individual technique, in order to solve many complex real-world problems for which other techniques are not well suited. CI enables Artificial Intelligence (AI) by simulating natural intelligence in all its forms.

In the era of the Internet of Things and Big Data, data scientists are required to extract valuable knowledge from the available data. They first analyze, curate, and pre-process the data; then, they apply AI techniques to automatically extract knowledge from it. AI is identified as a strategic technology and is already part of our everyday life. The European Commission states that the “EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union's values and fundamental rights as well as ethical principles such as accountability and transparency”. It emphasizes the importance of Explainable AI (XAI, in short) in order to develop an AI coherent with European values: “to further strengthen trust, people also need to understand how the technology works, hence the importance of research into the explainability of AI systems”. Moreover, as remarked in the XAI challenge stated by the US Defense Advanced Research Projects Agency (DARPA), “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”. Accordingly, users require a new generation of XAI systems that interact naturally with humans and provide comprehensible explanations of automatically made decisions.

XAI is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of both generating decisions that a human can understand in context and explicitly explaining those decisions. This makes it possible to scrutinize intelligent models and verify whether automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified.

This Special Issue is supported by the IEEE CIS Task Force on Explainable Fuzzy Systems (TF-EXFS). The mission of the TF-EXFS is to lead the development of a new generation of Explainable Fuzzy Systems, with a holistic view of the fundamentals and current research trends in the XAI field, paying special attention to fuzzy-grounded knowledge representation and reasoning, as well as to enhancing human-machine interaction through effective multi-modal (e.g., graphical or textual) explanations.

The scope of this special issue is not limited to the Fuzzy Logic research community; it is open to contributions from researchers, in both academia and industry, working in the multidisciplinary field of XAI.

## Topics

This special issue targets general-readership articles on the design and application of XAI technologies.
Topics of interest include, but are not limited to:
- Theoretical Aspects of Explainability, Fairness, Accountability and Transparency
- Relations between Explainability and other Quality Criteria (such as Interpretability, Accuracy, Stability, Relevance, etc.)
- Dimensions of Interpretability: Readability versus Understandability
- Explainability Evaluation and Improvements
- Learning Methods and Design Issues for Explainable Systems and Models
- Interpretable Machine Learning
- Explaining Black-box Models
- Hybrid Approaches (e.g., Neuro-Fuzzy systems) for XAI
- Model-specific and Model-agnostic Approaches for XAI
- Models for Explainable Recommendations
- Explainable Conversational Agents
- Self-explanatory Decision-Support Systems
- Factual and Counterfactual Explanations
- Causal Thinking, Reasoning and Modeling
- Cognitive Science and XAI
- Argumentation Theory for XAI
- Natural Language Technology for XAI
- Human-Machine Interaction for XAI
- Ethics and Legal Issues for XAI
- XAI-based Data Analysis and Bias Mitigation
- Safe and Trustworthy AI
- Applications of XAI-based Systems
- Open Source Software for XAI

## Submission

The IEEE Computational Intelligence Magazine (CIM) publishes peer-reviewed, high-quality articles. All manuscripts must be submitted electronically in PDF format. Manuscripts must be in the standard IEEE two-column/single-space format and adhere to a length of 10-12 pages (including figures and references) for regular papers. A mandatory page charge is imposed on all papers exceeding 12 pages in length.

More information on manuscript details and submission guidelines can be found at the following websites:
- Special Issue website:
- IEEE CIM website:
- IEEE CIM submission:

## Important Dates

- Manuscript Due: **15th February, 2021**
- First Notification: 15th April, 2021
- Revision Due: 15th May, 2021
- Final Notification: 1st July, 2021
- Publication Date: November 2021

## Guest Editors

- José María Alonso, Research Centre in Intelligent Technologies (CiTIUS), University of Santiago de Compostela, Spain
- Corrado Mencar, Department of Informatics, University of Bari Aldo Moro, Bari, Italy
- Hisao Ishibuchi, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
