
IEEE CS Computer: Special Issue on Explainable AI and Machine Learning


Link: https://www.computer.org/digital-library/magazines/co/call-for-papers-special-issue-on-explainable-ai-and-machine-learning/?source=wiki
 
When N/A
Where N/A
Submission Deadline Oct 30, 2020
Notification Due Apr 17, 2021
Final Version Due Jul 1, 2021
Categories: machine learning, AI, IEEE, computing
 

Call For Papers

We are observing a rapid increase in artificial intelligence-based and machine learning-driven algorithms and their applications all around us. In addition to everyday applications such as speech and image recognition, these algorithms are increasingly used in safety-critical software, such as autonomous driving and robotics. On many tasks, artificial intelligence and machine learning (AI/ML) algorithms now match or surpass human performance. However, AI/ML-based applications are highly opaque—that is, it is difficult to decipher the reasoning behind a particular classification or decision they produce.

Although their accuracy is usually high, AI/ML applications are not foolproof. Deadly accidents involving autonomous vehicles are one example of the risks of relying completely on these programs. For these applications to be accepted in our lives, there must ultimately be responsibility and accountability for the outcomes they produce. Knowing how these applications reach their determinations, and being able to justify an AI system's action or decision, is essential—particularly for addressing the following questions in appropriate scenarios:

How do we know the system is working correctly?
What combinations of factors support the decision?
Why was another action not taken?
This information constitutes explainability, which should be an integral part of verification and validation for AI/ML software. For this special issue, Computer seeks articles that describe different approaches and efforts toward AI/ML explainability.
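
One common route to the second question above ("What combinations of factors support the decision?") is post-hoc feature attribution. As a minimal sketch, assuming scikit-learn is available and using an illustrative dataset and model (these choices are assumptions for demonstration only, not a method prescribed by this call), permutation importance measures how much a model's held-out accuracy drops when each feature is shuffled:

# A minimal sketch: permutation importance as one post-hoc explainability technique.
# Assumptions: scikit-learn installed; the breast-cancer dataset and random-forest
# model are illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model on a held-out split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and record the drop in test accuracy; the features whose
# shuffling hurts accuracy most are the ones the model's decisions lean on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")

Such attributions indicate which factors support a decision, but they do not by themselves establish causality or an appropriate level of trust, which the topics below also address.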

Topics of Interest:

Examples of failures due to lack of explainability
Performance of learning algorithms
Appropriate levels of trust in learning algorithms
Approaches to AI/ML explainability
Causality and inference in AI/ML applications
Human factors in explainability
Psychological acceptability of AI/ML systems

Key Dates
Articles due for review: October 30, 2020
First notification to authors: February 26, 2021
Second revisions submission deadline: March 15, 2021
Second notification to authors: April 17, 2021
Camera-ready paper deadline: July 01, 2021
Publication: October 2021

Submission Guidelines
For manuscript submission guidelines, visit www.computer.org/publications/author-resources/peer-review/magazines. When you are ready to submit, visit https://mc.manuscriptcentral.com/com-cs.

Questions?
Please contact the guest editors at co10-21@computer.org.

Guest editors:

M S Raunak, Loyola University Maryland/NIST
Rick Kuhn, NIST

Related Resources

IEEE AIxVR 2024   IEEE International Conference on Artificial Intelligence & extended and Virtual Reality
ECAI 2024   27th European Conference on Artificial Intelligence
IEEE-JBHI (SI) 2024   Special Issue on Revolutionizing Healthcare Informatics with Generative AI: Innovations and Implications
CCVPR 2024   2024 International Joint Conference on Computer Vision and Pattern Recognition (CCVPR 2024)
AIxMM 2025   IEEE International Conference on AI x Multimedia
AIM@EPIA 2024   Artificial Intelligence in Medicine
IEEE BigData 2024   2024 IEEE International Conference on Big Data
SPIE-Ei/Scopus-CVCM 2024   2024 5th International Conference on Computer Vision, Communications and Multimedia (CVCM 2024) -EI Compendex
EAIH 2024   Explainable AI for Health
IEEE ICCR 2024   IEEE--2024 6th International Conference on Control and Robotics (ICCR 2024)