XAI 2022 : JISYS (OA) - Explainable Artificial Intelligence and Intelligent Systems in Analysis For Complex Problems and Systems
Call For Papers
* Dr. Mazin Abed Mohammed [Lead Guest Editor], University of Anbar, Iraq
* Prof. Dr. Seifedine Kadry, Noroff University College, Norway
* Assoc. Prof. Dr. Oana Geman, Stefan cel Mare University of Suceava, Romania
Over the past decade, machine learning systems have achieved (super)human performance across a wide range of tasks, driven by the availability of massive datasets and growing processing power. Image recognition, audio analysis, complex decision scheduling, and strategic game playing are just a few examples. At the same time, the systems described in the existing literature suffer from a lack of transparency and interpretability, raising concerns about security and robustness to dynamic changes. In complex domains such as healthcare and finance, existing schemes face serious trust, availability, and decision-making problems in dynamic environments.
These concerns have made explainable artificial intelligence (XAI) a major topic in the research community. Work within the supervised learning paradigm has improved the interpretability and explainability of machine learning systems. Because of their extraordinary performance advantages, recently developed deep-learning techniques have been adapted to dynamically changing settings and applied to complex problems in other domains.
Deploying AI in its current state in real-world settings raises serious quality concerns. We have yet to see high-impact applications in the judicial, governmental, financial, and autonomous-transportation domains on multiple fronts. However, algorithmic content recommendation, which shapes opinions and tastes, has already affected human lives for some time. As AI applications become more widely used, trust issues are likely to become increasingly prominent.
Meeting specific criteria increases trust, yet incomplete formalization remains a significant problem and a roadblock to simple optimization methods: some concepts are complex, multidimensional, ambiguous, and difficult to formalize. Interpretability and explainability provide abstracted explanations for locating, verifying, and reasoning about essential features, and such methods can address many of the complex trust issues arising in different domains.
The main objective of this special issue is to bring together diverse, novel, and impactful research on Explainable AI and Intelligent Systems in Analysis for Complex Problems and Systems - Recent Advances and Future Trends, thereby accelerating research in this field. Topics of interest include, but are not limited to:
- XAI and Intelligent Systems modeling for complex systems
- XAI and Intelligent Systems for evolution and prediction of complex problems
- XAI and Intelligent Systems for analysis and integration of complex systems
- XAI and Intelligent Systems for learning-based optimization, regulation, and control of NP problems
- Deep reinforcement learning for complex problems
- Fuzzy systems and decision-making for complex problems
- XAI for the Internet of Medical Things
- XAI methodologies for detecting emerging medical threats from healthcare data
- XAI and Intelligent Systems for medical data fusion
- Health intervention design, modeling, and evaluation based on XAI and Intelligent Systems
- Real-time explainable AI for medical image and data processing
- Feature selection for interpretable XAI classification
- XAI and Intelligent Systems for image detection, recognition, and segmentation
- XAI and Intelligent Systems for cancer diagnosis
- Future directions of intelligent XAI medical imaging in healthcare
- XAI and Intelligent Systems for blockchain applications
- XAI and Intelligent Systems for urban computing and intelligence
- XAI and Intelligent Systems for human-robot interaction
- XAI and Intelligent Systems for real-world applications
- XAI and Intelligent Systems for computer networks and technology
- XAI and Intelligent Systems for smart cities
- XAI and Intelligent Systems for security applications
- XAI and Intelligent Systems for multi-objective optimization
- XAI and Intelligent Systems for IoT applications
- XAI and Intelligent Systems for e-learning systems
- XAI and Intelligent Systems for engineering complex problems and applications
- XAI and Intelligent Systems for cloud and fog computing
HOW TO SUBMIT:
Submitted articles must be original, unpublished, and not currently under review by other journals.
In the cover letter, authors must state the Special Issue theme and the names of its Guest Editors, so that the Guest Editors can be notified separately.
Please submit via https://mc.manuscriptcentral.com/jisys. When submitting your paper, please select the article type "S.I.: XAI and Intelligent Systems in Analysis For Complex Problems and Systems - Recent Advances and Future Trends".
We are looking forward to your submission!
In case of any further questions please contact:
Editorial Office - JISYS_Editorial@degruyter.com