posted by organizer: zhangyudong

XDLMIPA 2022 : Explainable Deep Learning for Medical Image Processing and Analysis


When N/A
Where N/A
Abstract Registration Due Nov 1, 2021
Submission Deadline Mar 1, 2022
Notification Due Apr 1, 2022
Final Version Due May 1, 2022
Categories: deep learning, image processing

Call For Papers

I. Aim and Scope

Deep learning (DL) is an emerging computational intelligence method that plays a major role in data science and its applications. DL provides computational models of multiple processing layers that learn and represent data with multiple levels of abstraction. It can implicitly capture intricate structures in large-scale data and is ideally suited to some of the hardware architectures currently available. DL has recently achieved outstanding performance in both academia and industry and has become a vital tool in a wide range of medical image processing and analysis tasks, including cancer detection, tumor classification, and vessel segmentation.
While DL models achieve impressively high predictive accuracy, they are widely regarded as black boxes because of their deep, complicated layer structures. At the same time, DL models have recently been reported to be vulnerable to spoofing by carefully hand-crafted input samples. This is particularly concerning in medical image processing, where a single incorrect prediction can be detrimental, and where trust in the trained DL model, and in its capacity to deliver efficient and robust data processing, must be guaranteed. Therefore, understanding how DL models work by means of validation and visualization procedures, and thus creating explainable DL models, has become a fundamental problem.
It is still unclear what information must be delivered to DL models, which validation procedures best assess their generalization ability, and how DL models must work to guarantee rapid, safe, and robust predictions. Hence, experts and users are asking for the latest research advances in explainable deep learning (XDL). This critical research topic brings new challenges and opportunities to the AI community. This special issue (SI) aims to provide a diverse yet complementary set of contributions demonstrating new developments and applications of explainable deep learning to problems in medical image processing. The ultimate goal is to promote research and development of explainable deep learning for multimodal biomedical images by publishing high-quality research articles and reviews in this rapidly growing interdisciplinary field.
Medical data can be acquired from multiple imaging modalities, such as Computed Tomography (CT), Magnetic Resonance (MR) Imaging, Ultrasound, Single Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), Magnetic Particle Imaging (MPI), EEG/MEG, Optical Microscopy and Tomography, Photoacoustic Tomography, Electron Tomography, and Atomic Force Microscopy.
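To make the visualization procedures mentioned above concrete, the sketch below computes a vanilla gradient saliency map, one of the simplest explanation techniques, for a toy classifier. The model, its random weights, and the 8x8 "image" are illustrative placeholders, not part of this call; a real study would differentiate a trained deep network with an autodiff framework rather than use this closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: logistic regression on a
# flattened 8x8 "image". The weights are random placeholders.
H, W_px = 8, 8
w = rng.normal(size=H * W_px)  # per-pixel weights
b = 0.1

def predict(x_flat):
    """Sigmoid score for the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x_flat + b)))

def saliency(img):
    """Gradient of the class score w.r.t. each input pixel.

    For this linear-plus-sigmoid toy model the gradient has a closed
    form: d(score)/dx = score * (1 - score) * w. Its absolute value,
    reshaped to the image grid, is a vanilla saliency map: large
    entries mark pixels the score is most sensitive to.
    """
    x = img.ravel()
    s = predict(x)
    grad = s * (1.0 - s) * w
    return np.abs(grad).reshape(img.shape)

img = rng.normal(size=(H, W_px))
smap = saliency(img)
print(smap.shape)         # (8, 8): one sensitivity value per pixel
print(bool(np.all(smap >= 0)))  # True: magnitudes only
```

For a deep network the same idea applies, but the gradient is obtained by backpropagation instead of a closed form.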

II. Topics

Topics include (but are not limited to) the following:

• Quantification/visualization/interpretation of learned representations from deep neural network models
• Novel theoretical understanding of deep learning in biomedical engineering
• Explainable transfer learning and multi-task learning
• Analyzing bottlenecks in the efficient learning of deep neural networks
• Explainable joint semantic segmentation, object detection, and scene recognition on biomedical images
• Improving the computation of deep networks by exploiting parallel computation techniques and GPU programming
• Inferring and regularizing network structure for robust prediction
• Translational multimodality imaging and biomedical applications (e.g., detection, diagnostic analysis, quantitative measurements, image guidance)
• Optimization by deep neural networks, multi-dimensional deep learning
• Novel convolutional neural network structures and models
• Adversarial attacks and defenses in medical image processing applications
• Explainable fusion of deep learning with fuzzy systems and/or evolutionary computations
• Generative Deep Learning for data augmentation
• Predictive modelling of progression based on LSTMs and RNNs
• How to make deep learning algorithms more explainable

III. Submissions

Manuscripts should be prepared according to the “Information for Authors” section of the journal, and submissions should be made through the journal submission website by selecting the Manuscript Type “explainable deep learning for medical image processing and analysis (XDLMIPA)” and clearly marking “explainable deep learning for medical image processing and analysis (XDLMIPA)” in the comments to the Editor-in-Chief. Submitted papers will be reviewed by domain experts. Submission of a manuscript implies that it is the authors’ original, unpublished work and is not being submitted for possible publication elsewhere.

IV. Guest Editors

 Prof. Dr. Yu-Dong Zhang, University of Leicester, UK
 Prof. Dr. Juan Manuel Gorriz, Cambridge University, UK / University of Granada, Spain
 Prof. Yi Pan, Georgia State University, USA
 Prof. Oscar Cordon, E.T.S. Ingenierías Informática y Telecomunicación / University of Granada, Spain
