XDLMIPA 2022: Explainable Deep Learning for Medical Image Processing and Analysis
Link: https://cis.ieee.org/images/files/Publications/TETCI/SI25_CFP_XDLMIPA.pdf

Call For Papers

I. Aim and Scope
Deep learning (DL) is an emerging computational intelligence method that plays a major role in data science and its applications. DL provides computational models with multiple processing layers that learn to represent data at multiple levels of abstraction. It can implicitly capture intricate structure in large-scale data and is well suited to much of the hardware currently available. DL has recently achieved outstanding performance in academic and industrial settings and has become a vital tool in a wide range of medical image processing and analysis tasks, including cancer detection, tumor classification, and vessel segmentation.

While DL models achieve impressively high predictive accuracy, they are widely regarded as black boxes with deep, complicated layers. At the same time, DL models have recently been reported to be vulnerable to spoofing by carefully hand-crafted input samples. This is especially critical in the medical image processing field, where a single incorrect prediction can be detrimental, and where trust in the trained DL model, and in its capacity to deliver efficient and robust data processing, must be guaranteed. Understanding how DL models work, by means of validation and visualization procedures, and thereby creating explainable DL models has therefore become a fundamental problem. It is still unclear what information must be supplied to DL models, which validation procedure best assesses their generalization ability, and how DL models arrive at fast, safe, and robust predictions. Experts and users therefore call for the latest research advances in explainable deep learning (XDL). This critical research topic will bring new challenges and opportunities to the AI community.

This special issue aims to provide a diverse but complementary set of contributions demonstrating new developments and applications of explainable deep learning to problems in medical image processing. The ultimate goal is to promote research and development of explainable deep learning for multimodal biomedical images by publishing high-quality research articles and reviews in this rapidly growing interdisciplinary field. The medical data can be acquired from multiple imaging modalities, such as Computed Tomography (CT), Magnetic Resonance (MR) Imaging, Ultrasound, Single Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), Magnetic Particle Imaging (MPI), EEG/MEG, Optical Microscopy and Tomography, Photoacoustic Tomography, Electron Tomography, and Atomic Force Microscopy.
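As a minimal illustration of the kind of visualization procedure mentioned above, the sketch below computes a gradient-based saliency map for a toy image classifier in PyTorch. It is only a sketch under assumed names: TinyCNN, saliency_map, and the input shape are illustrative placeholders, not a method prescribed by this call.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        """Hypothetical stand-in for a medical-image classifier."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.head = nn.Linear(8 * 4 * 4, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def saliency_map(model, image, target_class):
        """Gradient of the target-class logit w.r.t. input pixels:
        large values mark pixels the prediction is most sensitive to."""
        model.eval()
        image = image.clone().requires_grad_(True)
        logits = model(image.unsqueeze(0))      # add batch dimension
        logits[0, target_class].backward()      # d(logit) / d(pixels)
        return image.grad.abs().squeeze(0)      # (H, W) importance map

    model = TinyCNN()
    scan = torch.rand(1, 64, 64)                # stand-in for one CT/MR slice
    sal = saliency_map(model, scan, target_class=1)
    print(sal.shape)                            # torch.Size([64, 64])

Submissions would of course apply richer attribution methods (e.g., Grad-CAM or integrated gradients) to trained clinical models, but the mechanics are the same: the explanation is derived from the model itself rather than asserted alongside it.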
II. Topics

The scope includes (but is not limited to) the following:
• Qualification/visualization/interpretation of learned representations from deep neural network models
• Novel theoretical understanding of deep learning in biomedical engineering
• Explainable transfer learning and multi-task learning
• Analyzing bottlenecks in the efficient learning of deep neural networks
• Explainable joint semantic segmentation, object detection, and scene recognition on biomedical images
• Improving the computation of deep networks; exploiting parallel computation techniques and GPU programming
• Inferring and regularizing network structure for robust prediction
• Translational multimodality imaging and biomedical applications (e.g., detection, diagnostic analysis, quantitative measurements, image guidance)
• Optimization by deep neural networks, multi-dimensional deep learning
• New models and new structures of convolutional neural networks
• Adversarial attacks and defenses in medical-image-processing applications
• Explainable fusion of deep learning with fuzzy systems and/or evolutionary computation
• Generative deep learning for data augmentation
• Predictive modelling of progression based on LSTMs and RNNs
• How to make deep learning algorithms more explainable

III. Submissions

Manuscripts should be prepared according to the "Information for Authors" section of the journal, and submissions should be made through the journal submission website: https://mc.manuscriptcentral.com/tetci-ieee, by selecting the Manuscript Type of "Explainable Deep Learning for Medical Image Processing and Analysis (XDLMIPA)" and clearly marking "Explainable Deep Learning for Medical Image Processing and Analysis (XDLMIPA)" in the comments to the Editor-in-Chief. Submitted papers will be reviewed by domain experts. Submission of a manuscript implies that it is the authors' original, unpublished work and is not being submitted for possible publication elsewhere.

IV. Guest Editors

Prof. Dr. Yu-Dong Zhang, University of Leicester, UK, yudongzhang@ieee.org, yudong.zhang@le.ac.uk
Prof. Dr. Juan Manuel Gorriz, Cambridge University, UK / University of Granada, Spain, jg825@cam.ac.uk
Prof. Yi Pan, Georgia State University, USA, yipan@gsu.edu
Prof. Oscar Cordon, E.T.S. Ingenierías Informática y Telecomunicación / University of Granada, Spain, ocordon@decsai.ugr.es