In the modern era, healthcare systems predominantly operate with digital medical data, facilitating a wide array of artificial intelligence applications. There is growing interest in the quantitative analysis of clinical images acquired through modalities such as Positron Emission Tomography, Computed Tomography, and Magnetic Resonance Imaging, particularly in the realms of texture analysis and radiomics. Through advances in machine and deep learning, researchers can glean insights that enhance the discovery of therapeutic tools, bolster diagnostic decisions, and aid the rehabilitation process. However, the sheer volume of available data can increase the diagnostic workload, a burden exacerbated by high inter- and intra-patient variability, diverse imaging techniques, and the need to incorporate data from multiple sensors and sources, thus giving rise to the well-documented domain shift issue.
To tackle these challenges, radiologists and pathologists employ Computer-Aided Diagnosis (CAD) systems, which assist in analysing biomedical images. By relying on well-designed algorithms, these systems mitigate or eliminate difficulties arising from inter- and intra-observer variability, ensuring that the same region is assessed consistently by the same physician at different times and by different physicians.
Additionally, delayed or restricted access to data, driven by privacy, security, and intellectual property concerns, poses considerable hurdles. Consequently, researchers are increasingly exploring the use of synthetic data, both for model training and for simulating scenarios not observed in real life.
Furthermore, the emergence of foundation models, such as Vision Transformers and large multimodal models, represents a paradigm shift in medical image analysis. Pre-trained on vast datasets, these models demonstrate remarkable adaptability across tasks including segmentation, classification, and multi-modal integration. Their ability to generalise effectively offers promising avenues for addressing domain shift and integrating heterogeneous data sources, thereby enhancing diagnostic and predictive accuracy.
This workshop aims to provide a comprehensive overview of recent advances in biomedical image processing that leverage machine learning, deep learning, artificial intelligence, and radiomics features. Emphasis is placed on practical applications, including potential solutions to the domain shift problem, the utilisation of synthetic images to augment CAD systems, and the integration of foundation models into clinical workflows. Ultimately, the goal is to explore how these techniques can be seamlessly integrated into the conventional medical image processing workflow, encompassing image acquisition, retrieval, disease detection, prediction, and classification.