Uncertainty Driven Bottleneck Attention U-Net for Organ at Risk Segmentation
We are excited to share our latest research on medical image segmentation with the publication of our paper, “Uncertainty Driven Bottleneck Attention U-Net for Organ at Risk Segmentation”. The study introduces a new approach to segmenting organs at risk (OARs) in computed tomography (CT) images more accurately, a task critical to effective radiation treatment planning.
In this work, we propose a U-Net architecture with multiple decoders, using the segmentation disagreements between those decoders to derive attention vectors that refine the segmentation. Unlike traditional attention mechanisms that rely on feature correlation, our model lets network uncertainty drive the attention mechanism, which significantly improves segmentation accuracy. To address the low tissue contrast of CT images, we also developed a CT intensity integrated regularization loss. This loss helps the model delineate low-contrast tissues, which is crucial for precise boundary detection.
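To make the attention mechanism concrete, here is a minimal PyTorch sketch of disagreement-driven bottleneck attention under our own simplifying assumptions (two decoder heads, global pooling of the disagreement map). The class and parameter names are illustrative, not the paper's implementation:

```python
# Illustrative sketch only -- hypothetical names, not the paper's code.
import torch
import torch.nn as nn

class UncertaintyBottleneckAttention(nn.Module):
    """Gate bottleneck channels using decoder disagreement.

    Two decoders produce softmax maps p1 and p2; their voxel-wise
    disagreement is pooled into a per-class uncertainty score, which a
    small projection turns into a channel-wise gate on the bottleneck.
    """
    def __init__(self, bottleneck_channels: int, num_classes: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(num_classes, bottleneck_channels),
            nn.Sigmoid(),  # keep the gate in (0, 1)
        )

    def forward(self, bottleneck, p1, p2):
        disagreement = (p1 - p2).abs()               # (B, K, H, W)
        u = disagreement.mean(dim=(2, 3))            # (B, K) per-class score
        gate = self.proj(u)                          # (B, C) channel gate
        return bottleneck * gate[:, :, None, None]   # re-weighted features
```

In a full model, the gated bottleneck would be decoded again, so regions where the decoders disagree receive extra emphasis on the refined pass.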
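The CT intensity regularization can be sketched in a similar spirit. The version below penalizes within-class intensity variance, so that voxels softly assigned to the same organ share similar CT intensities; this is one plausible reading of intensity integration, not the paper's exact formulation, and the function name is hypothetical:

```python
def ct_intensity_regularization(pred: torch.Tensor,
                                ct: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: penalize within-class CT-intensity variance.

    pred: (B, K, H, W) softmax predictions; ct: (B, 1, H, W) intensities.
    """
    ct = ct.squeeze(1)                               # (B, H, W)
    loss = ct.new_zeros(())
    num_classes = pred.shape[1]
    for k in range(num_classes):
        w = pred[:, k]                               # soft mask for class k
        mass = w.sum() + 1e-6
        mean_k = (w * ct).sum() / mass               # mean intensity of class k
        loss = loss + (w * (ct - mean_k) ** 2).sum() / mass
    return loss / num_classes                        # average over classes
```

Combined with a cross-entropy term (as in the CE+CTRM configuration reported below), a regularizer of this kind pushes the network toward intensity-consistent boundaries in low-contrast regions.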
Our method was rigorously tested on two publicly available OAR challenge datasets, SegTHOR and LCTSC, with extensive ablation studies and comparisons against state-of-the-art CNN-based segmentation methods. The results show that our approach substantially improves segmentation performance, with higher accuracy and consistency across organs and datasets. On SegTHOR, our CE+CTRM(UDBA) configuration led on multiple metrics, performing best on organs such as the Heart, Esophagus, and Aorta. On LCTSC, the model performed strongly overall, particularly in Intersection over Union (IoU) scores for the Esophagus, Spine, and Heart.
Automated segmentation of OARs is a challenging yet critical task in medical imaging. Accurate delineation of these organs is essential to minimize radiation exposure to healthy tissues during treatment. Our proposed methods not only enhance segmentation accuracy but also address the variability and low-contrast issues inherent in CT images, paving the way for more reliable and effective radiation treatment planning.
Building on these promising results, future work will explore the integration of our model with clinical workflows, aiming to streamline the process of OAR segmentation and further reduce inter-rater variability. Additionally, we plan to extend our approach to other medical imaging modalities and segmentation tasks, continuing to push the boundaries of what is possible with deep learning in medical imaging.
We invite you to read our full paper for a detailed explanation of our methods, experiments, and findings. Your feedback and collaboration are highly valued as we continue to innovate in this crucial field of medical imaging. Thank you for your continued support and interest in our work.
Authors: Abdullah Nazib, Riad Hassan, Zahidul Islam, Clinton Fookes
Affiliations: Queensland University of Technology; Green University of Bangladesh