Program Information

Robust Automatic Co-Segmentation of Multiple Medical Images

T Liu1*, D Floros2, N Pitsianis1,2, X Sun1, F Yin3, L Ren3, (1) Duke University, Durham, NC, (2) Aristotle University of Thessaloniki, Thessaloniki, Greece, (3) Duke University Medical Center, Durham, NC


SU-K-201-14 (Sunday, July 30, 2017) 4:00 PM - 6:00 PM Room: 201

Purpose: Automatic segmentation is valuable for computer-aided diagnosis and for contouring during radiotherapy planning. When dealing with multiple image sets, such as the different respiratory phases in 4D-CT data, simultaneous co-segmentation of multiple medical images can improve the efficiency and accuracy of segmentation and structure tracking. This work aims to develop a robust automatic co-segmentation method by adopting and adapting co-segmentation techniques, developed in the computer vision community for (RGB) video editing, tracking, and understanding, to medical image processing.

Methods: The presented method takes into account typical properties of medical images: gray scale, fine tissue structures, low contrast, multiple noise sources, and higher spatial dimensions (as in 4D-CT images), which challenge co-segmentation methods in robustness and efficiency. We adopt the basic co-segmentation components: sparse joint graph formation and graph spectral clustering. We make critical changes for robustness in voxel feature selection and voxel-voxel similarity. The voxel/node feature consists of a weighted intensity stencil and the gradient within a local cubelet centered at the voxel, for detecting and preserving edges and textures. We use a parametrized similarity metric scheme for data-specific adaptation by learning. For efficiency, especially to reduce the complexity of graph spectral decomposition and clustering, we partition the volume domain, and hence the large joint graph, into subgraphs with overlapping buffer zones, exploiting the limited displacement between medical images. The partitioned co-segmentations are then fused into a global co-segmentation.
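The pipeline described above (per-voxel features from a weighted intensity stencil plus a local gradient, a voxel-voxel similarity, a joint graph over all images, and spectral clustering) can be sketched in miniature. This is a hypothetical simplification, not the authors' implementation: the feature here is reduced to a Gaussian-weighted cubelet mean plus a scaled gradient magnitude, the similarity bandwidth `sigma` is fixed by hand rather than learned, and the two-way cut uses the sign of the Fiedler vector of a dense normalized Laplacian (practical at this toy scale only).

```python
import numpy as np

def voxel_features(vol, sigma_w=0.5, grad_scale=0.05):
    """Per-voxel feature: Gaussian-weighted mean intensity of the 3x3x3
    cubelet centered at the voxel, plus a scaled gradient magnitude.
    A simplified stand-in for the weighted-stencil + gradient features
    described in the abstract."""
    pad = np.pad(vol.astype(float), 1, mode='edge')
    offs = [(dz, dy, dx) for dz in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         for dx in (-1, 0, 1)]
    w = np.array([np.exp(-(dz*dz + dy*dy + dx*dx) / (2.0 * sigma_w**2))
                  for dz, dy, dx in offs])
    gz, gy, gx = np.gradient(vol.astype(float))
    gmag = np.sqrt(gz**2 + gy**2 + gx**2)
    feats = []
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                stencil = np.array([pad[z+1+dz, y+1+dy, x+1+dx]
                                    for dz, dy, dx in offs])
                feats.append([np.dot(w, stencil) / w.sum(),
                              grad_scale * gmag[z, y, x]])
    return np.asarray(feats)

def cosegment(vols, sigma=2.0):
    """Joint graph over the voxels of all volumes, Gaussian similarity,
    and a 2-way spectral cut; sigma is a data-dependent bandwidth."""
    F = np.vstack([voxel_features(v) for v in vols])
    D2 = ((F[:, None, :] - F[None, :, :])**2).sum(-1)
    W = np.exp(-D2 / (2.0 * sigma**2))            # voxel-voxel similarity
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    Dq = np.diag(1.0 / np.sqrt(d))
    Ln = np.eye(len(d)) - Dq @ W @ Dq             # normalized Laplacian
    _, vecs = np.linalg.eigh(Ln)                  # ascending eigenvalues
    labels = (vecs[:, 1] > 0).astype(int)         # sign of Fiedler vector
    segs, i = [], 0
    for v in vols:                                # split joint labels back
        segs.append(labels[i:i + v.size].reshape(v.shape))
        i += v.size
    return segs
```

Because all voxels share one joint graph, corresponding structures in the two volumes receive the same cluster label automatically; this is the co-segmentation property the abstract exploits for structure tracking across respiratory phases.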
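The final partition-and-fuse step (subgraphs with overlapping buffer zones, fused into a global result) can also be sketched. This again is a hypothetical illustration, not the authors' method: it partitions a single volume along one axis into slabs with a one-slice halo, segments each slab independently with a caller-supplied function, and resolves the arbitrary per-slab binary cluster ids by majority agreement in the shared buffer zone.

```python
import numpy as np

def fuse_cosegmentations(vol, segment_fn, block=4, halo=1):
    """Partition vol along z into slabs of `block` slices with `halo`
    overlap slices, segment each slab independently (binary labels with
    arbitrary per-slab polarity), and fuse: a slab's labels are flipped
    when they disagree with the already-fused labels in the buffer zone."""
    Z = vol.shape[0]
    fused = -np.ones(vol.shape, dtype=int)
    for z0 in range(0, Z, block):
        lo, hi = max(0, z0 - halo), min(Z, z0 + block + halo)
        lab = segment_fn(vol[lo:hi])
        if lo < z0:
            prev = fused[lo:z0]            # labels already fixed here
            cur = lab[:z0 - lo]            # this slab's view of the overlap
            if np.mean(prev == cur) < 0.5:
                lab = 1 - lab              # align arbitrary cluster ids
        fused[z0:hi] = lab[z0 - lo:]
    return fused
```

The buffer zone serves exactly the role the abstract assigns it: it gives adjacent subgraphs a shared region in which their independent clusterings can be reconciled into one consistent global labeling.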

Results: In our experiments, automatic co-segmentation of two respiratory phases of 4D-CT image pairs correctly produces segmentations of both images along with their correspondence, outperforming conventional methods in segmentation quality and efficiency.

Conclusion: The proposed co-segmentation method has the potential to become a basic component that enhances and enables many imaging processes, such as automatic contouring, (deformable) registration, tracking, anatomical structure recognition, and anatomical atlas matching or generation.
