Interactive Segmentation of Structures in the Head and Neck Using Steerable Active Contours
I Kolesov1*, P Karasev1, N Shusharina2, P Vela3, A Tannenbaum1, G Sharp2, (1) Comprehensive Cancer Center, Birmingham, Alabama, (2) Massachusetts General Hospital, Boston, MA, (3) Georgia Institute of Technology, Atlanta, GA
TH-C-WAB-2 Thursday 10:30AM - 12:30PM Room: Wabash Ballroom
The purpose of this work is to investigate the performance of an interactive image segmentation method for radiotherapy contouring on computed tomography (CT) images. Manual segmentation is a time-consuming task that is essential for treatment. Because target structures have low contrast, resemble surrounding tissue, and must be delineated precisely, automatic methods do not perform robustly. Furthermore, when an automatic segmentation algorithm produces errors at the structure boundary, these errors are tedious for a human user to correct. For this experiment, it is hypothesized that an interactive algorithm can attain ground-truth results in a fraction of the time needed for manual segmentation.
The proposed method is an interactive segmentation approach that tightly couples a human expert user with a computer-vision framework known as "active contours" to form a closed-loop control system. As a result, the strengths of the automatic method (i.e., quickly delineating complicated target boundaries) can be leveraged by the user, who guides the algorithm with expert knowledge throughout the process. Experimental segmentations were performed both with and without control-system feedback; the accuracy of the resulting labels was compared, along with the time required to create them.
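To illustrate the idea of steering an active contour, the sketch below evolves a level-set function with a region-based data term (in the style of Chan-Vese) plus an additive user-steering term that nudges the contour toward voxels the expert has marked as object or background. This is a minimal illustrative sketch, not the authors' exact formulation: the function name, parameters, and the simple additive steering term are assumptions for demonstration.

```python
import numpy as np

def evolve_contour(phi, image, user_mask, dt=0.2, alpha=1.0, beta=2.0, n_iter=50):
    """Evolve a level-set function phi over an image.

    phi       : initial level-set function (object where phi > 0)
    image     : 2D intensity array
    user_mask : +1 where the user marked "object", -1 for "background",
                0 elsewhere (the closed-loop steering input; assumed form)
    """
    for _ in range(n_iter):
        inside = phi > 0
        # Mean intensities inside/outside the current contour (region statistics)
        c_in = image[inside].mean() if inside.any() else 0.0
        c_out = image[~inside].mean() if (~inside).any() else 0.0
        # Region force: positive where the pixel looks more like the interior
        region = (image - c_out) ** 2 - (image - c_in) ** 2
        # Combine automatic region force with the user's steering input
        phi = phi + dt * (alpha * region + beta * user_mask)
        phi = np.clip(phi, -5.0, 5.0)  # keep phi numerically bounded
    return phi
```

In a closed-loop session, the user would repeatedly inspect the current contour, add or adjust marks in `user_mask`, and rerun a few iterations, so the algorithm handles the bulk of the boundary while the expert corrects it where the data term fails.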
Four structures were evaluated: the left and right eyeballs, the brain stem, and the mandible. Tests show that virtually identical segmentations are produced with and without control-system feedback; however, the task is completed in significantly less time than is needed for fully manual contouring.
Interactive segmentation using control-system feedback is shown to reduce the time and effort needed to segment targets in CT volumes of the head and neck region.
Funding Support, Disclosures, and Conflict of Interest: This project was supported by grants from the National Center for Research Resources (P41-RR-013218) and the National Institute of Biomedical Imaging and Bioengineering (P41-EB-015902) of the National Institutes of Health. This work was also supported by the NIH grant R01 MH82918 as well as a grant from AFOSR. This work is part of the National Alliance for Medical Image Computing (NAMIC), funded by the National Institutes of Health through the NIH Roadmap for Medical Research, Grant U54 EB005149. Information on the National Centers for Biomedical Computing can be obtained from http://nihroadmap.nih.gov/bioinformatics.