2021 AAPM Virtual 63rd Annual Meeting

Session Title: AI in Image Guided Radiation Therapy
Question 1: Potential advantages of the neural network-based statistical learning paradigm (i.e., modern deep learning-based CT image reconstruction) DO NOT include:
Reference: 1. Yinsheng Li et al., "Learning to Reconstruct Computed Tomography," IEEE Transactions on Medical Imaging, vol. 38(10): 2469-2481 (2019). 2. Ge Wang et al., "Deep learning for tomographic image reconstruction," Nature Machine Intelligence, vol. 2: 737-748 (2020).
Choice A:Relaxed data acquisition conditions
Choice B:Reduced reconstruction time relative to the model-based iterative reconstruction methods
Choice C:Universal applicability
Choice D:Forgiving noise level in acquired data
Question 2: The statistical learning method is the foundation of model-based iterative CT reconstruction.
Reference: 1. Yinsheng Li et al., "Learning to Reconstruct Computed Tomography," IEEE Transactions on Medical Imaging, vol. 38(10): 2469-2481 (2019). 2. Ge Wang et al., "Deep learning for tomographic image reconstruction," Nature Machine Intelligence, vol. 2: 737-748 (2020).
Choice A:False
Choice B:True
Question 3: What is an advantage of the deep learning-based methods for image quality augmentation?
Reference:1. Sahiner, Berkman, et al. "Deep learning in medical imaging and radiation therapy." Medical physics 46.1 (2019): e1-e36. 2. Jiang, Zhuoran, et al. "Augmentation of CBCT Reconstructed From Under-Sampled Projections Using Deep Learning." IEEE transactions on medical imaging 38.11 (2019): 2705-2715.
Choice A:Recovers anatomical structures that are completely lost in the input low-quality images.
Choice B:Eliminates the need for high-quality ground-truth images in the training data.
Choice C:Has a very short prediction time, making it applicable for clinical usage.
Choice D:Has very short training time (around seconds), making it fast to be retrained for different patient cohorts.
Question 4: 4D-CBCT has much poorer image quality than 3D-CBCT due to which aspect of the data collection process?
Reference:Jiang, Zhuoran, et al. "Augmentation of CBCT Reconstructed From Under-Sampled Projections Using Deep Learning." IEEE transactions on medical imaging 38.11 (2019): 2705-2715.
Choice A:Scatter artifacts
Choice B:Beam hardening artifacts
Choice C:Ring artifacts
Choice D:Metal artifacts
Choice E:Undersampling of cone beam projections
Question 5: Compared to CBCT, why does digital tomosynthesis (DTS) have much poorer image quality with limited volumetric information?
Reference: Godfrey, Devon J., et al. "Digital tomosynthesis with an on-board kilovoltage imaging device." International Journal of Radiation Oncology Biology Physics 65.1 (2006): 8-15.
Choice A:Limitation of the reconstruction algorithm
Choice B:Limited depth information acquired in the limited angle scan
Choice C:Scatter is more severe in DTS than in CBCT
Choice D:Cone beam geometry of the acquisition
Choice E:More motion artifacts in DTS than in CBCT
Question 6: Which of the following statements is NOT correct regarding the loss functions of deep-learning-based registration methods:
Reference:Fu et al., Deep learning in medical image registration: a review. Phys Med Biol. 2020 Oct 22;65(20):20TR01. doi: 10.1088/1361-6560/ab843e. PMID: 32217829; PMCID: PMC7759388. (Transformation error loss was the error between predicted and ground truth transformations, which was only valid for supervised transformation prediction.)
Choice A:Deep similarity-based image appearance loss usually calculates the correlation between the learnt feature-based image descriptors.
Choice B:Transformation smoothness constraints usually involve the calculation of the first and second orders of spatial derivatives of predicted transformation.
Choice C:Transformation error loss was the error between predicted and ground truth transformations, which was only valid for unsupervised transformation prediction.
Choice D:Transformation physical fidelity loss includes inverse consistency loss, negative Jacobian determinant loss, identity loss, anti-folding loss and so on.
Question 7: Which of the following statements is NOT correct regarding the challenges and opportunities of deep-learning-based registration methods:
Reference:Fu et al., Deep learning in medical image registration: a review. Phys Med Biol. 2020 Oct 22;65(20):20TR01. doi: 10.1088/1361-6560/ab843e. PMID: 32217829; PMCID: PMC7759388. (For unsupervised methods, efforts were made to combine different kinds of regularization terms to constrain the predicted transformation. However, it is difficult to investigate the relative importance of each regularization term.)
Choice A:One of the most common challenges for supervised deep-learning-based methods is the lack of training datasets with known transformations.
Choice B:For supervised methods, efforts were made to combine different kinds of regularization terms to constrain the predicted transformation. However, it is difficult to investigate the relative importance of each regularization term.
Choice C:Due to the unavailability of ground truth transformation between an image pair, it is hard to compare the performances of different registration methods.
Choice D:Generative adversarial network (GAN)-based methods have been gradually gaining popularity since GAN could be used to not only introduce additional regularizations but also perform image domain translation to cast multi-modal to unimodal image registration.
Question 8: Real-time motion monitoring is important because:
Reference:Bertholet J, Knopf A, Eiben B, McClelland J, Grimwood A, Harris E, Menten M, Poulsen P, Nguyen DT, Keall P, Oelfke U. Real-time intrafraction motion monitoring in external beam radiotherapy. Phys Med Biol. 2019 Aug 7;64(15):15TR01. doi: 10.1088/1361-6560/ab2ba8. PMID: 31226704; PMCID: PMC7655120.
Choice A:The treatment plan is typically very conformal
Choice B:The target can move out of the treatment field during irradiation
Choice C:Daily IGRT is not accurate
Question 9: Which of the following is NOT true regarding AI solutions for intrafraction motion monitoring?
Reference:Siddique S, Chow JCL. Artificial intelligence in radiotherapy. Rep Pract Oncol Radiother. 2020 Jul-Aug;25(4):656-666. doi: 10.1016/j.rpor.2020.03.015. Epub 2020 May 6. PMID: 32617080; PMCID: PMC7321818. Mylonas A, Keall PJ, Booth JT, Shieh CC, Eade T, Poulsen PR, Nguyen DT. A deep learning framework for automatic detection of arbitrarily shaped fiducial markers in intrafraction fluoroscopic images. Med Phys. 2019 May;46(5):2286-2297. doi: 10.1002/mp.13519. Epub 2019 Apr 15. PMID: 30929254.
Choice A:AI is not suited to this task because it must be performed in near real time, while AI solutions are slow.
Choice B:AI can be used to overcome the constraints of low image contrast and detect targets on fluoroscopic X-ray images even without markers.
Choice C:AI solutions are challenging to implement because they require a large amount of labeled data.
Question 10: An ideal real-time target motion monitoring system is:
Reference:Keall PJ, Nguyen DT, O'Brien R, et al. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?. Int J Radiat Oncol Biol Phys. 2018;102(4):922-931. doi:10.1016/j.ijrobp.2018.04.016
Choice A:Precise (low standard deviation of error), Accurate (low mean of error), fast, requires fiducial markers, and integrated into a standard linear accelerator.
Choice B:Precise (low standard deviation of error), Accurate (low mean of error), fast, requires no fiducial markers, and integrated into a standard linear accelerator.
Choice C:Precise (low standard deviation of error), Accurate (low mean of error), fast, and integrated with an MRI-linac.