2022 AAPM 64th Annual Meeting

Session Title: Artificial Intelligence for QA
Question 1: Which of the following QA procedures is least effective at identifying errors?
Reference: Ford et al., "Quality Control Quantification (QCQ): A Tool to Measure the Value of Quality Control Checks in Radiation Oncology", Int. J. Radiat. Oncol. Biol. Phys. 84(3), 263-269, 2012
Choice A: Physicist Initial Chart Check
Choice B: Therapist Chart Review
Choice C: Pre-treatment PSQA
Choice D: Chart Rounds
Question 2: In addition to ion chamber measurements, machine learning models have been developed for which of the following PSQA methods?
Reference: Chan et al., "Integration of AI and machine learning in radiotherapy QA", Front. Artif. Intell. 3, 2020
Choice A: ArcCHECK
Choice B: MapCHECK
Choice C: EPID
Choice D: All of the above
Question 3: What are the recommended check items in photon/electron EBRT initial plan/chart review for medical physicists?
Reference: Ford et al., "Strategies for effective physics plan and chart review in radiation therapy: Report of AAPM Task Group 275", Medical Physics 47(6), e236-e272, 2020; Xia et al., "Medical Physics Practice Guideline (MPPG) 11.a: Plan and chart review in external beam radiotherapy and brachytherapy", Journal of Applied Clinical Medical Physics 22(9), 4-19, 2021
Choice A: Prescription (total dose, fractionation pattern, modality, technique, etc.)
Choice B: Isocenter (isocenter/initial reference point matched with tattoos, shifts, multiple isocenters)
Choice C: Plan quality and dose distribution
Choice D: Treatment technique, beam arrangement, and deliverability
Choice E: All of the above
Question 4: Which of the following is an improvement required to support the implementation of AI for physics plan/chart review?
Reference: Kalet et al., "Radiation Therapy Quality Assurance Tasks and Tools: The Many Roles of Machine Learning", Medical Physics 47(5), e168-e177, 2020; Luk et al., "Improving the Quality of Care in Radiation Oncology using Artificial Intelligence", Clinical Oncology 34(2), 89-98, 2021
Choice A: Standardization of data content, data format, data structure, and nomenclature
Choice B: Model generalizability and interpretability
Choice C: Independent quality assurance procedures for AI tools
Choice D: All of the above
Question 5: While thousands of studies of deep learning algorithm performance have been published, only a few algorithms have found clinical adoption. Medical physicists are at the forefront of clinical implementation and rely on their own judgment as well as task group reports and other guidelines. Although task groups are actively working on the topic, as of May 2022, how many TG reports can we reference?
Reference: Benjamin H. Kann, Ahmed Hosny, Hugo J.W.L. Aerts, "Artificial intelligence for clinical oncology", Cancer Cell, Volume 39, Issue 7, 2021; https://www.aapm.org/pubs/reports/tabular.asp (accessed 05/03/2022)
Choice A: 0
Choice B: 1
Choice C: 2
Choice D: 3
Question 6: Metrics such as the Dice coefficient are used to quantify the performance of segmentation algorithms. With respect to this quantitative measure, which statement is not true?
Reference: Chen Chen, Chen Qin, Huaqi Qiu, et al., "Deep Learning for Cardiac Image Segmentation: A Review", Frontiers in Cardiovascular Medicine, Volume 7, 2020; Sanne G.M. van Velzen PhD, Steffen Bruns MSc, Jelmer M., et al., "AI-Based Quantification of Planned Radiation Therapy Dose to Cardiac Structures and Coronary Arteries in Patients With Breast Cancer", International Journal of Radiation Oncology, Biology, Physics, Volume 112, Issue 3, 2022
Choice A: The Dice coefficient ranges from 0 to 1, where 1 indicates a perfect segmentation.
Choice B: Quantitative measures should be used in conjunction with qualitative measures.
Choice C: The Dice coefficient is less appropriate for evaluation of small structures.
Choice D: A low Dice coefficient always indicates that the performance is clinically unacceptable.
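For reference, the Dice coefficient discussed in Question 6 is defined as Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks A and B. The following is a minimal NumPy sketch (function name and toy masks are illustrative, not from the cited papers) showing the computation and why the metric penalizes small structures more heavily: the same one-voxel misalignment costs a small structure far more Dice than a large one.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); ranges from 0 (no overlap)
    to 1 (perfect segmentation).
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: shift each structure by one voxel and compare.
# A large (6x6) structure retains most of its overlap, while a
# small (2x2) structure loses half of it -- illustrating Choice C.
big = np.zeros((10, 10), dtype=bool); big[2:8, 2:8] = True
small = np.zeros((10, 10), dtype=bool); small[4:6, 4:6] = True
big_shift = np.roll(big, 1, axis=0)
small_shift = np.roll(small, 1, axis=0)
```

With these toy masks, the one-voxel shift leaves the large structure with Dice ≈ 0.83 but drops the small structure to 0.5, even though the physical misalignment is identical.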