
An AAPM Grand Challenge


The American Association of Physicists in Medicine (AAPM) is sponsoring a “Grand Challenge” on deep learning for image reconstruction leading up to the 2021 AAPM Annual Meeting. The DL-sparse-view CT Challenge will give investigators working on data-driven CT image reconstruction an opportunity to compete with their colleagues on the accuracy of their methods for solving the inverse problem associated with sparse-view CT acquisition. A session at the 2021 AAPM Annual Meeting will focus on the DL-sparse-view CT Challenge; an individual from each of the two top-performing teams will receive a waiver of the meeting registration fee in order to present their methods during this session. Following the Annual Meeting, challenge participants from the five top-performing teams will be invited to contribute to a challenge report.


Background information on the DL-sparse-view CT Challenge can be found in the article “Do CNNs solve the CT inverse problem?” [1], which spells out the evidence necessary to support the claim that data-driven techniques such as deep learning with CNNs solve the CT inverse problem. Recent literature [2,3,4] claims that CNNs can solve inverse problems related to medical image reconstruction. In particular, references [2,4] claim that CNNs solve a specific inverse problem that arises in sparse-view X-ray CT. These papers and other related work have gained widespread attention, and hundreds of papers building on this approach have followed. Evidence for solving the CT inverse problem can take the form of numerical simulations in which a simulated test image is recovered from its ideal projection data (i.e., with no noise or other data inconsistencies). In Ref. [1], such experiments were attempted using our best guess at implementing the methodology of Refs. [2,4]. While the CNN results achieved a certain level of accuracy, they fall short of providing evidence for solving the associated inverse problem.

We do, however, acknowledge that there has been much development in this field over the past few years and it stands to reason that there are likely data-driven approaches superior to the one that we implemented. The purpose of this challenge is to identify the state-of-the-art in solving the CT inverse problem with data-driven techniques. The challenge seeks the data-driven methodology that provides the most accurate reconstruction of sparse-view CT data.

[1] E. Y. Sidky, I. Lorente, J. G. Brankov, and X. Pan, “Do CNNs solve the CT inverse problem?”, IEEE Trans. Biomed. Engineering (early access: https://doi.org/10.1109/TBME.2020.3020741), 2020. Also available at: https://arxiv.org/abs/2005.10755
[2] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Proc., vol. 26, pp. 4509–4522, 2017.
[3] B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” Nature, vol. 555, pp. 487–492, 2018.
[4] Y. Han and J. C. Ye, “Framing U-Net via deep convolutional framelets: Application to sparse-view CT,” IEEE Trans. Med. Imag., vol. 37, pp. 1418–1429, 2018.


The overall objective of the DL-sparse-view CT challenge is to determine which deep-learning (or data-driven) technique provides the most accurate recovery of a test phantom from ideal 128-view projection data with no noise. To this end, we will provide 4000 data/image pairs based on a 2D breast CT simulation that are to be used for training the algorithm. How these 4000 pairs are split into training and validation sets is up to the individual participating teams. After the training period is over, testing data will be provided for 100 cases without the corresponding ground truth images. Participants will submit their reconstructed images for these testing cases.
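How the 4,000 pairs are divided between training and validation is left to each team. One possible split is sketched below; the 90/10 ratio and the fixed random seed are arbitrary choices for illustration, not anything mandated by the challenge.

```python
import numpy as np

# Hypothetical sketch: split the 4000 provided data/image pairs into
# training and validation subsets by shuffling the pair indices.
rng = np.random.default_rng(seed=0)  # fixed seed for a reproducible split
indices = rng.permutation(4000)      # shuffled pair indices 0..3999

n_val = 400                          # hold out 10% for validation
val_idx = indices[:n_val]
train_idx = indices[n_val:]

# Sanity checks: the two subsets are disjoint and cover all 4000 pairs.
assert len(train_idx) == 3600 and len(val_idx) == 400
assert len(set(train_idx) | set(val_idx)) == 4000
```

Holding out a fixed validation subset lets a team estimate test-set RMSE before the single allowed submission.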

Get Started

  1. Register to get access via the Challenge website
  2. Download the data/image pairs after approval
  3. Train your image reconstruction algorithm
  4. Download the testing data
  5. Submit your results

Important Dates

Challenge Data

Figure 1

The details of how the simulation images and data are generated are provided in Ref. [1]. The 512x512 pixel training images are from a breast phantom simulation with complex random structure modeling fibroglandular tissue in a circular cross-section of a breast model (see Figure 1). The challenge simulation differs from Ref. [1] in two respects: a variable number of microcalcification-like objects of variable amplitude are inserted into the phantom, and the scan configuration is fan-beam instead of parallel-beam. We will generate 4,000 new training images, each accompanied by 128-view fan-beam projection data (a sinogram) acquired over a 360-degree scan and by an image generated with filtered back-projection (FBP) applied to the 128-view data.
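The sparse angular sampling described above can be made concrete with a short sketch: 128 fan-beam views equally spaced over a full 360-degree rotation. Detector geometry and source distances are not specified here; those parameters would come from the released data.

```python
import numpy as np

# Sketch of the sparse-view sampling geometry: 128 fan-beam views
# equally spaced over 360 degrees, with no duplicate view at 360
# (endpoint=False excludes it, since it would coincide with 0 degrees).
n_views = 128
angles_deg = np.linspace(0.0, 360.0, n_views, endpoint=False)

# The angular increment between consecutive views is 360/128 = 2.8125 deg.
assert np.isclose(angles_deg[1] - angles_deg[0], 2.8125)
assert angles_deg[-1] < 360.0
```

The coarse 2.8125-degree spacing, versus the sub-degree spacing of a clinical scan, is what produces the streak artifacts that the data-driven reconstruction must overcome.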

DL investigators can choose whether to take an end-to-end approach (128-view sinogram to image), an artifact-removal approach (128-view FBP image to image), or some combination of the two.
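The two framings differ only in what the learned model consumes. The sketch below illustrates the input/output shapes; the detector size of 1024 channels is an assumption (the true size is set by the released data), and the "models" are placeholder functions standing in for trained networks.

```python
import numpy as np

N_VIEWS, N_DET, N_PIX = 128, 1024, 512  # N_DET is an assumed detector size

def reconstruct_end_to_end(sinogram, model):
    """End-to-end framing: sinogram (128 x n_det) -> image (512 x 512)."""
    return model(sinogram)

def remove_artifacts(fbp_image, model):
    """Artifact-removal framing: learn a residual correction to the
    streaky 128-view FBP image (a common residual-learning setup)."""
    return fbp_image + model(fbp_image)

# Placeholder "models" so the sketch runs; a real entry would use a
# trained network (e.g., a CNN) in place of these lambdas.
sino_model = lambda s: np.zeros((N_PIX, N_PIX))
residual_model = lambda x: np.zeros_like(x)

img1 = reconstruct_end_to_end(np.zeros((N_VIEWS, N_DET)), sino_model)
img2 = remove_artifacts(np.zeros((N_PIX, N_PIX)), residual_model)
assert img1.shape == img2.shape == (512, 512)
```

A combined approach would chain the two: a sinogram-domain network followed by back-projection and an image-domain refinement network.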

In the training phase, the DL investigators will have access to the 4,000 training images together with the corresponding sinograms and FBP images. In the testing phase, 100 new data/image pairs will be generated, and participants will be given only the data in the form of the 128-view sinograms and FBP images. Rankings of the DL algorithms will be based purely on root-mean-square error (RMSE) over the 100 testing images. During the training phase, a single validation data set will be released for the purpose of creating a leaderboard, which will be visible from March 17th to May 17th, the day the testing set is released. Final test results may not reflect the leaderboard ranking, because the final ranking depends on RMSE performance over the full 100-image test set. Teams may submit only one set of results, and the submission is expected to be accompanied by a technical note describing the DL algorithm. The top two performers will be expected to participate in preparing a publication that analyzes and summarizes the results and provides a description of the methodology.

Quantitative evaluation

Submitted reconstructed images will be evaluated by computing the RMSE between each reconstruction and its ground-truth image, along with a worst-case region-of-interest (ROI) RMSE.

Mean RMSE will be the primary metric determining the algorithm ranking. In case of a numerical tie (to be judged from the distribution of mean-RMSE results), the worst-case ROI-RMSE will be used to rank the algorithms.
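The two metrics follow directly from their definitions; the sketch below assumes plain per-image RMSE and a maximum over square ROIs, while the official ROI placement is not specified here and would be defined by the organizers.

```python
import numpy as np

def rmse(recon, truth):
    """Root-mean-square error between a reconstruction and ground truth."""
    return np.sqrt(np.mean((recon - truth) ** 2))

def worst_case_roi_rmse(recon, truth, rois):
    """Maximum RMSE over a set of regions of interest.
    rois: list of (row_slice, col_slice) pairs defining rectangular regions."""
    return max(rmse(recon[r, c], truth[r, c]) for r, c in rois)

# A uniform error of 0.1 gives RMSE 0.1 both globally and in every ROI.
truth = np.zeros((512, 512))
recon = truth + 0.1
rois = [(slice(0, 64), slice(0, 64)), (slice(100, 164), slice(200, 264))]

assert np.isclose(rmse(recon, truth), 0.1)
assert np.isclose(worst_case_roi_rmse(recon, truth, rois), 0.1)
```

The worst-case ROI-RMSE rewards algorithms whose errors are uniformly small, rather than small on average but large near localized features such as the inserted microcalcifications.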

Results, prizes and publication plan

At the conclusion of the challenge, the following information will be provided to each participant:

The top 2 participants:

A manuscript summarizing the challenge results will be submitted for publication following completion of the challenge.

Terms and Conditions

The DL-sparse-view CT challenge is organized in the spirit of cooperative scientific progress. The following rules apply to those who register a team and download the data:

Organizers and Major Contributors


For further information, please contact the lead organizer, Emil Sidky.