2018 AAPM Annual Meeting

Session Title: Convolutional Neural Nets (Session 3 of the Certificate Series)
Question 1: While 1x1 convolution kernels might seem counterintuitive at first as they do not operate over a patch of the image, they are potentially useful because they:
Reference:Lin, Min, Qiang Chen, and Shuicheng Yan. 2013. “Network In Network.” arXiv [cs.NE]. arXiv. http://arxiv.org/abs/1312.4400
Choice A:Allow integrating multiple "mini-neural networks" in the network.
Choice B:Offer an inexpensive way to make networks deeper.
Choice C:Are not really convolutions but rather matrix multiplications and therefore contain very few parameters.
Choice D:All of the above.
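To see why a 1x1 convolution is still useful, note that it is a per-pixel matrix multiplication across channels, which is what lets it cheaply mix and compress feature maps. A minimal numpy sketch (shapes and weights are illustrative, not from the reference):

```python
import numpy as np

# A 1x1 convolution mixes channels at each spatial location independently:
# it is equivalent to multiplying every pixel's channel vector by one
# shared weight matrix.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))   # H x W x C_in feature map
w = rng.standard_normal((16, 4))      # C_in x C_out 1x1 kernel weights

# "Convolve" with a 1x1 kernel: a per-pixel matrix multiply.
y = x @ w                             # H x W x C_out

# The same result via an explicit loop over spatial positions.
y_loop = np.empty((8, 8, 4))
for i in range(8):
    for j in range(8):
        y_loop[i, j] = w.T @ x[i, j]

assert np.allclose(y, y_loop)
```

Because the kernel has only C_in x C_out weights, stacking such layers deepens the network at little parameter cost.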
Question 2: The inception module requires tuning fewer hyperparameters because it composes multiple convolution filter sizes in parallel instead of committing to a single size:
Reference:Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. “Going Deeper with Convolutions.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1–9. http://openaccess.thecvf.com/CVPR2015.py
Choice A:True.
Choice B:False.
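The inception idea can be sketched in one dimension: rather than choosing a single filter size, apply several in parallel and concatenate their outputs along the channel axis. The filter sizes and averaging kernels below are illustrative assumptions, not the module's actual filters:

```python
import numpy as np

# Two parallel "branches" with different receptive-field sizes.
x = np.arange(10, dtype=float)

k3 = np.ones(3) / 3.0    # analogue of a 3x3 branch
k5 = np.ones(5) / 5.0    # analogue of a 5x5 branch

branch3 = np.convolve(x, k3, mode="same")
branch5 = np.convolve(x, k5, mode="same")

# Concatenate branch outputs as channels; the next layer sees both,
# so the "which filter size?" choice is learned rather than hand-tuned.
out = np.stack([branch3, branch5], axis=-1)   # shape (10, 2)
```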
Question 3: What do generative adversarial networks (GANs) comprise?
Reference:Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www.deeplearningbook.org
Choice A:A single generative network
Choice B:A generative and a discriminative network
Choice C:An autoencoder network with skip connections
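By definition a GAN pairs a generator, which maps noise to samples, with a discriminator, which scores samples as real or fake. A minimal untrained sketch with illustrative placeholder weights (no training loop shown):

```python
import numpy as np

rng = np.random.default_rng(0)

W_g = rng.standard_normal((2, 4))   # generator weights: noise -> sample
W_d = rng.standard_normal((4, 1))   # discriminator weights: sample -> score

def generator(z):
    # Maps a noise vector to a fake sample in [-1, 1]^4.
    return np.tanh(z @ W_g)

def discriminator(x):
    # Maps a sample to a probability that it is "real".
    return 1.0 / (1.0 + np.exp(-(x @ W_d)))

z = rng.standard_normal((3, 2))     # batch of noise vectors
fake = generator(z)
score = discriminator(fake)
```

During training the two networks are optimized adversarially: the generator tries to raise the discriminator's score on fakes while the discriminator tries to lower it.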
Question 4: Unsupervised learning neural networks include which of the following:
Reference:Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. Curran Associates, Inc.
Choice A:Autoencoders
Choice B:Variational autoencoders
Choice C:Generative adversarial networks
Choice D:All of the above.
Question 5: Residual connections involve skipping multiple layers and are used to make deep neural networks easier to optimize.
Reference:He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–78.
Choice A:True.
Choice B:False.
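A residual block adds its input back to the output of a small sub-network, so the block only has to learn a residual f(x) and the identity mapping is trivial to represent. A sketch with illustrative weights (zeroed here to make the identity case explicit):

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((4, 4))                 # zero weights -> f(x) == 0

def residual_block(x):
    f = np.maximum(x @ W, 0.0)       # a tiny ReLU sub-network
    return x + f                     # skip connection adds the input back

x = rng.standard_normal((2, 4))
y = residual_block(x)

# With zero weights the block is exactly the identity mapping,
# which is what makes very deep stacks of such blocks easy to optimize.
assert np.allclose(y, x)
```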
Question 6: Which of the following gives non-linearity to a neural network?
Reference:Hahnloser, R. H., R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. 2000. “Digital Selection and Analogue Amplification Coexist in a Cortex-Inspired Silicon Circuit.” Nature 405 (6789): 947–51.
Choice A:Stochastic Gradient Descent.
Choice B:Rectified Linear Unit.
Choice C:Convolution function.
Choice D:None of the above.
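The reason an activation such as the rectified linear unit is needed: stacking purely linear layers collapses to a single linear map, while interleaving a ReLU does not. A sketch with illustrative random matrices:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal((5, 3))

# Two linear layers equal one linear layer with matrix A @ B ...
linear_stack = (x @ A) @ B

# ... but inserting a ReLU between them breaks that collapse,
# giving the network genuine non-linear capacity.
nonlinear_stack = relu(x @ A) @ B

assert np.allclose(linear_stack, x @ (A @ B))
```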
Question 7: In a neural network, which of the following techniques is used to deal with overfitting?
Reference:Lemley, Joseph, Shabab Bazrafkan, and Peter Corcoran. 2017. “Smart Augmentation Learning an Optimal Data Augmentation Strategy.” IEEE Access 5. IEEE: 5858–69.
Choice A:Dropout.
Choice B:Augmentation.
Choice C:Batch Normalization.
Choice D:All of these.
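Of the listed remedies, dropout is the simplest to sketch: during training, randomly zero a fraction p of activations and rescale the survivors so the expected activation is unchanged (the "inverted dropout" convention). The rate p and input below are illustrative:

```python
import numpy as np

def dropout(x, p, rng):
    # Bernoulli keep-mask: each unit survives with probability 1 - p.
    keep = rng.random(x.shape) >= p
    # Rescale kept units by 1 / (1 - p) so E[output] == x.
    return np.where(keep, x / (1.0 - p), 0.0)

rng = np.random.default_rng(0)
x = np.ones(1000)
y = dropout(x, p=0.5, rng=rng)

# Roughly half the units are zeroed; the mean stays near 1 in expectation.
```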
Question 8: What is the purpose of an autoencoder?
Reference:Leyli-Abadi, Milad, Lazhar Labiod, and Mohamed Nadif. 2017. “Denoising Autoencoder as an Effective Dimensionality Reduction and Clustering of Text Data.” In Advances in Knowledge Discovery and Data Mining, 801–13. Springer International Publishing.
Choice A:Nonlinear dimensionality reduction.
Choice B:Data denoising.
Choice C:Cross-validation.
Choice D:A & B.
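The bottleneck is what gives an autoencoder both uses: the encoder compresses the input to a low-dimensional code (dimensionality reduction), and reconstructing from that code discards noise (denoising). A minimal linear sketch with untrained, illustrative weights; in practice both maps are learned by minimizing reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((8, 2))  # encoder: 8-D input -> 2-D code
W_dec = rng.standard_normal((2, 8))  # decoder: 2-D code -> 8-D output

x = rng.standard_normal((4, 8))      # batch of 8-D inputs
code = x @ W_enc                     # reduced representation (bottleneck)
recon = code @ W_dec                 # reconstruction from the code
```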
Question 9: The problem you are trying to solve has a small amount of data. You have access to a pre-trained neural network that was trained on a similar problem. Which of the following methodologies would you choose to make use of this pre-trained network?
Reference:Erickson, Bradley J., Panagiotis Korfiatis, Timothy L. Kline, Zeynettin Akkus, Kenneth Philbrick, and Alexander D. Weston. 2018. “Deep Learning in Radiology: Does One Size Fit All?” Journal of the American College of Radiology: JACR 15 (3 Pt B): 521–26.
Choice A:Re-train the model for the new dataset.
Choice B:Assess on every layer how the model performs and only select a few of them.
Choice C:Fine-tune the last couple of layers only.
Choice D:Freeze all the layers except the last, re-train the last layer.
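The freeze-and-retrain recipe in choice D can be sketched as fitting only the final linear layer on top of fixed pre-trained features; here the "pre-trained" weights and the small new dataset are synthetic stand-ins, and the last layer is fit by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((10, 5))   # pre-trained layers (kept fixed)

X_new = rng.standard_normal((20, 10))     # small new dataset
y_new = rng.standard_normal((20, 1))

# Pass the new data through the frozen feature extractor.
features = np.maximum(X_new @ W_frozen, 0.0)

# Re-train only the last layer: solve min ||features @ w - y_new||^2.
w_last, *_ = np.linalg.lstsq(features, y_new, rcond=None)
pred = features @ w_last
```

With few new samples, fitting only these last-layer weights avoids overfitting the much larger frozen portion of the network.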
Question 10: Which of the following activation functions cannot be used at the output layer to classify an image?
Reference:Erickson, Bradley J., Panagiotis Korfiatis, Timothy L. Kline, Zeynettin Akkus, Kenneth Philbrick, and Alexander D. Weston. 2018. “Deep Learning in Radiology: Does One Size Fit All?” Journal of the American College of Radiology: JACR 15 (3 Pt B): 521–26.
Choice A:Sigmoid
Choice B:Tanh
Choice C:ReLU
Choice D:Softmax
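The contrast the question relies on: softmax (and sigmoid or tanh, suitably interpreted) squashes scores into a bounded range usable as class probabilities, whereas ReLU outputs are unbounded and need not sum to 1. A sketch with illustrative scores:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift by the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # raw class scores (logits)

probs = softmax(scores)              # a valid probability distribution
relu_out = np.maximum(scores, 0.0)   # unbounded; not a distribution
```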