Abstract
Deep learning models achieve state-of-the-art results in a wide array of medical imaging problems. Yet the lack of interpretability of deep neural networks (DNNs) is a primary concern for medical practitioners and poses a considerable barrier to the deployment of such models in clinical practice. Several techniques have been developed for visualizing the decision process of DNNs, but few implementations are openly available for the popular PyTorch library, and existing implementations are often limited to two-dimensional data and classification models. We present M3d-CAM, an easy-to-use library for generating attention maps of CNN-based PyTorch models for both 2D and 3D data, applicable to classification as well as segmentation models. The attention maps can be generated with multiple methods: Guided Backpropagation, Grad-CAM, Guided Grad-CAM, and Grad-CAM++. The maps visualize the regions of the input data that most heavily influence the model prediction at a given layer. A single line of code is sufficient to generate attention maps for a model, making M3d-CAM a plug-and-play solution that requires minimal prior knowledge.
References
Hooker S, Erhan D, Kindermans PJ, et al. A benchmark for interpretability methods in deep neural networks. In: Advances in Neural Information Processing Systems; 2019. p. 9737–9748.
Huang X, Kroening D, Ruan W, et al. A survey of safety and trustworthiness of deep neural networks: verification, testing, adversarial attack and defence, and interpretability. Computer Science Review. 2020;37:100270.
Xu F, Uszkoreit H, Du Y, et al. Explainable AI: a brief survey on history, research areas, approaches and challenges. In: CCF International Conference on Natural Language Processing and Chinese Computing. Springer; 2019. p. 563–574.
Paszke A, Gross S, Massa F, et al. PyTorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems; 2019. p. 8024–8035.
Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. Proc IEEE ICCV. 2017; p. 618–626.
Springenberg JT, Dosovitskiy A, Brox T, et al. Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806. 2014.
Chattopadhay A, Sarkar A, Howlader P, et al. Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE; 2018. p. 839–847.
Wang L, Lin ZQ, Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images; 2020.
Fan DP, Zhou T, Ji GP, et al. Inf-Net: automatic COVID-19 lung infection segmentation from CT images. IEEE TMI. 2020; p. 2626–2637.
Isensee F, Petersen J, Klein A, et al. Abstract: nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation. In: Handels H, Deserno TM, Maier A, et al., editors. Bildverarbeitung für die Medizin 2019; 2019. p. 22–22.
Cohen JP, Morrison P, Dao L. COVID-19 image data collection. arXiv preprint arXiv:2003.11597. 2020. Available from: https://github.com/ieee8023/covid-chestxray-dataset.
Simpson AL, Antonelli M, Bakas S, et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms; 2019.
Copyright information
© 2021 The Author(s), under exclusive licence to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature
Cite this paper
Gotkowski, K., Gonzalez, C., Bucher, A., Mukhopadhyay, A. (2021). M3d-CAM. In: Palm, C., Deserno, T.M., Handels, H., Maier, A., Maier-Hein, K., Tolxdorff, T. (eds) Bildverarbeitung für die Medizin 2021. Informatik aktuell. Springer Vieweg, Wiesbaden. https://doi.org/10.1007/978-3-658-33198-6_52
Print ISBN: 978-3-658-33197-9
Online ISBN: 978-3-658-33198-6