Explainable Deep Learning Methods for Medical Imaging Applications

IEEE International Conference on Computing Communication and Automation


Abstract

This paper discusses explainable deep learning approaches for medical imaging applications, evaluating three feature visualization techniques for deep neural network models developed on medical-domain images. The idea behind these techniques is to open up the internals of deep neural networks, specifically convolutional neural networks (CNNs), by exposing what the model learns layer by layer. They can help clinicians gain confidence in complex deep learning models, rather than treating them as black boxes, because they produce a meaningful view of each layer and its feature maps; data scientists can likewise use them to improve model performance by inspecting the behavior of each layer. We have implemented the three approaches, namely Activation Map, Deconvolution, and Grad-CAM localization, in Keras with the TensorFlow backend, and have validated the results against CNN models developed on natural images. With the activation map and deconvolution approaches we were also able to generate these visualizations for models with even filter sizes. Such methods can play a crucial role in obtaining approval from regulatory authorities.
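The abstract does not reproduce the authors' implementation, but two of the three named techniques can be sketched in a few lines of Keras with the TensorFlow backend. The sketch below is illustrative only, not the paper's code: model, image, and last_conv_layer_name are hypothetical placeholders for a built functional CNN, a preprocessed input of shape (H, W, C), and the name of its final convolutional layer.

    import numpy as np
    import tensorflow as tf

    # Activation maps: read out every convolutional layer's feature maps
    # for a single image by exposing those layers as extra model outputs.
    def activation_maps(model, image):
        conv_outputs = [layer.output for layer in model.layers
                        if isinstance(layer, tf.keras.layers.Conv2D)]
        activation_model = tf.keras.Model(model.inputs, conv_outputs)
        return activation_model(image[np.newaxis, ...])

    # Grad-CAM: weight the last conv layer's feature maps by the
    # globally averaged gradient of the class score, then apply ReLU.
    def grad_cam(model, image, last_conv_layer_name, class_index=None):
        grad_model = tf.keras.Model(
            model.inputs,
            [model.get_layer(last_conv_layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[np.newaxis, ...])
            if class_index is None:
                class_index = tf.argmax(preds[0])  # default: predicted class
            class_score = preds[:, class_index]
        grads = tape.gradient(class_score, conv_out)     # (1, h, w, c)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))  # one weight per channel
        cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # scaled to [0, 1]

The returned heatmap can be resized to the input resolution and overlaid on the medical image. The deconvolution approach, which inverts each layer with unpooling and transposed convolutions, requires more machinery and is omitted from this sketch.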
