Frontiers in Computational Neuroscience

Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis



Abstract

The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks (DNNs) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and do not provide any evidence regarding the process they follow to perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for the complete integration of such methods into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three different networks with standard architectures and outline similarities and differences in the process that these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable disentangled concepts at the filter level. We also show that they take a top-down or hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and also provide a measure of uncertainty with regard to the outputs of the models to give additional qualitative evidence about the predictions of these networks. We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.
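The abstract mentions two concrete ingredients: visualizations of internal feature maps and an uncertainty measure over the model outputs. The sketch below is a rough illustration only, not the authors' implementation; it assumes a generic PyTorch segmentation model containing dropout layers, and the `model`, `layer`, and sample-count arguments are placeholders. It shows (1) capturing an intermediate feature map with a forward hook and (2) a Monte Carlo dropout estimate of per-voxel predictive entropy.

```python
# Hypothetical sketch: feature-map capture and MC-dropout uncertainty for a
# PyTorch segmentation network. Names and architecture are assumptions, not
# the paper's code.
import torch
import torch.nn as nn


def capture_feature_map(model: nn.Module, layer: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Run one forward pass and return the activation of `layer` for input `x`."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["fmap"] = output.detach()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured["fmap"]


def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Keep dropout active at test time and average several stochastic passes.

    Returns the mean class probabilities and the per-voxel predictive entropy,
    a simple qualitative uncertainty map for the segmentation output.
    """
    model.eval()
    for m in model.modules():  # re-enable only the dropout layers
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            logits = model(x)                      # (B, C, ...) segmentation logits
            probs.append(torch.softmax(logits, dim=1))
    mean_p = torch.stack(probs).mean(dim=0)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)  # per-voxel entropy
    return mean_p, entropy
```

In this kind of setup, high-entropy voxels typically concentrate around tumor boundaries, which is the sort of qualitative evidence about model predictions the abstract refers to.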
