International Conference on Pattern Recognition

Explainable Feature Embedding using Convolutional Neural Networks for Pathological Image Analysis



Abstract

The development of computer-assisted diagnosis (CAD) algorithms for pathological image analysis is an important research topic. Recently, convolutional neural networks (CNNs) have been used in several studies to develop CAD algorithms. To ensure reliability, such systems are required to be not only accurate but also able to explain their decisions. However, a limitation of CNNs is that the basis of their decisions is hardly interpretable by humans. Thus, in this paper, we present an explainable diagnosis method comprising two CNNs that play different roles. This method allows us to interpret the basis of the CNN's decisions from two perspectives: statistics and visualization. For the statistical explanation, the method constructs a dictionary of representative pathological features and performs diagnoses based on the occurrence and importance of learned features referenced from this dictionary. To construct the dictionary, we introduce a vector quantization scheme for CNNs. For the visual interpretation, the method provides images of the learned features embedded in a feature space, indexed by the dictionary, by generating them with a conditional autoregressive model. The experimental results showed that the proposed network learned pathological features that contributed to the diagnosis and yielded an area under the receiver operating characteristic curve (AUC) of approximately 0.93 for detecting atypical tissues in pathological images of the uterine cervix. Moreover, the proposed method demonstrated that it can provide visually interpretable images that show the rationales behind its decisions. Thus, the proposed method can serve as a valuable tool for pathological image analysis in terms of both accuracy and explainability.
