Imitating Pathologist Based Assessment With Interpretable and Context Based Neural Network Modeling of Histology Images



Abstract

Convolutional neural networks (CNNs) have gained steady popularity as a tool to perform automatic classification of whole slide histology images. While CNNs have proven to be powerful classifiers in this context, they fail to explain this classification, as the network-engineered features used for modeling and classification are only interpretable by the CNNs themselves. This work aims at enhancing a traditional neural network model to perform histology image modeling, patient classification, and interpretation of the distinctive features identified by the network within the histology whole slide images (WSIs). We synthesize a workflow which (a) intelligently samples the training data by automatically selecting only image areas that display visible disease-relevant tissue state and (b) isolates regions most pertinent to the trained CNN prediction and translates them to observable and qualitative features such as color, intensity, cell and tissue morphology, and texture. We use the Cancer Genome Atlas's Breast Invasive Carcinoma (TCGA-BRCA) histology dataset to build a model predicting patient attributes (disease stage and node status) and the tumor proliferation challenge (TUPAC 2016) breast cancer histology image repository to help identify disease-relevant tissue state (mitotic activity). We find that our enhanced CNN-based workflow both increased patient attribute-predictive accuracy (~2% increase for disease stage and ~10% increase for node status) and experimentally proved that a data-driven CNN histology model predicting breast invasive carcinoma stages is highly sensitive to features such as color, cell size and shape, granularity, and uniformity. This work summarizes the need for understanding the widely trusted models built using deep learning and adds a layer of biological context to a technique that functioned as a classification-only approach until now.
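The two-stage workflow described above (relevance-driven patch sampling, followed by translating the trained CNN's most influential regions into observable features) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the mitotic-activity scores, the relevance threshold, the patch and mask sizes, and the occlusion-based sensitivity map are all illustrative assumptions standing in for the methods used in the paper.

```python
# A minimal sketch (not the authors' code) of the two-stage idea in the abstract:
# (a) keep only WSI patches whose disease-relevant tissue state (here, a
#     hypothetical mitotic-activity score) exceeds a threshold, and
# (b) rank regions of a patch by an occlusion-style sensitivity of the trained
#     classifier, so the most influential regions can be inspected for qualitative
#     features such as color, cell size and shape, granularity, and uniformity.
# All function and parameter names are illustrative assumptions.
import torch
import torch.nn as nn


def sample_relevant_patches(patches, mitosis_scores, threshold=0.5):
    """Keep only patches whose mitotic-activity score marks them as disease relevant."""
    return [p for p, s in zip(patches, mitosis_scores) if s >= threshold]


def occlusion_sensitivity(model, patch, target_class, mask_size=32, stride=32):
    """Score each region of a patch by how much masking it changes the prediction."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(patch.unsqueeze(0)), dim=1)[0, target_class]
    _, h, w = patch.shape
    heatmap = torch.zeros(h // stride, w // stride)
    for i in range(0, h - mask_size + 1, stride):
        for j in range(0, w - mask_size + 1, stride):
            occluded = patch.clone()
            occluded[:, i:i + mask_size, j:j + mask_size] = 0.0  # mask out region
            with torch.no_grad():
                p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heatmap[i // stride, j // stride] = base - p  # large drop = influential region
    return heatmap


if __name__ == "__main__":
    # Toy stand-ins: random "patches" and a tiny CNN instead of real WSI tiles.
    cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
    patches = [torch.rand(3, 128, 128) for _ in range(4)]
    scores = [0.2, 0.7, 0.9, 0.4]                      # hypothetical mitotic-activity scores
    relevant = sample_relevant_patches(patches, scores)
    heat = occlusion_sensitivity(cnn, relevant[0], target_class=1)
    print(heat.shape)  # regions ranked by influence on the predicted patient attribute
```

In the setting described by the abstract, such a sensitivity map would not be consumed directly; the high-impact regions it highlights would be translated into observable, qualitative descriptors (color, intensity, cell and tissue morphology, texture) to provide the biological context the authors emphasize.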
