IEEE/CVF Conference on Computer Vision and Pattern Recognition

TieNet: Text-Image Embedding Network for Common Thorax Disease Classification and Reporting in Chest X-Rays



Abstract

Chest X-rays are one of the most common radiological examinations in daily clinical routines. Reporting thorax diseases using chest X-rays is often an entry-level task for radiologist trainees. Yet, reading a chest X-ray image remains a challenging job for learning-oriented machine intelligence, due to (1) the shortage of large-scale machine-learnable medical image datasets, and (2) the lack of techniques that can mimic the high-level reasoning of human radiologists, which requires years of knowledge accumulation and professional training. In this paper, we show that clinical free-text radiological reports can be utilized as a priori knowledge for tackling these two key problems. We propose a novel Text-Image Embedding network (TieNet) for extracting distinctive image and text representations. Multi-level attention models are integrated into an end-to-end trainable CNN-RNN architecture for highlighting the meaningful text words and image regions. We first apply TieNet to classify chest X-rays using both image features and text embeddings extracted from associated reports. The proposed auto-annotation framework achieves high accuracy (over 0.9 on average in AUCs) in assigning disease labels on our hand-labeled evaluation dataset. Furthermore, we transform TieNet into a chest X-ray reporting system. It simulates the reporting process and can output a disease classification and a preliminary report together. The classification results are significantly improved (6% increase on average in AUCs) compared to the state-of-the-art baseline on an unseen and hand-labeled dataset (OpenI).
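The joint text-image embedding described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the attention form, dimensions, and all parameters (`q_img`, `q_txt`, `W`) are hypothetical stand-ins for learned weights, and random arrays stand in for the CNN spatial features and RNN word states. It only shows the shape of the idea: attention-pool image regions and report words separately, concatenate the pooled vectors into a joint embedding, and classify diseases multi-label style.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(feats, query):
    # feats: (N, D) region or word features; query: (D,) hypothetical learned context
    scores = softmax(feats @ query)   # (N,) attention weights over regions/words
    return scores @ feats, scores     # attention-weighted sum -> (D,), plus weights

D, R, T, C = 64, 49, 20, 14  # feature dim, image regions, report words, disease labels

# Stand-ins for CNN spatial features (e.g. a 7x7 grid) and RNN word hidden states
img_feats = rng.standard_normal((R, D))
txt_feats = rng.standard_normal((T, D))

# Hypothetical learned parameters (random here, trained end-to-end in practice)
q_img = rng.standard_normal(D)
q_txt = rng.standard_normal(D)
W = rng.standard_normal((C, 2 * D)) * 0.1

img_emb, img_attn = attention_pool(img_feats, q_img)
txt_emb, txt_attn = attention_pool(txt_feats, q_txt)

joint = np.concatenate([img_emb, txt_emb])  # joint text-image embedding, (2D,)
probs = 1 / (1 + np.exp(-(W @ joint)))      # independent sigmoids: multi-label output

print(probs.shape)  # (14,)
```

The attention weights (`img_attn`, `txt_attn`) are what would highlight salient image regions and report words; at report-generation time the text branch would be replaced by an RNN decoder conditioned on the image embedding.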
