Conference on Medical Imaging: Image Processing

Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection



Abstract

Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined to contain multiple instances. MIL is gaining traction for learning from weak labels but has not been widely applied to 3D medical imaging. MIL is well-suited to clinical CT acquisitions since (1) the highly anisotropic voxels hinder application of traditional 3D networks and (2) patch-based networks have limited ability to learn whole-volume labels. In this work, we apply MIL with a deep convolutional neural network to identify whether clinical CT head image volumes possess one or more large hemorrhages (> 20 cm³), resulting in a learned 2D model without the need for 2D slice annotations. Individual image volumes are considered separate bags, and the slices in each volume are instances. Such a framework sets the stage for incorporating information obtained in clinical reports to help train a 2D segmentation approach. Within this context, we evaluate the data requirements to enable generalization of MIL by varying the amount of training data. Our results show that a training size of at least 400 patient image volumes was needed to achieve accurate per-slice hemorrhage detection. Over a five-fold cross-validation, the leading model, which made use of the maximum number of training volumes, had an average true positive rate of 98.10%, an average true negative rate of 99.36%, and an average precision of 0.9698. The models have been made available along with source code to enable continued exploration and adaptation of MIL in CT neuroimaging.
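The bag/instance framing above can be made concrete with a small sketch. In this toy example (not the authors' released code), each CT volume is a bag whose weak label supervises per-slice scores through a pooling operator; max-pooling is assumed here as the standard MIL choice, since the abstract does not name the aggregation used. The slice scores would come from a 2D network in practice; here they are plain numbers.

```python
import numpy as np

def aggregate_bag(slice_scores):
    """MIL aggregation under the standard MIL assumption: a bag
    (CT volume) is positive if at least one instance (2D slice)
    is positive, so the bag score is the max over slice scores."""
    return float(np.max(slice_scores))

def predict_bag(slice_scores, threshold=0.5):
    """Volume-level hemorrhage prediction from per-slice scores."""
    return aggregate_bag(slice_scores) >= threshold

# Toy volume of 5 slices: one slice scores high, so the whole
# volume is flagged even though only volume-level labels exist.
volume_scores = np.array([0.05, 0.10, 0.92, 0.08, 0.03])
print(predict_bag(volume_scores))           # True
print(predict_bag(np.full(5, 0.1)))         # False
```

Because the gradient of the max flows only through the highest-scoring slice, training against the bag label indirectly teaches the 2D model which individual slices contain hemorrhage, which is what yields per-slice detection without slice annotations.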
