Journal of digital imaging: the official journal of the Society for Computer Applications in Radiology
Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks


Abstract

Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to achieve high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. However, a key barrier to the required training of CNNs is obtaining large-scale, precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would be very similar to those rendered by hand annotation (HA). Additionally, our goal was to show that a CNN trained on ET masks would be equivalent to one trained on HA masks, the latter being the current standard approach.

Step 1: Screen captures of 19 publicly available radiologic images of assorted structures across various modalities were analyzed, and ET and HA masks for all regions of interest (ROIs) were generated from these image datasets. Step 2: Using a similar approach, ET and HA masks were generated for 356 publicly available T1-weighted postcontrast meningioma images. Three hundred six of these image + mask pairs were used to train a CNN with a U-Net-based architecture; the remaining 50 images served as an independent test set.

Step 1: For the nonneurological images, ET and HA masks had an average Dice similarity coefficient (DSC) of 0.86 with each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 with each other. After separate training with each approach, the ET-trained model performed virtually identically to the HA-trained model on the 50-image test set: the former had an area under the curve (AUC) of 0.88, while the latter had an AUC of 0.87. Compared with the original HA maps, the ET and HA predictions had trimmed mean DSCs of 0.73 and 0.74, respectively. These trimmed DSCs for ET and HA were found to be statistically equivalent (p = 0.015).

We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from the typical radiology clinical workflow.
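The Dice similarity coefficient used throughout the abstract to compare ET and HA masks can be computed directly from two binary arrays. A minimal NumPy sketch (the mask values and variable names are illustrative, not from the study's data):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 4x4 example: a hypothetical "eye-tracking" mask vs. a slightly
# wider hypothetical "hand-annotated" mask
et = np.zeros((4, 4), dtype=int); et[1:3, 1:3] = 1  # 4 foreground pixels
ha = np.zeros((4, 4), dtype=int); ha[1:3, 1:4] = 1  # 6 foreground pixels
print(dice_coefficient(et, ha))  # 2*4 / (4 + 6) = 0.8
```

A DSC of 1.0 means pixel-perfect overlap, 0.0 means no overlap; values around 0.85, as reported between ET and HA masks, indicate substantial agreement.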
