Artificial Intelligence in Medicine

Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks



Abstract

Manual annotation is considered to be the "gold standard" in medical imaging analysis. However, medical imaging datasets that include expert manual segmentation are scarce, as this step is time-consuming and therefore expensive. Moreover, single-rater manual annotation is most often used in data-driven approaches, making the network biased towards that single expert. In this work, we propose a CNN for brain extraction in magnetic resonance (MR) imaging that is fully trained with what we refer to as "silver standard" masks, thereby eliminating the cost associated with manual annotation. Silver standard masks are generated by forming the consensus from a set of eight public, non-deep-learning-based brain extraction methods using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Our method consists of (1) developing a dataset with silver standard masks as input, and (2) implementing a tri-planar method using parallel 2D U-Net-based convolutional neural networks (CNNs), referred to as CONSNet. This term refers to our integrated approach, i.e., training with silver standard masks and using a 2D U-Net-based architecture. We conducted our analysis using three public datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results showed that we outperformed (i.e., achieved larger Dice coefficients than) the current state-of-the-art skull-stripping methods without using gold standard annotation in the CNN training stage. CONSNet is the first deep learning approach that is fully trained using silver standard data and is, thus, more generalizable.
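STAPLE itself is an expectation-maximization algorithm that jointly estimates the consensus segmentation and each rater's reliability; as a much simpler illustration of the underlying idea of fusing several automatic brain masks into one silver standard mask, a majority-vote consensus can be sketched as follows (the function name and toy masks are hypothetical, not from the paper):

```python
import numpy as np

def majority_vote_consensus(masks, threshold=0.5):
    """Fuse binary brain masks from several skull-stripping methods.

    A simplified stand-in for STAPLE: a voxel is kept in the
    consensus mask when at least `threshold` of the methods
    include it. STAPLE additionally weights each method by its
    estimated sensitivity/specificity.
    """
    masks = np.asarray(masks, dtype=np.float32)  # shape: (n_methods, ...)
    return (masks.mean(axis=0) >= threshold).astype(np.uint8)

# Toy example: 1D "masks" from three hypothetical extraction methods.
m1 = np.array([1, 1, 0, 0])
m2 = np.array([1, 0, 1, 0])
m3 = np.array([1, 1, 1, 0])
consensus = majority_vote_consensus([m1, m2, m3])  # -> [1, 1, 1, 0]
```

In practice the inputs would be full 3D volumes (one per extraction method), and the fused result serves as the training label in place of a manual annotation.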
Using these masks, we eliminated the cost of manual annotation, decreased inter-/intra-rater variability, and avoided overfitting of the CNN segmentation towards one specific manual annotation guideline, which can occur when gold standard masks are used. Moreover, once trained, our method takes a few seconds to process a typical brain image volume using a modern high-end GPU. In contrast, many of the other competitive methods have processing times in the order of minutes.
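Of the five metrics listed above, the Dice coefficient is the one the comparison hinges on; as a minimal sketch (not the authors' evaluation code), it can be computed over two binary masks like this:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).

    Ranges from 0 (no overlap) to 1 (identical masks).
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 1D example: intersection = 1 voxel, |A| = |B| = 2 -> Dice = 0.5
d = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])
```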
