Physics in Medicine and Biology

Automatic PET cervical tumor segmentation by combining deep learning and anatomic prior



Abstract

Cervical tumor segmentation on 3D ¹⁸FDG PET images is a challenging task because of the proximity between the cervix and the bladder, both of which can take up the ¹⁸FDG tracer. This proximity makes traditional intensity-based segmentation methods ineffective and reduces overall accuracy. Based on anatomical knowledge, including the 'roundness' of the cervical tumor and the relative positioning of the bladder and cervix, we propose a supervised machine learning method that integrates a convolutional neural network (CNN) with this prior information to segment cervical tumors. First, we constructed a spatial-information-embedded CNN model (S-CNN) that maps the PET image to a corresponding label map, in which bladder, other normal tissue, and cervical tumor pixels are labeled -1, 0, and 1, respectively. Then, we obtained the final segmentation from the network output with a prior-information-constrained (PIC) thresholding method. We evaluated the performance of the PIC-S-CNN method on PET images from 50 cervical cancer patients. The PIC-S-CNN method achieved a mean Dice similarity coefficient (DSC) of 0.84, while region growing, Chan-Vese, graph cut, fully convolutional network (FCN) based FCN-8 stride and FCN-2 stride, and U-net achieved mean DSCs of 0.55, 0.64, 0.67, 0.71, 0.77, and 0.80, respectively. The proposed PIC-S-CNN provides a more accurate way to segment cervical tumors on 3D PET images. Our results suggest that combining deep learning with anatomic prior information may improve segmentation accuracy for cervical tumors.
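The abstract does not spell out the PIC thresholding rule or the DSC computation, but a minimal, hypothetical sketch of the two post-processing ingredients it names — thresholding the S-CNN label map (bladder ≈ -1, normal tissue ≈ 0, tumor ≈ +1) and scoring a segmentation with the Dice similarity coefficient — could look like the following. The threshold values and the toy volume are illustrative assumptions; the paper's roundness and bladder-cervix positioning constraints are not reproduced here.

```python
import numpy as np

def dice_similarity(pred_mask, gt_mask):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def threshold_label_map(label_map, tumor_thresh=0.5, bladder_thresh=-0.5):
    """Hypothetical simple thresholding of an S-CNN-style output label map
    (bladder ≈ -1, normal tissue ≈ 0, tumor ≈ +1). The paper's PIC step
    additionally enforces tumor roundness and bladder/cervix positioning,
    which this sketch omits."""
    tumor = label_map >= tumor_thresh
    bladder = label_map <= bladder_thresh
    return tumor, bladder

if __name__ == "__main__":
    # Toy 3D volumes standing in for a predicted label map and ground truth.
    rng = np.random.default_rng(0)
    gt = np.zeros((16, 64, 64), dtype=bool)
    gt[6:10, 20:30, 20:30] = True  # synthetic "tumor" region
    label_map = gt.astype(float) + 0.1 * rng.standard_normal(gt.shape)
    tumor_mask, _ = threshold_label_map(label_map)
    print(f"DSC = {dice_similarity(tumor_mask, gt):.3f}")
```

On the toy volume the thresholded mask nearly matches the ground truth, so the printed DSC is close to 1.0; the reported 0.84 mean DSC in the study reflects real inter-patient variability that this sketch does not model.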
