Physics in Medicine and Biology

Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network



Abstract

Automatic tumor segmentation from medical images is an important step for computer-aided cancer diagnosis and treatment. Recently, deep learning has been successfully applied to this task, leading to state-of-the-art performance. However, most existing deep learning segmentation methods only work for a single imaging modality. PET/CT scanners are nowadays widely used in the clinic and are able to provide both metabolic and anatomical information by integrating PET and CT into the same device. In this study, we proposed a novel multi-modality segmentation method based on a 3D fully convolutional neural network (FCN), which is capable of taking both PET and CT information into account simultaneously for tumor segmentation. The network started with a multi-task training module, in which two parallel sub-segmentation architectures constructed using deep convolutional neural networks (CNNs) were designed to automatically extract feature maps from PET and CT, respectively. A feature fusion module was subsequently designed based on cascaded convolutional blocks, which re-extracted features from the PET/CT feature maps using a weighted cross-entropy minimization strategy. The tumor mask was obtained as the output at the end of the network using a softmax function. The effectiveness of the proposed method was validated on a clinical PET/CT dataset of 84 patients with lung cancer. The results demonstrated that the proposed network was effective, fast and robust, and achieved a significant performance gain over CNN-based and traditional methods using PET or CT only, two V-net-based co-segmentation methods, two variational co-segmentation methods based on fuzzy set theory, and a deep learning co-segmentation method using W-net.
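The abstract describes a two-branch 3D FCN that extracts feature maps from PET and CT separately, fuses them with cascaded convolutional blocks, and trains the per-voxel softmax output with a weighted cross-entropy loss. The following is a minimal PyTorch sketch of that general idea only; the channel widths, block depths, fusion design, and class weights are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a two-branch PET/CT co-segmentation FCN (assumed layer sizes).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3D convolutions with batch norm and ReLU (assumed building block)."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class CoSegFCN(nn.Module):
    """Two parallel sub-segmentation branches (PET, CT) followed by a fusion module."""

    def __init__(self, num_classes=2):
        super().__init__()
        # Modality-specific feature extractors (the multi-task module in the abstract).
        self.pet_branch = conv_block(1, 16)
        self.ct_branch = conv_block(1, 16)
        # Feature fusion module: cascaded convolutional blocks over concatenated maps.
        self.fusion = nn.Sequential(conv_block(32, 32), conv_block(32, 16))
        self.head = nn.Conv3d(16, num_classes, kernel_size=1)

    def forward(self, pet, ct):
        f_pet = self.pet_branch(pet)   # feature maps from PET
        f_ct = self.ct_branch(ct)      # feature maps from CT
        fused = self.fusion(torch.cat([f_pet, f_ct], dim=1))
        return self.head(fused)        # per-voxel class logits (softmax inside the loss)


# Weighted cross-entropy over the softmax output, with a larger weight on the
# (typically under-represented) tumor class; the weights here are placeholders.
model = CoSegFCN()
pet = torch.randn(1, 1, 32, 64, 64)            # (batch, channel, depth, height, width)
ct = torch.randn(1, 1, 32, 64, 64)
target = torch.randint(0, 2, (1, 32, 64, 64))  # voxel-wise tumor mask
loss = nn.CrossEntropyLoss(weight=torch.tensor([0.2, 0.8]))(model(pet, ct), target)
loss.backward()
```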
