IEEE Transactions on Medical Imaging

Co-Learning Feature Fusion Maps From PET-CT Images of Lung Cancer


Abstract

The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer-aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET to detect abnormal regions with the anatomical localization of CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve the fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, and tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion (fused inputs (FS), multi-branch (MB), and multi-channel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
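
To make the fusion mechanism concrete, below is a minimal PyTorch-style sketch of the co-learning idea described in the abstract: two modality-specific encoders, a head that predicts a spatially varying fusion map for each modality, and element-wise multiplication of those maps with the modality-specific feature maps. The layer sizes, the class name CoLearnFusion, and the softmax weighting are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class CoLearnFusion(nn.Module):
    """Sketch of co-learned fusion: modality-specific encoders plus a
    spatially varying fusion map per modality (illustrative only)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Modality-specific encoders for single-channel PET and CT slices.
        self.pet_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.ct_enc = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Co-learning head: concatenated modality features -> one fusion map per modality.
        self.fusion_head = nn.Conv2d(2 * channels, 2, 3, padding=1)

    def forward(self, pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        f_pet = self.pet_enc(pet)   # (N, C, H, W)
        f_ct = self.ct_enc(ct)      # (N, C, H, W)
        # Per-pixel relative importance of each modality (softmax over the modality axis).
        weights = torch.softmax(self.fusion_head(torch.cat([f_pet, f_ct], dim=1)), dim=1)
        w_pet, w_ct = weights[:, 0:1], weights[:, 1:2]   # (N, 1, H, W) each
        # Multiply the fusion maps with the modality-specific feature maps and combine.
        return w_pet * f_pet + w_ct * f_ct

# Example usage on a single 128 x 128 PET-CT slice pair.
if __name__ == "__main__":
    pet = torch.randn(1, 1, 128, 128)
    ct = torch.randn(1, 1, 128, 128)
    print(CoLearnFusion()(pet, ct).shape)   # torch.Size([1, 32, 128, 128])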
