Journal: Quality Control, Transactions

Fusion of Brain PET and MRI Images Using Tissue-Aware Conditional Generative Adversarial Network With Joint Loss



Abstract

Positron emission tomography (PET) provides rich pseudo-color information that reflects the functional characteristics of tissue, but it lacks structural information and has low spatial resolution. Magnetic resonance imaging (MRI) offers high spatial resolution and strong structural information of soft tissue, but it lacks color information that shows the functional characteristics of tissue. To integrate the color information of PET with the anatomical structures of MRI and thereby help doctors better diagnose diseases, a method for fusing brain PET and MRI images using a tissue-aware conditional generative adversarial network (TA-cGAN) is proposed. Specifically, the process of fusing brain PET and MRI images is treated as an adversarial game between retaining the color information of PET and preserving the anatomical information of MRI. More specifically, the fusion of PET and MRI images can be regarded as a min-max optimization problem with respect to the generator and the discriminator, where the generator attempts to minimize the objective function by generating a fused image that mainly contains the color information of PET, whereas the discriminator tries to maximize the objective function by urging the fused image to include more structural information from MRI. Both the generator and the discriminator in TA-cGAN are conditioned on the tissue label map generated from the MRI image, and they are trained alternately with a joint loss. Extensive experiments demonstrate that the proposed method enhances the anatomical details of the fused image while effectively preserving the color information from PET. In addition, compared with other state-of-the-art methods, the proposed method achieves better fusion effects in both subjective visual perception and objective quantitative assessment.
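The min-max formulation with a joint loss described above can be sketched in a minimal, hypothetical form. The linear "discriminator", the function names, and the weighting factor `lam` below are illustrative assumptions, not the paper's actual TA-cGAN architecture; the sketch only shows the shape of the objective: an adversarial term conditioned on the tissue label map, plus a content term pulling the fused image toward the PET input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(fused, tissue_label, w):
    """Toy linear discriminator: scores whether the fused image,
    conditioned on the MRI-derived tissue label map, looks real."""
    feat = np.concatenate([fused.ravel(), tissue_label.ravel()])
    return sigmoid(feat @ w)

def generator_joint_loss(fused, pet, tissue_label, w, lam=10.0):
    """Hypothetical joint loss: adversarial term (fool the conditional
    discriminator) plus an L1 content term toward the PET colors."""
    adv = -np.log(discriminator(fused, tissue_label, w) + 1e-8)
    content = np.mean(np.abs(fused - pet))
    return adv + lam * content

# Toy usage: random 4x4 "images" and a random discriminator weight vector.
rng = np.random.default_rng(0)
pet = rng.random((4, 4))
tissue_label = rng.integers(0, 3, (4, 4)).astype(float)
fused = rng.random((4, 4))
w = 0.1 * rng.standard_normal(32)  # 16 fused pixels + 16 label entries

score = discriminator(fused, tissue_label, w)
loss = generator_joint_loss(fused, pet, tissue_label, w)
```

In the actual alternating training, the discriminator step would maximize its classification objective while the generator step minimizes this joint loss; here both networks are reduced to stateless toy functions for clarity.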
