Statistics and Computing

Trans-cGAN: transformer-Unet-based generative adversarial networks for cross-modality magnetic resonance image synthesis

Abstract

Magnetic resonance imaging (MRI) is a widely used medical imaging technology that can provide different contrasts between tissues in the human body. To exploit the complementary information from multiple imaging modalities and to shorten MR scanning time, cross-modality magnetic resonance image synthesis has recently attracted extensive interest in the literature. Most existing methods improve synthesis quality by hand-designing losses on top of a pixel-wise intensity error; although this can improve the quality of the synthesized images, it remains difficult to balance the hyperparameters of the different loss terms. In this paper, we propose Trans-cGAN, a generative adversarial network based on a transformer and U-Net, for cross-modality magnetic resonance image synthesis. Specifically, a transformer block is added to the network to enhance the model's understanding of the overall semantics of the image and to implicitly integrate edge information. Moreover, spectral normalization is applied to the generator and discriminator to avoid mode collapse and ensure training stability. Experimental results demonstrate that the proposed Trans-cGAN is superior to most cross-modality MR image synthesis methods in both qualitative and quantitative evaluations. Furthermore, Trans-cGAN also shows excellent generality on the generic image synthesis tasks of the maps and cityscapes benchmark datasets.
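The spectral normalization mentioned in the abstract rescales each weight matrix to unit spectral norm (largest singular value), which bounds the discriminator's Lipschitz constant and is a standard trick for stabilizing GAN training. A minimal NumPy sketch of the underlying power-iteration estimate follows; it is an illustration of the general technique, not the paper's implementation (frameworks such as PyTorch provide it ready-made via `torch.nn.utils.spectral_norm`).

```python
import numpy as np

def spectral_normalize(w, n_iter=30):
    """Divide weight matrix w by an estimate of its largest
    singular value, obtained via power iteration."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v)
        u = w @ v
        u /= np.linalg.norm(u)
    sigma = u @ w @ v  # estimated spectral norm of w
    return w / sigma

# After normalization the largest singular value is ~1,
# so the layer is (approximately) 1-Lipschitz.
w = np.array([[3.0, 0.0],
              [0.0, 1.0]])
w_sn = spectral_normalize(w)
```

In practice the singular-vector estimates `u` and `v` are cached between training steps, so a single power iteration per forward pass suffices.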
