Journal: Wireless Communications & Mobile Computing

A Transfer Deep Generative Adversarial Network Model to Synthetic Brain CT Generation from MR Images


Abstract

Background. Medical image generation converts existing medical images into one or more required target images, reducing both the time needed for diagnosis and the radiation exposure a patient receives from multiple scans. Research on medical image generation therefore has important clinical significance, and many methods already exist in this field. For example, in image generation based on fuzzy C-means (FCM) clustering, the soft-membership idea at the core of FCM leaves the tissue assignment of some regions uncertain; details of the generated image become unclear and the resulting quality is low. With the development of the generative adversarial network (GAN), many improved methods based on deep GAN models have appeared. Pix2Pix is a GAN model built on U-Net; its core idea is to fit a deep neural network on paired images of the two modalities and thereby generate high-quality images. Its drawback is a very strict data requirement: the two types of medical images must be paired one by one. DualGAN is a network model based on transfer learning; it cuts a 3D volume into multiple 2D slices, translates each slice independently, and merges the generated results. Its drawback is that every generated volume contains bar-shaped "shadow" artifacts in the three-dimensional image. Method/Material. To solve these problems while ensuring generation quality, this paper proposes a Dual3D&PatchGAN model based on transfer learning. Because Dual3D&PatchGAN builds on transfer learning, no one-to-one paired data set is needed; only two sets of medical images of the two types are required, which has important practical significance for applications.
The model eliminates the bar-shaped "shadows" in DualGAN's generated volumes and also supports bidirectional conversion between the two image types. Results. Analysis of multiple evaluation indicators on the experimental results shows that Dual3D&PatchGAN is better suited to medical image generation than the compared models and achieves better generation quality.
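The slice-and-merge pipeline the abstract attributes to DualGAN can be sketched as follows. This is a minimal illustration, not the paper's implementation: `translate_slice` is a hypothetical stand-in for a trained per-slice generator, and its slice-dependent offset only mimics why independently translated slices produce bar-shaped artifacts when restacked.

```python
import numpy as np

def translate_slice(slice_2d):
    # Hypothetical stand-in for a trained per-slice MR->CT generator.
    # The random offset mimics the fact that each slice is translated
    # without inter-slice context, so slices drift apart in intensity.
    return slice_2d + np.random.uniform(-0.05, 0.05)

def volume_to_slices(volume):
    """Cut a 3D volume of shape (D, H, W) into D separate 2D slices."""
    return [volume[i] for i in range(volume.shape[0])]

def merge_slices(slices):
    """Stack independently generated 2D slices back into a 3D volume.
    Per-slice intensity offsets are what show up as bar-shaped
    artifacts along the stacking axis of the merged result."""
    return np.stack(slices, axis=0)

mr_volume = np.random.rand(8, 64, 64)
ct_slices = [translate_slice(s) for s in volume_to_slices(mr_volume)]
ct_volume = merge_slices(ct_slices)
print(ct_volume.shape)  # (8, 64, 64)
```

Slicing and restacking are lossless on their own; the artifacts come entirely from translating each slice in isolation, which is the gap a 3D-aware model aims to close.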
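The abstract does not name its evaluation indicators. MAE and PSNR are common choices for judging synthetic-CT quality against a reference scan and are shown here purely as an illustrative assumption, not as the paper's metrics:

```python
import numpy as np

def mae(generated, reference):
    """Mean absolute error between a generated and a reference image."""
    return float(np.mean(np.abs(generated - reference)))

def psnr(generated, reference, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    mse = float(np.mean((generated - reference) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

reference = np.zeros((16, 16))
generated = np.full((16, 16), 0.1)  # uniform 0.1 error everywhere
print(round(mae(generated, reference), 6))   # 0.1
print(round(psnr(generated, reference), 6))  # 20.0
```

For intensity-normalized images `data_range` is 1.0; for CT in Hounsfield units it would be the full HU span of the data.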
