Image Processing, IET

Multi-head mutual-attention CycleGAN for unpaired image-to-image translation

Abstract

Image-to-image translation, i.e. mapping a source image domain to a target image domain, has made significant progress in recent years. The most popular method for unpaired image-to-image translation is CycleGAN. However, it often fails to learn the key features of the target domain accurately and rapidly, so the CycleGAN model converges slowly and its translation quality leaves room for improvement. In this study, a multi-head mutual-attention CycleGAN (MMA-CycleGAN) model is proposed for unpaired image-to-image translation. MMA-CycleGAN retains the cycle-consistency loss and adversarial loss of CycleGAN but introduces a mutual-attention (MA) mechanism, which allows attention-driven, long-range dependency modelling between the two image domains. Moreover, to deal efficiently with large image sizes, the MA is further extended to a multi-head mutual-attention (MMA) mechanism. In addition, domain labels are adopted to simplify the MMA-CycleGAN architecture, so only one generator is required to perform bidirectional translation. Experiments on multiple datasets demonstrate that MMA-CycleGAN learns rapidly and obtains photo-realistic images in a shorter time than CycleGAN.
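The paper itself gives the exact formulation of the MMA mechanism; purely as an illustration of the idea described above, the sketch below shows cross-domain multi-head attention in which queries come from one domain's feature map and keys/values from the other's, so each source position can attend to every target position. The module name, 1x1-conv projections, head count, and residual connection are all assumptions for this sketch, not the paper's layers.

```python
import torch
import torch.nn as nn

class MultiHeadMutualAttention(nn.Module):
    """Minimal sketch of multi-head mutual attention between two feature maps.

    Queries come from the source-domain features, keys/values from the
    target-domain features, giving attention-driven, long-range dependency
    modelling between the two domains. Splitting channels into heads keeps
    each per-head attention map cheaper on large feature maps. Assumes both
    inputs share the shape (b, channels, h, w).
    """

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        assert channels % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = channels // num_heads
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x_src: torch.Tensor, x_tgt: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x_src.shape

        # Project and split channels into heads: (b, heads, h*w, head_dim).
        def split(t: torch.Tensor) -> torch.Tensor:
            return t.view(b, self.num_heads, self.head_dim, h * w).transpose(2, 3)

        q = split(self.to_q(x_src))
        k = split(self.to_k(x_tgt))
        v = split(self.to_v(x_tgt))

        # Scaled dot-product attention of source positions over target positions.
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = (attn @ v).transpose(2, 3).reshape(b, c, h, w)

        # Residual connection keeps the original source features.
        return x_src + self.proj(out)
```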
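How a single label-conditioned generator can serve both translation directions under the usual CycleGAN objectives can be sketched as follows. The function signatures `G(x, label)` and `D_tgt(x)` are hypothetical stand-ins, not the paper's code; the losses follow the standard least-squares adversarial and L1 cycle-consistency formulation that CycleGAN uses.

```python
import torch
import torch.nn.functional as F

def cycle_gan_losses(G, D_tgt, x_src, label_src, label_tgt, lambda_cyc=10.0):
    """Sketch of CycleGAN-style generator objectives with one shared generator.

    G(x, label) translates an image toward the domain named by `label`, so a
    single network covers both directions; D_tgt scores realism in the target
    domain.
    """
    # Forward translation, then cycle back to the source domain.
    x_fake = G(x_src, label_tgt)
    x_cycled = G(x_fake, label_src)

    # Adversarial loss: the generator tries to make D_tgt output 1 ("real").
    d_out = D_tgt(x_fake)
    adv_loss = F.mse_loss(d_out, torch.ones_like(d_out))

    # Cycle-consistency loss: translating there and back should recover x_src.
    cyc_loss = F.l1_loss(x_cycled, x_src)

    return adv_loss + lambda_cyc * cyc_loss
```

Conditioning one generator on a domain label halves the generator parameters relative to the two-generator CycleGAN setup, which is the simplification the abstract attributes to the domain labels.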
