International Conference on Artificial Neural Networks

Self-attention StarGAN for Multi-domain Image-to-image Translation



Abstract

In this paper, we propose Self-attention StarGAN, which introduces the self-attention mechanism into StarGAN to handle multi-domain image-to-image translation, aiming to generate images with high-quality details and consistent backgrounds. The self-attention mechanism models long-range dependencies among feature maps at all positions, rather than being limited to local image regions. We also take advantage of batch normalization to reduce reconstruction error and generate fine-grained texture details, and adopt spectral normalization in the network to stabilize the training of Self-attention StarGAN. Both quantitative and qualitative experiments were conducted on a public dataset. The experimental results demonstrate that the proposed model achieves lower reconstruction error and generates higher-quality images than StarGAN. We use Amazon Mechanical Turk (AMT) for perceptual evaluation, and 68.1% of 1,000 AMT workers agree that the backgrounds of the images generated by Self-attention StarGAN are more consistent with the original images.
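To illustrate how self-attention lets every output position attend to every input position of a feature map (rather than a local neighborhood), here is a minimal NumPy sketch in the style of SAGAN-type attention. The projection matrices and the learnable scale `gamma` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv, gamma=0.0):
    """Sketch of self-attention over a (C, H, W) feature map.

    Wq, Wk project channels to a query/key space; Wv projects values.
    The (N, N) attention matrix relates every spatial position to
    every other one, capturing long-range dependencies.
    """
    C, H, W = x.shape
    N = H * W
    flat = x.reshape(C, N)            # (C, N): one column per position
    q = Wq @ flat                     # (C', N)
    k = Wk @ flat                     # (C', N)
    v = Wv @ flat                     # (C, N)
    attn = softmax(q.T @ k, axis=-1)  # (N, N): position i vs. all positions j
    out = v @ attn.T                  # (C, N): attention-weighted values
    # gamma typically starts at 0, so the block is initially an identity map.
    return (gamma * out + flat).reshape(C, H, W)
```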
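The spectral normalization mentioned above constrains each weight matrix's largest singular value to 1, which bounds the layer's Lipschitz constant and stabilizes GAN training. A minimal power-iteration sketch of the technique (not the paper's exact implementation) looks like this:

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Divide W by an estimate of its largest singular value.

    The estimate comes from power iteration on W W^T; in practice
    the singular-vector estimate is cached across training steps,
    but a fresh run suffices for illustration.
    """
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v  # approximate largest singular value
    return W / sigma
```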
