International Conference on Artificial Neural Networks

Self-attention StarGAN for Multi-domain Image-to-image Translation


Abstract

In this paper, we propose Self-attention StarGAN, which introduces the self-attention mechanism into StarGAN for multi-domain image-to-image translation, aiming to generate images with high-quality details and consistent backgrounds. The self-attention mechanism models long-range dependencies among the feature maps at all positions, rather than being limited to local image regions. Simultaneously, we take advantage of batch normalization to reduce reconstruction error and generate fine-grained texture details, and we adopt spectral normalization in the network to stabilize the training of Self-attention StarGAN. Both quantitative and qualitative experiments were conducted on a public dataset. The results demonstrate that the proposed model achieves lower reconstruction error and generates higher-quality images than StarGAN. We use Amazon Mechanical Turk (AMT) for perceptual evaluation, and 68.1% of 1,000 AMT workers agree that the backgrounds of the images generated by Self-attention StarGAN are more consistent with the original images.
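The long-range dependency modeling described in the abstract can be illustrated with a minimal NumPy sketch of SAGAN-style self-attention over a flattened feature map. This is not the paper's implementation; the function name, matrix shapes, and the learned blending scalar `gamma` (initialized to zero in SAGAN) are illustrative assumptions.

```python
import numpy as np

def self_attention(x, wq, wk, wv, gamma=0.0):
    """SAGAN-style self-attention over a flattened feature map (illustrative sketch).

    x:  (N, C) feature map flattened to N = H*W spatial positions, C channels.
    wq, wk: (C, C_k) query/key projections; wv: (C, C) value projection.
    gamma: learned scalar blending the attention output with the input.
    """
    q = x @ wq                                   # queries  (N, C_k)
    k = x @ wk                                   # keys     (N, C_k)
    v = x @ wv                                   # values   (N, C)
    scores = q @ k.T                             # affinities between ALL position pairs
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability for softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # each row sums to 1
    out = attn @ v                               # every position aggregates every other one
    return gamma * out + x                       # residual; gamma starts at 0 in SAGAN
```

Because every position attends to every other position, the receptive field spans the whole image in a single layer, which is what lets the generator keep backgrounds globally consistent.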
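The spectral normalization mentioned in the abstract constrains each layer's largest singular value to roughly 1, which stabilizes GAN training. A minimal sketch using power iteration follows; it is a one-shot illustration rather than the per-step running estimate used in practice, and all names are hypothetical.

```python
import numpy as np

def spectral_normalize(w, n_iters=50):
    """Divide a weight matrix by an estimate of its largest singular value.

    The estimate comes from power iteration on w.T @ w, so the returned
    matrix has spectral norm approximately 1 (a ~1-Lipschitz linear map).
    """
    u = np.ones(w.shape[0]) / np.sqrt(w.shape[0])   # left singular vector guess
    for _ in range(n_iters):
        v = w.T @ u
        v /= np.linalg.norm(v) + 1e-12              # right singular vector estimate
        u = w @ v
        u /= np.linalg.norm(u) + 1e-12              # left singular vector estimate
    sigma = u @ w @ v                               # largest singular value estimate
    return w / sigma
```

In training code this normalization is applied to the discriminator (and sometimes generator) weights at every forward pass, with `u` carried over between steps so a single iteration suffices.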
