Neurocomputing

Generative adversarial learning for detail-preserving face sketch synthesis


Abstract

Face sketch synthesis aims to generate a face sketch image from a corresponding photo image and has wide applications in law enforcement and digital entertainment. Despite the remarkable achievements made in face sketch synthesis, most existing works focus mainly on facial content transfer, at the expense of facial detail information. In this paper, we present a new generative adversarial learning framework that focuses on detail preservation for realistic face sketch synthesis. Specifically, a high-resolution network is modified as the generator to transform a face image from the photograph domain to the sketch domain. In addition to the common adversarial loss, we design a detail loss that forces the synthesized face sketch images to have details close to those of the corresponding photo images. Furthermore, a style loss is adopted to constrain the synthesized face sketch images to exhibit the vivid sketch style of hand-drawn sketch images. Experimental results demonstrate that the proposed approach achieves superior performance compared to state-of-the-art approaches, in both visual perception and objective evaluation. Specifically, it attains higher FSIM values (0.7345 and 0.7080) and Scoot values (0.5317 and 0.5091) than most comparison methods on the CUFS and CUFSF datasets, respectively. (c) 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
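The abstract names three training terms (an adversarial loss, a detail loss, and a style loss) but does not give their formulas on this page. The PyTorch sketch below is only an illustration under stated assumptions: feat_extractor, lambda_detail, and lambda_style are hypothetical names; the detail loss is approximated here as an L1 match on image gradients, and the style loss as the standard Gram-matrix distance from neural style transfer, either of which may differ from the paper's actual definitions.

import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # (B, C, H, W) -> (B, C, C) matrix of channel correlations
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def generator_loss(d_fake_logits, fake_sketch, real_sketch,
                   feat_extractor, lambda_detail=10.0, lambda_style=1.0):
    # Adversarial term: non-saturating GAN loss on the discriminator's
    # logits for the synthesized sketch.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Detail term (assumed form): L1 distance between horizontal and
    # vertical image gradients, encouraging matching fine structures.
    dx_f = fake_sketch[..., :, 1:] - fake_sketch[..., :, :-1]
    dx_r = real_sketch[..., :, 1:] - real_sketch[..., :, :-1]
    dy_f = fake_sketch[..., 1:, :] - fake_sketch[..., :-1, :]
    dy_r = real_sketch[..., 1:, :] - real_sketch[..., :-1, :]
    detail = F.l1_loss(dx_f, dx_r) + F.l1_loss(dy_f, dy_r)
    # Style term (assumed form): Gram-matrix distance between deep
    # features of the synthesized and hand-drawn sketches.
    style = F.mse_loss(gram_matrix(feat_extractor(fake_sketch)),
                       gram_matrix(feat_extractor(real_sketch)))
    return adv + lambda_detail * detail + lambda_style * style

In a typical training loop, this generator objective would alternate with a discriminator update, as in standard GAN training; feat_extractor would usually be a frozen pretrained CNN.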
