AAAI Conference on Artificial Intelligence

FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis



Abstract

Talking face synthesis has been widely studied with either appearance-based or warping-based methods. Previous works mostly use a single face image as the source and generate novel facial animations by merging another person's facial features. However, facial regions such as the eyes or teeth, which may be hidden in the source image, cannot be synthesized faithfully and stably. In this paper, we present a landmark-driven two-stream network that generates faithful talking facial animation, in which more facial details are created, preserved, and transferred from multiple source images instead of a single one. Specifically, we propose a network consisting of a learning stream and a fetching stream. The fetching sub-net learns to attentively warp and merge facial regions from five source images with distinctive landmarks, while the learning pipeline renders facial organs from the training face space to compensate. Extensive experiments demonstrate that the proposed method outperforms baseline algorithms both quantitatively and qualitatively. Code is available at https://github.com/kgu3/FLNet_AAAI2020.
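The core idea in the abstract is a two-stream fusion: a fetching stream attentively blends pixels warped from several source images, and a learning stream renders regions (e.g. hidden eyes or teeth) that no source can supply, with a mask deciding which stream each pixel trusts. The sketch below illustrates only this fusion arithmetic with NumPy; the function names (`fetch_stream`, `merge_streams`), the shapes, and the random inputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fetch_stream(warped_sources, attention):
    """Attentively merge K pre-warped source images.
    warped_sources: (K, H, W, C); attention: (K, H, W) nonnegative weights.
    """
    # Normalize attention across the K sources so each pixel is a convex blend.
    attention = attention / attention.sum(axis=0, keepdims=True)
    return (warped_sources * attention[..., None]).sum(axis=0)

def merge_streams(fetched, learned, mask):
    """Blend the fetched image with the learned rendering.
    mask in [0, 1]: 1 -> keep fetched pixels, 0 -> fall back to the learned ones.
    """
    return mask[..., None] * fetched + (1.0 - mask[..., None]) * learned

# Toy inputs: five sources (as in the paper) at a tiny 4x4 resolution.
K, H, W, C = 5, 4, 4, 3
rng = np.random.default_rng(0)
warped_sources = rng.random((K, H, W, C))
attention = rng.random((K, H, W))
learned = rng.random((H, W, C))
mask = rng.random((H, W))

out = merge_streams(fetch_stream(warped_sources, attention), learned, mask)
print(out.shape)  # (4, 4, 3)
```

Because both stages are convex combinations, the output stays within the value range of its inputs; in the full model the attention maps and mask would be predicted by the network from the driving landmarks rather than sampled randomly.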
