Double Encoder Conditional GAN for Facial Expression Synthesis

Abstract

Photorealistic facial expression synthesis from a single face image is a highly challenging research task, in part due to the paucity of labeled and paired facial expression samples. Most existing facial expression synthesis methods attempt to learn the transformation between expression domains and therefore require paired samples as well as a labeled query image. In this paper, we propose the Double Encoder Conditional GAN (DECGAN) for facial expression synthesis. Generative Adversarial Networks (GANs) have been shown to successfully approximate complex data distributions, and conditional GANs (cGANs), which incorporate external information, can capture specific relationships between images. This motivates us to modify the GAN structure and use the target facial expression feature as the condition. We propose two encoders that encode the original expression and the target expression, respectively, to extract the latent vector and the conditional label feature of the real image. Meanwhile, associative learning is used to associate unpaired original expressions with target expressions in the database and to share identity information.
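The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of the double-encoder idea described in the abstract: one encoder extracts a latent vector from the source face, a second encoder extracts a condition feature from a face showing the target expression, and a decoder synthesizes the output from their concatenation. The image size (128x128), feature dimension, and layer configuration are assumptions for illustration, and the associative learning step for pairing samples is omitted.

```python
# Minimal sketch of a double-encoder conditional generator (assumed
# architecture, not the DECGAN paper's code). Assumes 128x128 RGB inputs.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Downsampling block: halves the spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class Encoder(nn.Module):
    """Maps a face image to a flat feature vector (latent or condition)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32),     # 128 -> 64
            conv_block(32, 64),    # 64 -> 32
            conv_block(64, 128),   # 32 -> 16
            conv_block(128, 256),  # 16 -> 8
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Maps the concatenated [latent, condition] vector back to an image."""
    def __init__(self, in_dim=256):
        super().__init__()
        self.fc = nn.Linear(in_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1),  # 8 -> 16
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),   # 16 -> 32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),    # 32 -> 64
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1),     # 64 -> 128
            nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 256, 8, 8)
        return self.net(h)


class DoubleEncoderGenerator(nn.Module):
    """Encodes the source face and the target-expression face separately,
    then decodes the concatenation of the two feature vectors."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.source_encoder = Encoder(feat_dim)  # original expression
        self.target_encoder = Encoder(feat_dim)  # target expression (condition)
        self.decoder = Decoder(2 * feat_dim)

    def forward(self, source_img, target_img):
        z = self.source_encoder(source_img)           # latent vector
        c = self.target_encoder(target_img)           # condition feature
        return self.decoder(torch.cat([z, c], dim=1))


if __name__ == "__main__":
    # Usage: synthesize faces carrying the target expression from source faces.
    gen = DoubleEncoderGenerator()
    src = torch.randn(2, 3, 128, 128)  # source faces
    tgt = torch.randn(2, 3, 128, 128)  # faces showing the target expression
    out = gen(src, tgt)
    print(out.shape)  # torch.Size([2, 3, 128, 128])
```

Concatenating the two feature vectors before decoding mirrors how a cGAN feeds a condition alongside the latent code; here the condition is itself extracted from a target-expression image rather than taken from a discrete label, consistent with the abstract's description of using the target facial expression feature as the condition.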
