
Everybody Dance Now


Abstract

This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We approach this problem as video-to-video translation using pose as an intermediate representation. To transfer the motion, we extract poses from the source subject and apply the learned pose-to-appearance mapping to generate the target subject. We predict two consecutive frames for temporally coherent video results and introduce a separate pipeline for realistic face synthesis. Although our method is quite simple, it produces surprisingly compelling results (see video). This motivates us to also provide a forensics tool for reliable synthetic content detection, which is able to distinguish videos synthesized by our system from real data. In addition, we release a first-of-its-kind open-source dataset of videos that can be legally used for training and motion transfer.
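The pipeline the abstract describes (extract poses from the source video, then apply a learned pose-to-appearance mapping, conditioning each frame on its predecessor for temporal coherence) can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: the paper uses a pix2pixHD-style adversarial generator conditioned on rendered pose stick figures, while `FrameGenerator` here is a toy stand-in, and the random tensors stand in for real pose maps.

```python
# Minimal sketch of sequential pose-to-appearance transfer, assuming PyTorch.
# FrameGenerator is an illustrative placeholder, not the paper's network.
import torch
import torch.nn as nn


class FrameGenerator(nn.Module):
    """Toy generator: maps a pose map plus the previously generated frame
    (for temporal coherence) to the next RGB frame."""

    def __init__(self, pose_channels=3, img_channels=3, width=32):
        super().__init__()
        in_ch = pose_channels + img_channels  # condition on the prior frame
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, img_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pose_map, prev_frame):
        return self.net(torch.cat([pose_map, prev_frame], dim=1))


@torch.no_grad()
def transfer(pose_maps, generator):
    """Synthesize a clip frame by frame: each output is conditioned on the
    source video's pose and on the previous synthesized frame, mirroring
    the two-consecutive-frame scheme the abstract mentions."""
    frames = []
    prev = torch.zeros_like(pose_maps[0])  # zero image bootstraps frame 0
    for pose in pose_maps:
        prev = generator(pose.unsqueeze(0), prev.unsqueeze(0)).squeeze(0)
        frames.append(prev)
    return torch.stack(frames)


# Smoke test: random tensors stand in for rendered pose stick figures.
poses = torch.rand(4, 3, 64, 64)   # 4 frames of 3-channel pose maps
clip = transfer(poses, FrameGenerator())
print(clip.shape)                  # torch.Size([4, 3, 64, 64])
```

Feeding the previous synthesized frame back into the generator is what gives consecutive outputs a shared appearance; the paper additionally trains with a temporal discriminator and a separate face-refinement GAN, both omitted here for brevity.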
