Home > Foreign Journals > Computers & Graphics > Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio

Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio



Abstract

Synthesizing human motion through learning techniques is becoming an increasingly popular approach to reducing the need to capture new motion data when producing animations. Learning to move naturally to music, i.e., to dance, is one of the more complex motions humans often perform effortlessly. Each dance movement is unique, yet such movements maintain the core characteristics of the dance style. Most approaches that address this problem with classical convolutional and recursive neural models suffer from training and variability issues due to the non-Euclidean geometry of the motion manifold. In this paper, we design a novel method based on graph convolutional networks to tackle the problem of automatic dance generation from audio. Our method uses an adversarial learning scheme conditioned on the input music audio to create natural motions that preserve the key movements of different music styles. We evaluate our method with three quantitative metrics for generative methods and a user study. The results suggest that the proposed GCN model outperforms the state-of-the-art music-conditioned dance generation method in different experiments. Moreover, our graph-convolutional approach is simpler, easier to train, and capable of generating more realistic motion styles under both qualitative and quantitative metrics. It also presents a visual movement perceptual quality comparable to real motion data. The dataset and project are publicly available at: https://www.verlab.dcc.ufmg.br/motion-analysis/cag2020. (C) 2020 Elsevier Ltd. All rights reserved.
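The abstract describes an adversarial scheme in which a graph convolutional generator, conditioned on audio features, produces skeleton poses, and a discriminator judges them against real motion. The sketch below is a minimal PyTorch illustration of that idea only, assuming nothing about the authors' actual architecture: the layer sizes, feature dimensions (`audio_dim`, `noise_dim`, `hidden`), and the toy 3-joint skeleton are all hypothetical.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution over the skeleton: X' = ReLU((A_hat X) W)."""
    def __init__(self, in_feats, out_feats, adj):
        super().__init__()
        self.lin = nn.Linear(in_feats, out_feats)
        # Normalized adjacency of the (fixed) skeleton graph.
        self.register_buffer("adj", adj)

    def forward(self, x):                      # x: (batch, joints, feats)
        return torch.relu(self.lin(self.adj @ x))

class DanceGenerator(nn.Module):
    """Maps audio features plus noise to 2-D joint coordinates."""
    def __init__(self, adj, n_joints, audio_dim=32, noise_dim=16, hidden=64):
        super().__init__()
        self.n_joints, self.hidden = n_joints, hidden
        self.expand = nn.Linear(audio_dim + noise_dim, n_joints * hidden)
        self.gcn = GraphConv(hidden, hidden, adj)
        self.out = nn.Linear(hidden, 2)        # 2-D pose per joint

    def forward(self, audio, noise):
        h = self.expand(torch.cat([audio, noise], dim=-1))
        h = h.view(-1, self.n_joints, self.hidden)
        return self.out(self.gcn(h))           # (batch, joints, 2)

class MotionDiscriminator(nn.Module):
    """Scores a pose as real/generated, conditioned on the same audio."""
    def __init__(self, adj, n_joints, audio_dim=32, hidden=64):
        super().__init__()
        self.gcn = GraphConv(2, hidden, adj)
        self.score = nn.Linear(n_joints * hidden + audio_dim, 1)

    def forward(self, pose, audio):
        h = self.gcn(pose).flatten(1)
        return self.score(torch.cat([h, audio], dim=-1))

# Toy 3-joint chain skeleton with self-loops, row-normalized.
A = torch.tensor([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])
A_hat = A / A.sum(1, keepdim=True)
gen = DanceGenerator(A_hat, n_joints=3)
disc = MotionDiscriminator(A_hat, n_joints=3)
audio = torch.randn(4, 32)                     # e.g. per-frame audio features
pose = gen(audio, torch.randn(4, 16))          # (4, 3, 2)
logits = disc(pose, audio)                     # (4, 1)
```

In a full adversarial setup, the generator and discriminator would be trained alternately with a GAN loss, both always receiving the same audio conditioning so the generator cannot ignore the music.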
