ACM SIGGRAPH Asia

Real-time prosody-driven synthesis of body language


Abstract

Human communication involves not only speech, but also a wide variety of gestures and body motions. Interactions in virtual environments often lack this multi-modal aspect of communication. We present a method for automatically synthesizing body language animations directly from the participants' speech signals, without the need for additional input. Our system generates appropriate body language animations by selecting segments from motion capture data of real people in conversation. The synthesis can be performed progressively, with no advance knowledge of the utterance, making the system suitable for animating characters from live human speech. The selection is driven by a hidden Markov model and uses prosody-based features extracted from speech. The training phase is fully automatic and does not require hand-labeling of input data, and the synthesis phase is efficient enough to run in real time on live microphone input. User studies confirm that our method is able to produce realistic and compelling body language.
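As a rough illustration only (not the authors' implementation), the following minimal Python sketch shows how an online Viterbi-style update over a toy hidden Markov model could map a stream of prosody feature frames (e.g. pitch and intensity) to motion-segment labels, one decision per frame with no lookahead, in the spirit of the progressive synthesis the abstract describes. All model parameters here are random placeholders standing in for the ones the paper learns from motion capture and speech data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy HMM standing in for the paper's learned model (all values are placeholders).
# Hidden states correspond to clusters of motion-capture segments; emissions are
# 2-D prosody feature vectors (e.g. normalized pitch and intensity per frame).
N_STATES = 4
A_log = np.log(rng.dirichlet(np.ones(N_STATES), size=N_STATES))  # transitions
means = rng.normal(size=(N_STATES, 2))                           # emission means
VAR = 0.5                                                        # shared variance

def emit_logpdf(obs):
    """Log-density of one prosody frame under each state's isotropic Gaussian."""
    d2 = ((means - obs) ** 2).sum(axis=1)
    return -0.5 * d2 / VAR - np.log(2.0 * np.pi * VAR)

def step(log_delta, obs):
    """One incremental Viterbi update: fold in the newest observation and
    return the updated per-state scores plus the current best state."""
    log_delta = (log_delta[:, None] + A_log).max(axis=0) + emit_logpdf(obs)
    return log_delta, int(np.argmax(log_delta))

# Demo: a stream of (fake) prosody frames drives a stream of segment labels.
log_delta = np.full(N_STATES, -np.log(N_STATES))  # uniform start distribution
for t in range(10):
    frame = rng.normal(size=2)  # stand-in for live pitch/intensity features
    log_delta, state = step(log_delta, frame)
    print(f"frame {t}: play a segment from motion cluster {state}")
```

In a live setting, each incoming microphone frame would be converted to prosody features and pushed through `step`, so the character's next motion segment is chosen with no knowledge of the rest of the utterance.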
