
Lifelike Gesture Synthesis and Timing for Conversational Agents


Abstract

Synchronization of synthetic gestures with speech output is one of the goals for embodied conversational agents, which have become a new paradigm for the study of gesture and for human-computer interfaces. In this context, this contribution presents an operational model that enables lifelike gesture animations of an articulated figure to be rendered in real time from representations of spatiotemporal gesture knowledge. Based on various findings on the production of human gesture, the model provides means for motion representation, planning, and control to drive the kinematic skeleton of a figure, which comprises 43 degrees of freedom (DOF) in 29 joints for the main body and 20 DOF for each hand. The model is conceived to enable cross-modal synchrony with respect to the coordination of gestures with the signal generated by a text-to-speech system.
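The cross-modal synchrony the abstract mentions amounts to scheduling gesture phases against timestamps reported by the text-to-speech system. The sketch below assumes the standard preparation/stroke/retraction decomposition of a gesture and a hypothetical TTS timing value; it is not the paper's actual scheduling algorithm.

```python
def schedule_gesture(stroke_target_time, prep_duration,
                     stroke_duration, retraction_duration):
    """Return (start, end) times in seconds for each gesture phase,
    placed so the stroke begins exactly at stroke_target_time."""
    prep_start = stroke_target_time - prep_duration
    stroke_end = stroke_target_time + stroke_duration
    return {
        "preparation": (prep_start, stroke_target_time),
        "stroke": (stroke_target_time, stroke_end),
        "retraction": (stroke_end, stroke_end + retraction_duration),
    }

# Hypothetical example: the TTS engine reports that the word affiliated
# with the gesture starts at t = 1.20 s, so the stroke is anchored there
# and the preparation phase is back-timed to start early enough.
phases = schedule_gesture(1.20, prep_duration=0.40,
                          stroke_duration=0.25, retraction_duration=0.50)
```

Back-timing the preparation from the stroke onset, rather than starting the whole gesture at the word boundary, is what lets the meaningful stroke phase coincide with the affiliated speech.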

