Animating a Chinese interactive virtual character

Abstract

This paper presents a Chinese interactive virtual character based on multi-modal mapping and rules. The character receives information from the input modules and generates audio-visual speech, facial expressions and body animations. The audio-visual speech is synthesized from the input text by multi-modal mapping, while the facial expressions and body movements are driven by emotion states through a set of rules. All of the original animations are captured with a motion capture system and applied to a character model built in 3D creation software. A skeletal open-source animation engine is used to create the scene in which the virtual character talks with users as a human would. The overall expression of the virtual character is judged to be natural and realistic.
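As a rough illustration of the rule-based part of this pipeline, the sketch below shows how emotion states might be mapped to pre-captured facial and body animation clips. All names (the emotion labels, clip names, and the AnimationCommand structure) are hypothetical assumptions for illustration; the paper does not specify its actual rule format or engine API.

```python
# Minimal sketch of a rule-based emotion-to-animation mapping, assuming the
# motion-capture clips have already been attached to the character model.
# All identifiers here are illustrative, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class AnimationCommand:
    face_clip: str    # pre-captured facial expression clip
    body_clip: str    # pre-captured body movement clip
    intensity: float  # blend weight used when playing the clips

# Rule table: each emotion state selects a pair of clips and a blend weight.
EMOTION_RULES = {
    "neutral":   AnimationCommand("face_neutral",  "body_idle",      0.5),
    "happy":     AnimationCommand("face_smile",    "body_nod",       0.8),
    "sad":       AnimationCommand("face_frown",    "body_slump",     0.7),
    "surprised": AnimationCommand("face_wide_eye", "body_step_back", 0.9),
}

def select_animation(emotion_state: str) -> AnimationCommand:
    """Return the animation command for the given emotion state,
    falling back to the neutral rule when no rule matches."""
    return EMOTION_RULES.get(emotion_state, EMOTION_RULES["neutral"])

if __name__ == "__main__":
    # Example: the input modules report a "happy" emotion state.
    cmd = select_animation("happy")
    print(f"Play {cmd.face_clip} and {cmd.body_clip} at weight {cmd.intensity}")
```

In a full system, the synthesized audio-visual speech would run in parallel with these clips, with the animation engine blending the selected face and body motions onto the skeletal character.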
