
Multimodal behavior realization for embodied conversational agents

Abstract

Applications with intelligent conversational virtual humans, called Embodied Conversational Agents (ECAs), seek to bring human-like abilities to machines and establish natural human-computer interaction. In this paper we discuss the realization of ECA multimodal behaviors, which include speech and nonverbal behaviors. We present RealActor, an open-source, multi-platform animation system for real-time multimodal behavior realization for ECAs. The system employs a novel solution for synchronizing gestures and speech using neural networks. It also employs an adaptive face animation model based on the Facial Action Coding System (FACS) to synthesize facial expressions. Our aim is to provide a generic animation system that can help researchers create believable and expressive ECAs.
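For illustration only (this sketch is not taken from the paper): one common way a FACS-based face animation model can drive a character rig is by mapping Action Unit (AU) activations to blendshape weights. The AU numbers below follow FACS, but the blendshape names, gain values, and the simple linear-combination scheme are assumptions made for this sketch, not RealActor's implementation.

# Illustrative sketch: map FACS Action Unit (AU) activations to hypothetical
# blendshape weights. AU numbers follow FACS; blendshape names and gains are
# assumptions for demonstration, not RealActor's actual model.

from typing import Dict

# Assumed linear mapping from a few FACS Action Units to rig blendshapes.
AU_TO_BLENDSHAPES: Dict[str, Dict[str, float]] = {
    "AU1":  {"browInnerUp": 1.0},                        # inner brow raiser
    "AU4":  {"browDown_L": 0.8, "browDown_R": 0.8},      # brow lowerer
    "AU12": {"mouthSmile_L": 1.0, "mouthSmile_R": 1.0},  # lip corner puller
    "AU26": {"jawOpen": 0.9},                            # jaw drop
}

def aus_to_blendshape_weights(au_activations: Dict[str, float]) -> Dict[str, float]:
    """Combine AU activations (0..1) into per-blendshape weights, clamped to [0, 1]."""
    weights: Dict[str, float] = {}
    for au, level in au_activations.items():
        for shape, gain in AU_TO_BLENDSHAPES.get(au, {}).items():
            weights[shape] = min(1.0, weights.get(shape, 0.0) + gain * level)
    return weights

if __name__ == "__main__":
    # Example: a mild smile (AU12) with slightly raised inner brows (AU1).
    print(aus_to_blendshape_weights({"AU12": 0.6, "AU1": 0.3}))

In such a scheme, an expression is specified as a set of AU intensities and the animation layer resolves them into rig parameters each frame; any adaptive, rig-specific tuning would live in the mapping table.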
