International Conference on Computational Science, pt. 3; June 6-9, 2004; Krakow, Poland

Towards Believable Behavior Generation for Embodied Conversational Agents


Abstract

This paper reports on the generation of coordinated multimodal output for the NICE (Natural Interactive Communication for Edutainment) system. In its first prototype, the system allows for fun and experientially rich interaction between human users, primarily 10 to 18 years old, and the 3D-embodied fairy tale author H.C. Andersen in his study. User input consists of domain-oriented spoken conversation combined with 2D gestures entered via a mouse-compatible device. The animated character can move about and interact with his environment, as well as communicate with the user through spoken conversation and non-verbal gesture, body posture, facial expression, and gaze. The described approach aims to make the virtual agent's appearance, voice, actions, and communicative behavior convey the impression of a character with human-like behavior, emotions, relevant domain knowledge, and a distinct personality. We propose an approach to multimodal output generation that exploits a richly parameterized semantic instruction from the conversation manager and splits it into synchronized text instructions for the text-to-speech synthesizer and behavioral instructions for the animated character. Based on the implemented version of this approach, we are creating a behavior sub-system that combines the described multimodal output instructions with parameters representing the character's current emotional state, producing animations that express that state through speech and non-verbal behavior.
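The split described in the abstract (one richly parameterized semantic instruction fanning out into a synchronized speech script and a behavior script) can be sketched in a few lines. The following Python is a minimal illustration under assumed interfaces: the `SemanticInstruction` fields, the `<sync id=N/>` marker convention, and the gesture names are hypothetical, not the NICE system's actual formats.

```python
from dataclasses import dataclass, field

@dataclass
class SemanticInstruction:
    """A parameterized output instruction from the conversation manager (assumed shape)."""
    utterance: str                                            # text to speak, with inline <sync id=N/> markers
    gestures: dict[int, str] = field(default_factory=dict)    # sync id -> gesture name (hypothetical vocabulary)
    emotion: dict[str, float] = field(default_factory=dict)   # current emotional state, e.g. {"joy": 0.8}

def split_instruction(instr: SemanticInstruction):
    """Split one instruction into a TTS script and a behavior script
    that share synchronization points (the <sync/> marker ids)."""
    tts_script = instr.utterance  # handed to the text-to-speech synthesizer as-is
    behavior_script = [
        {"at_sync": sync_id, "action": gesture, "emotion": instr.emotion}
        for sync_id, gesture in sorted(instr.gestures.items())
    ]
    return tts_script, behavior_script

# Usage: a greeting where the character waves at sync point 1 and smiles at sync point 2.
instr = SemanticInstruction(
    utterance="Welcome to my study! <sync id=1/> Let me show you around. <sync id=2/>",
    gestures={1: "wave_right_hand", 2: "smile"},
    emotion={"joy": 0.8},
)
tts, behaviors = split_instruction(instr)
print(tts)
print(behaviors)
```

Synchronization here rides on marker ids shared by both scripts; a real system would additionally need the synthesizer to report when each marker is reached, so the animation engine can trigger the matching gesture and modulate it by the emotion parameters.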
