
An Adaptive Personality Model for ECAs

Abstract

Curtin University's Talking Heads (TH) combine an MPEG-4 compliant Facial Animation Engine (FAE), a Text To Emotional Speech Synthesiser (TTES), and a multi-modal Dialogue Manager (DM) that accesses a Knowledge Base (KB) and outputs Virtual Human Markup Language (VHML) text, which drives the TTES and FAE. A user enters a question and an animated TH responds with a believable and affective voice and actions. However, this response is normally marked up in VHML by the KB developer to produce the required facial gestures and emotional display. A real person does not react according to fixed rules but according to personality, beliefs, good and bad previous experiences, and training. This paper reviews personality theories and models relevant to THs, then discusses the research at Curtin over the last five years on implementing and evaluating personality models. Finally, the paper proposes an active, adaptive personality model to unify that work.
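The pipeline described in the abstract (user question → DM consults the KB → VHML-marked-up text → TTES and FAE rendering) can be summarised in a minimal, purely illustrative sketch. The class names, method signatures, and the VHML tags shown below are assumptions made for exposition only, not the actual Curtin implementation.

```python
# Minimal sketch of the TH pipeline described in the abstract.
# All class names, methods, and VHML tags here are illustrative
# assumptions, not the Curtin University implementation.

class DialogueManager:
    """Maps a user question to VHML-marked-up text via a knowledge base."""
    def __init__(self, knowledge_base: dict):
        self.kb = knowledge_base

    def respond(self, question: str) -> str:
        answer = self.kb.get(question, "I do not know.")
        # In the system described, the KB developer embeds the emotion and
        # gesture markup by hand; the <happy> tag below is a stand-in.
        return f"<vhml><happy>{answer}</happy></vhml>"


class TTES:
    """Text To Emotional Speech Synthesiser: renders VHML text as affective speech."""
    def speak(self, vhml_text: str) -> None:
        print(f"[TTES] synthesising affective speech for: {vhml_text}")


class FAE:
    """MPEG-4 compliant Facial Animation Engine: renders facial gestures."""
    def animate(self, vhml_text: str) -> None:
        print(f"[FAE] animating facial gestures for: {vhml_text}")


if __name__ == "__main__":
    kb = {"What is a Talking Head?": "An embodied conversational agent."}
    dm = DialogueManager(kb)

    vhml = dm.respond("What is a Talking Head?")  # DM + KB -> VHML text
    TTES().speak(vhml)                            # VHML -> emotional speech
    FAE().animate(vhml)                           # VHML -> facial animation
```

The adaptive personality model proposed in the paper would replace the fixed, hand-authored markup step with responses shaped by personality, beliefs, prior experience, and training.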
