International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT 2005), July 5-8, 2005, Paris, France

Contextual Factors and Adaptative Multimodal Human-Computer Interaction: Multi-level Specification of Emotion and Expressivity in Embodied Conversational Agents



Abstract

In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors should depend not only on factors related to her individuality, such as her culture, her social and professional role, and her personality, but also on a set of contextual variables (such as her interlocutor and the social setting of the conversation) and other dynamic variables (beliefs, goals, emotions). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System that we have developed. We explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g. which contextual cues and levels of representation are required to enable proper recognition of the emotions).
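
To make the multi-level specification concrete, the following is a minimal, hypothetical Python sketch of how individuality, conversational context, and dynamic state could feed the selection of expressivity parameters. All class, field, and parameter names here are illustrative assumptions, not the authors' actual representation scheme or the Expressive Agent System API.

from dataclasses import dataclass, field

@dataclass
class Individuality:
    culture: str              # e.g. "FR"
    social_role: str          # e.g. "TV host"
    personality: str          # e.g. "extravert"

@dataclass
class ConversationContext:
    interlocutor: str         # who the agent is addressing
    setting: str              # e.g. "formal interview", "casual chat"

@dataclass
class DynamicState:
    beliefs: list = field(default_factory=list)
    goals: list = field(default_factory=list)
    emotion: str = "neutral"
    intensity: float = 0.0    # 0.0 .. 1.0

def select_expressivity(ind: Individuality,
                        ctx: ConversationContext,
                        state: DynamicState) -> dict:
    """Toy mapping from the three specification levels to two
    expressivity parameters (spatial extent, gesture rate)."""
    params = {"spatial_extent": 0.5, "gesture_rate": 0.5}
    if ind.personality == "extravert":
        params["gesture_rate"] += 0.2          # livelier baseline gesturing
    if ctx.setting == "formal interview":
        params["spatial_extent"] -= 0.2        # more restrained in formal settings
    params["gesture_rate"] += 0.3 * state.intensity  # stronger emotion, more movement
    # clamp every parameter to [0, 1]
    return {k: max(0.0, min(1.0, v)) for k, v in params.items()}

if __name__ == "__main__":
    agent = Individuality(culture="FR", social_role="TV host", personality="extravert")
    ctx = ConversationContext(interlocutor="guest", setting="formal interview")
    state = DynamicState(emotion="joy", intensity=0.8)
    print(select_expressivity(agent, ctx, state))

The point of the sketch is only that behavior selection takes input from all three levels at once (stable individuality, situational context, and dynamic emotional state), which is the dependency structure the abstract describes.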

