Journal: User modeling and user-adapted interaction

User preferences can drive facial expressions: evaluating an embodied conversational agent in a recommender dialogue system


Abstract

Tailoring the linguistic content of automatically generated descriptions to the preferences of a target user has been well demonstrated to be an effective way to produce higher-quality output, and may even have a greater impact on user behaviour. It is known that the non-verbal behaviour of an embodied agent can have a significant effect on users' responses to content presented by that agent. However, to date no one has examined the contribution of non-verbal behaviour to the effectiveness of user tailoring in automatically generated embodied output. We describe a series of experiments designed to address this question. We begin by introducing a multimodal dialogue system designed to generate descriptions and comparisons tailored to user preferences, and demonstrate that the user-preference tailoring is detectable to an overhearer when the output is presented as synthesised speech. We then present a multimodal corpus consisting of the annotated facial expressions used by a speaker to accompany the generated tailored descriptions, and verify that the most characteristic positive and negative expressions used by that speaker are identifiable when resynthesised on an artificial talking head. Finally, we combine the corpus-derived facial displays with the tailored descriptions to test whether the addition of the non-verbal channel improves users' ability to detect the intended tailoring, comparing two strategies for selecting the displays: one based on a simple corpus-derived rule, and one making direct use of the full corpus data. The performance of the subjects who saw displays selected by the rule-based strategy was not significantly different from that of the subjects who got only the linguistic content, while the subjects who saw the data-driven displays were significantly worse at detecting the correctly tailored output. We propose a possible explanation for this result, and also make recommendations for developers of future systems that may make use of an embodied agent to present user-tailored content.