Constraint-Based Model for Synthesis of Multimodal Sequential Expressions of Emotions

Published in: IEEE Transactions on Affective Computing


Abstract

Emotional expressions play a very important role in the interaction between virtual agents and human users. In this paper, we present a new constraint-based approach to the generation of multimodal emotional displays. The displays generated with our method are not limited to the face, but are composed of different signals partially ordered in time and belonging to different modalities. We also describe the evaluation of the main features of our approach. We examine the role of multimodality, sequentiality, and constraints in the perception of synthesized emotional states. The results of our evaluation show that applying our algorithm improves the communication of a large spectrum of emotional states, while the believability of the agent animations increases with the use of constraints over the multimodal signals.
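The abstract does not detail the model's internals, but the core idea it describes is checking temporal constraints over multimodal signals that are only partially ordered in time. As a hypothetical illustration (the `Signal` structure, relation names, and example display below are invented for this sketch, not taken from the paper), a minimal constraint check might look like:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    # Hypothetical representation of one expressive signal.
    name: str
    modality: str  # e.g. "face", "head", "gesture"
    start: float   # onset time in seconds
    end: float     # offset time in seconds

def satisfies(signals, constraints):
    """Return True if every pairwise temporal constraint holds.

    Each constraint is (a, relation, b), with relation in
    {"before", "overlaps"}: a simplified subset of the temporal
    relations a constraint-based model might enforce.
    """
    by_name = {s.name: s for s in signals}
    for a, rel, b in constraints:
        sa, sb = by_name[a], by_name[b]
        if rel == "before" and sa.end > sb.start:
            return False
        if rel == "overlaps" and not (sa.start < sb.end and sb.start < sa.end):
            return False
    return True

# A toy multimodal display: an eyebrow raise must precede a smile,
# and the smile must overlap a head nod.
display = [
    Signal("raise_eyebrows", "face", 0.0, 0.4),
    Signal("smile", "face", 0.5, 2.0),
    Signal("head_nod", "head", 0.8, 1.4),
]
constraints = [
    ("raise_eyebrows", "before", "smile"),
    ("smile", "overlaps", "head_nod"),
]
```

A synthesis algorithm of the kind the abstract evaluates could use such a predicate to filter candidate animations, keeping only those whose signals respect the constraints; the partial ordering means signals on different modalities are free to shift in time as long as no constraint is violated.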
