
Multimodal Affect: Perceptually Evaluating an Affective Talking Head


Abstract

Many tasks such as driving or rapidly sorting items can be best achieved by direct actions. Other tasks such as giving directions, being guided through a museum, or organizing a meeting are more easily solved verbally. Since computers are increasingly being used in all aspects of daily life, it would be of great advantage if we could communicate verbally with them. Although advanced interactions with computers are possible, a vast majority of interactions are still based on the WIMP (Window, Icon, Menu, Point) metaphor [Hevner and Chatterjee 2010] and are, therefore, via simple text and gesture commands. The field of affective interfaces is working toward making computers more accessible by giving them (rudimentary) natural-language abilities, including using synthesized speech, facial expressions, and virtual body motions. Once the computer is granted a virtual body, however, it must be given the ability to use it to nonverbally convey socio-emotional information (such as emotions, intentions, mental state, and expectations) or it will likely be misunderstood. Here, we present a simple affective talking head along with the results of an experiment on the multimodal expression of emotion. The results show that although people can sometimes recognize the intended emotion from the semantic content of the text even when the face does not convey affect, they are considerably better at it when the face also shows emotion. Moreover, when both face and text convey emotion, people can detect different levels of emotional intensity.

Bibliographic Details

  • Source
    ACM Transactions on Applied Perception (TAP) | 2015, No. 4 | pp. 17.1-17.17 | 17 pages
  • Author Affiliations

    Brandenburg University of Technology, Brandenburgische Technische Universitat, Institut fuer Informatik, Lehrstuhl Grafische Systeme, Konrad-Wachsmann-Allee 5, 03046 Cottbus, Deutschland;

    Brandenburg University of Technology, Brandenburgische Technische Universitat, Institut fuer Informatik, Lehrstuhl Grafische Systeme, Konrad-Wachsmann-Allee 5, 03046 Cottbus, Deutschland;

    Brandenburg University of Technology, Brandenburgische Technische Universitat, Institut fuer Informatik, Lehrstuhl Grafische Systeme, Konrad-Wachsmann-Allee 5, 03046 Cottbus, Deutschland;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

    Affective interfaces; emotion; speech; facial animation


