Journal: The quarterly journal of experimental psychology: QJEP

Increased discriminability of authenticity from multimodal laughter is driven by auditory information

Abstract

We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
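The central analysis described above relates unimodal affective ratings to judgments of multimodal authenticity. As a rough illustration only, not the authors' analysis code, the sketch below fits that kind of multiple regression on synthetic data in Python; the sample size, rating scale, and all variable names (audio_rating, visual_rating, av_rating) are hypothetical.

```python
# Minimal sketch of the kind of analysis the abstract describes:
# predicting audiovisual (multimodal) authenticity ratings from
# audio-only and visual-only affective ratings. All data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clips = 120  # hypothetical number of rated laughter clips

# Synthetic unimodal affective ratings on a hypothetical 1-7 scale.
audio_rating = rng.uniform(1, 7, n_clips)
visual_rating = rng.uniform(1, 7, n_clips)

# Synthetic multimodal ratings built with a larger auditory weight,
# mirroring the pattern the abstract reports (audio dominates).
av_rating = (0.6 * audio_rating + 0.2 * visual_rating
             + rng.normal(0, 0.5, n_clips))

# Ordinary least squares: av_rating ~ const + audio_rating + visual_rating.
X = sm.add_constant(np.column_stack([audio_rating, visual_rating]))
fit = sm.OLS(av_rating, X).fit()

print(fit.params)   # coefficients: intercept, audio, visual
print(fit.pvalues)  # both predictors come out significant on this data
```

The published analysis may well use a more appropriate model (e.g., mixed effects over raters and stimuli); plain OLS is used here only to make the predictive relationship between unimodal and multimodal ratings concrete.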

