Frontiers in Human Neuroscience

On the role of crossmodal prediction in audiovisual emotion perception



Abstract

Humans rely on multiple sensory modalities to determine the emotional state of others. In fact, such multisensory perception may be one of the mechanisms explaining the ease and efficiency with which others' emotions are recognized. But how and when exactly do the different modalities interact? One aspect of multisensory perception that has received increasing interest in recent years is the concept of cross-modal prediction. In emotion perception, as in most other settings, visual information precedes auditory information; leading visual information can thereby facilitate subsequent auditory processing. While this mechanism has often been described in audiovisual speech perception, so far it has not been addressed in audiovisual emotion perception. Based on the current state of the art in (a) cross-modal prediction and (b) multisensory emotion perception research, we propose that it is essential to consider the former in order to fully understand the latter. Focusing on electroencephalographic (EEG) and magnetoencephalographic (MEG) studies, we provide a brief overview of the current research in both fields. In discussing these findings, we suggest that emotional visual information may allow more reliable prediction of subsequent auditory information than non-emotional visual information. In support of this hypothesis, we present a re-analysis of a previous data set showing an inverse correlation between the N1 EEG response and the duration of emotional, but not non-emotional, visual information. If the assumption that emotional content allows more reliable prediction can be corroborated in future studies, cross-modal prediction will be a crucial factor in our understanding of multisensory emotion perception.
