Neuropsychologia

Neural responses elicited to face motion and vocalization pairings.



Abstract

During social interactions our brains continuously integrate incoming auditory and visual input from the movements and vocalizations of others. Yet, the dynamics of the neural events elicited by these multisensory stimuli remain largely uncharacterized. Here we recorded audiovisual scalp event-related potentials (ERPs) to dynamic human faces paired with human vocalizations. Audiovisual controls were a dynamic monkey face paired with a species-appropriate vocalization, and a house with an opening front door paired with a creaking door sound. Subjects decided whether audiovisual stimulus trials were congruent (e.g. human face with human sound) or incongruent (e.g. house image with monkey sound). An early auditory ERP component, the N140, was largest in response to human and monkey vocalizations. This effect was strongest in the presence of the dynamic human face, suggesting that species-specific visual information can modulate auditory ERP characteristics. A motion-induced visual N170 did not change in amplitude or latency across visual motion categories in the presence of sound. A species-specific incongruity response, a late positive ERP at around 400 ms (P400), was selectively larger only when human faces were mismatched with a non-human sound. We also recorded visual ERPs at trial onset and found that the category-specific N170 did not alter its behavior as a function of stimulus category, which was somewhat unexpected given that two face types were contrasted with a house image. In conclusion, we present evidence for species specificity in vocalization selectivity in early ERPs, and in a multisensory incongruity response whose amplitude is modulated only when human face motion is paired with an incongruous auditory stimulus.
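The kind of analysis implied by this abstract, epoching continuous EEG around stimulus onset, baseline-correcting, and averaging by condition so that component amplitudes (e.g. N140, P400) can be compared between congruent and incongruent trials, can be sketched with MNE-Python. This is a minimal illustration under assumed inputs, not the authors' actual pipeline: the file name, trigger channel, and event codes below are hypothetical.

```python
# Hedged sketch of condition-wise ERP averaging, using MNE-Python.
# File name, stim channel, and event codes are illustrative assumptions.
import mne

raw = mne.io.read_raw_fif("audiovisual_raw.fif", preload=True)  # hypothetical recording
raw.filter(0.1, 30.0)  # typical ERP band-pass

# Assumed trigger codes for the four audiovisual pairing conditions.
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"congruent/human": 1, "congruent/monkey": 2,
            "incongruent/human": 3, "incongruent/monkey": 4}

# Epoch around stimulus onset; baseline-correct on the pre-stimulus interval.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.6,
                    baseline=(None, 0), preload=True)

# Condition averages: effects like the N140 vocalization response or the
# P400 incongruity response would show up as amplitude differences between
# these evoked waveforms in the corresponding time windows.
evoked_congruent = epochs["congruent"].average()
evoked_incongruent = epochs["incongruent"].average()
mne.viz.plot_compare_evokeds({"congruent": evoked_congruent,
                              "incongruent": evoked_incongruent},
                             picks="eeg")
```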
