Neuroscience: An International Journal under the Editorial Direction of IBRO

COGNITIVE INTEGRATION OF ASYNCHRONOUS NATURAL OR NON-NATURAL AUDITORY AND VISUAL INFORMATION IN VIDEOS OF REAL-WORLD EVENTS: AN EVENT-RELATED POTENTIAL STUDY



Abstract

In this study, we examined the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos paired with asynchronous, semantically consistent or inconsistent natural sound or speech served as stimuli, allowing a comparison of the similarities and differences between the multisensory integration of videos with natural sound and that of videos with speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited regardless of whether the natural sounds were consistent with the critical actions in the videos. Relative to videos with consistent natural sound, videos with inconsistent natural sound elicited N400-P600 effects, similar to findings from unisensory visual studies. Videos with either semantically consistent or inconsistent speech elicited N1 components, while videos with inconsistent speech elicited N400-LPN effects relative to videos with consistent speech, suggesting that this semantic processing is probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, the multisensory integration of videos with natural sound or speech can be roughly divided into two stages. For videos with natural sound, the first stage may reflect the connection between the incoming information and information stored in memory, and the second may reflect the evaluation of inconsistent semantic information. For videos with speech, the first stage resembles that for videos with natural sound, while the second may be related to a recognition memory process.
