NeuroImage

Top-down attention regulates the neural expression of audiovisual integration

Abstract

The interplay between attention and multisensory integration has proven to be a difficult question to tackle. There are almost as many studies showing that multisensory integration occurs independently of the focus of attention as studies implying that attention has a profound effect on integration. Addressing the neural expression of multisensory integration for attended vs. unattended stimuli can help disentangle this apparent contradiction. In the present study, we examine whether selective attention to sound pitch influences the expression of audiovisual integration in both behavior and neural activity. Participants were asked to attend to one of two auditory speech streams while watching a pair of talking lips that could be congruent or incongruent with the attended speech stream. We measured behavioral and neural responses (fMRI) to multisensory stimuli under attended and unattended conditions while physical stimulation was kept constant. Our results indicate that participants recognized words more accurately from an auditory stream that was both attended and audiovisually (AV) congruent, reflecting a benefit of AV integration. In contrast, no enhancement was found for AV congruency when it was unattended. Furthermore, the fMRI results indicated that activity in the superior temporal sulcus (an area known to be related to multisensory integration) was contingent on attention as well as on audiovisual congruency. This attentional modulation extended beyond heteromodal areas to affect processing in areas classically recognized as unisensory, such as the superior temporal gyrus or the extrastriate cortex, and in non-sensory areas such as the motor cortex. Interestingly, attention to audiovisual incongruence triggered responses in brain areas related to conflict processing (i.e., the anterior cingulate cortex and the anterior insula). Based on these results, we hypothesize that AV speech integration can take place automatically only when both modalities are sufficiently processed, and that if a mismatch is detected between the AV modalities, feedback from conflict areas minimizes its influence by reducing the processing of the least informative modality.