PLoS Biology

Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex



Abstract

Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3–7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior, i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.

Author summary

Combining different sources of information is fundamental to many aspects of behavior, from picking up a ringing mobile phone to communicating with a friend in a busy environment. Here, we have studied the integration of auditory and visual speech information. Our work demonstrates that integration relies upon two different representational interactions. One system conveys redundant information by representing information that is common to both auditory and visual modalities. The other system, supported by a different brain area, represents synergistic information by conveying more information than the linear summation of the individual auditory and visual information. Further, we show that these mechanisms are related to behavioral performance. This novel insight opens new ways to enhance our understanding of the mechanisms underlying multimodal information integration, a fundamental aspect of brain function. These fresh insights were achieved by applying a recently developed methodology, the partial information decomposition, to brain imaging data. This methodology also provides a novel and principled way to quantify the interactions between representations of multiple stimulus features in the brain.
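For readers unfamiliar with the partial information decomposition (PID): for a stimulus feature S and two inputs A and V, PID splits the joint mutual information I(S; A, V) into four non-negative terms, I(S; A, V) = Unq_A + Unq_V + Red + Syn, where Red is information carried redundantly by both A and V and Syn is information available only from their combination. The toy sketch below is illustrative only and is not the authors' analysis code; all variable names are hypothetical. It uses the signed co-information I(S;A) + I(S;V) − I(S;A,V) = Red − Syn as a simple proxy on discrete data: a positive value indicates net redundancy, a negative value net synergy.

```python
# Illustrative sketch (not the paper's method, which uses a full PID):
# estimate co-information for discrete toy variables to show how
# redundant vs. synergistic coding differ in sign.
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of a sequence of hashable outcomes."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mutual_info(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

def co_information(s, a, v):
    """I(S;A) + I(S;V) - I(S;A,V) = Red - Syn: sign gives the net effect."""
    return (mutual_info(s, a) + mutual_info(s, v)
            - mutual_info(s, list(zip(a, v))))

rng = np.random.default_rng(0)
n = 100_000

# Redundant case: A and V are independently noisy copies of the same
# stimulus bit, so most of what they tell us about S is shared.
s = rng.integers(0, 2, n)
a = s ^ (rng.random(n) < 0.1)
v = s ^ (rng.random(n) < 0.1)
print(f"noisy copies (net redundancy): {co_information(s, a, v):+.3f} bits")

# Synergistic case (XOR): each input alone carries zero information
# about S, but together they determine it exactly.
a = rng.integers(0, 2, n)
v = rng.integers(0, 2, n)
s = a ^ v
print(f"XOR (net synergy):             {co_information(s, a, v):+.3f} bits")
```

The XOR case makes synergy extreme: I(S;A) and I(S;V) are both zero, yet I(S;A,V) is 1 bit, so the co-information comes out at −1 bit. The paper's analysis goes further than this sign heuristic by decomposing the redundant and synergistic terms separately for each brain region.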
