The Journal of Neuroscience: The Official Journal of the Society for Neuroscience

Eye Can Hear Clearly Now: Inverse Effectiveness in Natural Audiovisual Speech Processing Relies on Long-Term Crossmodal Temporal Integration



Abstract

Speech comprehension is improved by viewing a speaker's face, especially in adverse hearing conditions, a principle known as inverse effectiveness. However, the neural mechanisms that help to optimize how we integrate auditory and visual speech in such suboptimal conversational environments are not yet fully understood. Using human EEG recordings, we examined how visual speech enhances the cortical representation of auditory speech at a signal-to-noise ratio that maximized the perceptual benefit conferred by multisensory processing relative to unisensory processing. We found that the influence of visual input on the neural tracking of the audio speech signal was significantly greater in noisy than in quiet listening conditions, consistent with the principle of inverse effectiveness. Although envelope tracking during audio-only speech was greatly reduced by background noise at an early processing stage, it was markedly restored by the addition of visual speech input. In background noise, multisensory integration occurred at much lower frequencies and was shown to predict the multisensory gain in behavioral performance at a time lag of ~250 ms. Critically, we demonstrated that inverse effectiveness, in the context of natural audiovisual (AV) speech processing, relies on crossmodal integration over long temporal windows. Our findings suggest that disparate integration mechanisms contribute to the efficient processing of AV speech in background noise.
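As an illustrative aside (this is not the authors' analysis pipeline, which the abstract does not detail), "neural tracking of the speech envelope at a time lag" is commonly quantified by correlating the audio amplitude envelope with the EEG signal after shifting the response by the stimulus-to-response delay. A minimal, dependency-light sketch, assuming NumPy and hypothetical helper names:

```python
import numpy as np


def amplitude_envelope(signal):
    """Broadband amplitude envelope via the analytic signal (Hilbert
    transform), implemented directly with the FFT to avoid SciPy."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    return np.abs(analytic)


def lagged_correlation(envelope, eeg, lag_samples):
    """Pearson correlation between the stimulus envelope and one EEG
    channel, with the EEG taken `lag_samples` after the stimulus.
    At 100 Hz sampling, a ~250 ms lag corresponds to 25 samples."""
    if lag_samples > 0:
        env, resp = envelope[:-lag_samples], eeg[lag_samples:]
    else:
        env, resp = envelope, eeg
    env = env - env.mean()
    resp = resp - resp.mean()
    denom = np.linalg.norm(env) * np.linalg.norm(resp) + 1e-12
    return float(env @ resp / denom)
```

With synthetic data, a "response" built as a 25-sample-delayed, noisy copy of the envelope correlates strongly at the matching lag and weakly at lag zero; real analyses typically use multivariate temporal response functions rather than a single-lag correlation.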
