Experimental Brain Research

Searching for audiovisual correspondence in multiple speaker scenarios

           

Abstract

A critical question in multisensory processing is how the constant information flow that arrives to our different senses is organized in coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that the fundamental differences in the way in which auditory and visual information is encoded play a limiting role in crossmodal binding. Based on these unisensory limitations, we provide a unified explanation for several previous apparently contradictory findings.

