Journal of Vision

Auditory scene context, visual object identification, and spatial frequency

Abstract

How do we use cross-modal cues to accurately identify the objects and scenes we see and hear? Furthermore, how do the different sensory processes influence each other during identification? Participants were presented with an auditory context for 5 s before a target object was briefly presented. Observers then identified both the auditory scene and the visual object. These questions were examined with objects presented at high and low spatial frequency, in congruent, incongruent, or neutral (white noise) contextual relations. Additionally, two levels of object and contextual constraint, defined in a pilot study, were examined. Auditory scenes and visual objects that were more easily (i.e., more accurately) identified were categorized as "strong" stimuli and were paired with each other. Less accurately identified auditory scenes were paired with less accurately identified visual objects and were categorized as "weak" (ambiguous) stimuli. The first set of results concerns object identification. When paired with a strong auditory context, congruently paired objects were identified more accurately than objects in incongruent or neutral contexts, and these results were similar across spatial frequencies. With weak contexts, the question was whether two weak sources of information (e.g., scene and object) could combine to facilitate identification. The data suggest that such effects were not present: in the main experiment, and in additional experiments run for statistical power, there was no advantage for congruent contexts over incongruent or neutral contexts. However, there was an unexpected main effect of spatial frequency for these "weak" stimuli: high spatial frequency objects were better identified across all contextual relation conditions, in contrast to the strong-constraint stimuli. There was also a small reciprocal effect for auditory scene identification: congruent auditory scenes were identified somewhat better than incongruent scenes. These results provide new information about the detailed interactions between sources of information in multimodal identification.
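For a concrete picture of the design, the sketch below is a minimal, purely illustrative enumeration (in Python) of the factor crossing described above: contextual relation (congruent, incongruent, or neutral white noise) x spatial frequency (high or low) x constraint strength (strong or weak), together with the reported 5 s auditory lead time. The variable names and the printed trial listing are assumptions made for illustration; they are not taken from the authors' materials.

    # Illustrative only: enumerate the 3 x 2 x 2 design described in the abstract.
    # Names and structure are assumptions, not the authors' code.
    from itertools import product

    CONTEXT_DURATION_S = 5.0  # auditory scene context precedes the target by 5 s
    # The exact target duration is not given in the abstract ("briefly presented").

    contextual_relations = ["congruent", "incongruent", "neutral"]  # neutral = white noise
    spatial_frequencies = ["high", "low"]
    constraint_levels = ["strong", "weak"]  # defined via pilot identification accuracy

    # Cross the factors to list every cell of the design; on each trial the
    # observer identifies both the auditory scene and the visual object.
    for relation, sf, constraint in product(contextual_relations, spatial_frequencies, constraint_levels):
        print(f"{constraint:>6} constraint | {sf:>4} SF object | {relation} auditory context")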