
Bimodal moment-by-moment coupling in perceptual multistability



Abstract

Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether overarching mechanisms drive multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously in both modalities without provoking a dual-task situation, we query auditory perception by direct report while measuring visual perception indirectly via eye movements. A support-vector-machine (SVM)–based classifier allows us to decode visual perception from the eye-tracking data on a moment-by-moment basis. For each timepoint, we compare the visual percept (SVM output) and the auditory percept (report) and quantify the co-occurrence of integrated (one object) or segregated (two objects) interpretations in the two modalities. Our results show an above-chance coupling of auditory and visual perceptual interpretations. By titrating stimulus parameters toward an approximately symmetric distribution of integrated and segregated percepts for each modality and individual, we minimize the amount of coupling expected by chance. Because of the nature of our task, we can rule out that the coupling stems from postperceptual levels (i.e., decision or response interference). Our results thus indicate moment-by-moment perceptual coupling in the resolution of visual and auditory multistability, lending support to theories that postulate joint mechanisms for multistable perception across the senses.
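To make the analysis logic concrete, below is a minimal Python sketch of the pipeline the abstract describes: decode the visual percept from eye-tracking features with an SVM, compare it with the auditory report at each timepoint, and contrast the observed co-occurrence with the chance level implied by the two percept distributions. All variable names, data shapes, and the linear-kernel choice are illustrative assumptions, not the authors' actual code or features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical placeholder data (shapes and coding are assumptions):
# eye_features:   (n_timepoints, n_features) eye-movement features per time bin
# visual_labels:  (n_timepoints,) ground-truth visual percept for SVM training,
#                 e.g. from disambiguated intervals; 0 = integrated, 1 = segregated
# auditory_report:(n_timepoints,) reported auditory percept, same coding
n_timepoints, n_features = 600, 12
eye_features = rng.normal(size=(n_timepoints, n_features))
visual_labels = rng.integers(0, 2, size=n_timepoints)
auditory_report = rng.integers(0, 2, size=n_timepoints)

# Decode the visual percept from eye movements; cross-validation ensures
# each timepoint is predicted by a model that never saw it in training.
svm = SVC(kernel="linear")
visual_decoded = cross_val_predict(svm, eye_features, visual_labels, cv=5)

# Observed moment-by-moment coupling: fraction of timepoints at which
# both modalities carry the same interpretation.
coupling = np.mean(visual_decoded == auditory_report)

# Coupling expected by chance under independence:
#   p_match = p_V * p_A + (1 - p_V) * (1 - p_A)
# This is minimal (0.5) when both percept distributions are symmetric
# (p = 0.5), which is why the stimulus parameters are titrated toward
# symmetric integrated/segregated proportions per modality and observer.
p_v = np.mean(visual_decoded)
p_a = np.mean(auditory_report)
chance = p_v * p_a + (1 - p_v) * (1 - p_a)

print(f"observed coupling: {coupling:.3f}, chance level: {chance:.3f}")
```

With real data, coupling reliably above the chance level computed this way would indicate that the visual and auditory interpretations covary in time rather than merely sharing similar base rates.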
