Frontiers in Psychology

Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements



Abstract

How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
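To make the predictive-remapping idea concrete, below is a minimal Python sketch. It is not the authors' published implementation; the Gaussian shroud, the grid size, and names like `make_shroud` and `remap` are illustrative assumptions. The sketch shows a form-fitting attention distribution on a retinotopic grid being shifted by a corollary-discharge copy of the planned saccade, so that its peak already sits at the object's new retinal position when the eyes land.

```python
# A minimal sketch (not the 3D ARTSCAN model code) of predictive remapping:
# an attentional "shroud" over a retinotopic grid is shifted by a corollary
# discharge of the planned eye movement, keeping attention locked on the
# object across the saccade. All names here are illustrative assumptions.
import numpy as np

def make_shroud(grid_size, center, sigma=3.0):
    """Form-fitting spatial attention, approximated here by a Gaussian bump."""
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def remap(shroud, eye_shift):
    """Shift the retinotopic map by the corollary discharge (dy, dx)
    *before* the eyes land; a world-fixed object's retinal position
    moves opposite to the eye movement."""
    dy, dx = eye_shift
    return np.roll(np.roll(shroud, -dy, axis=0), -dx, axis=1)

grid = 32
object_pos = (16, 16)          # object location in head-centered coordinates
eye = np.array([0, 0])         # current eye position
shroud = make_shroud(grid, object_pos)

saccade = np.array([5, -3])    # planned eye movement (dy, dx)
predicted = remap(shroud, saccade)   # remapping uses the motor plan,
eye = eye + saccade                  # not the post-saccadic retinal input
retinal_pos = (object_pos[0] - eye[0], object_pos[1] - eye[1])

# After the saccade, the remapped shroud peak coincides with the object's
# new retinal position, so attention and fusion need not be rebuilt.
peak = np.unravel_index(np.argmax(predicted), predicted.shape)
assert peak == retinal_pos
print("shroud peak:", peak, "object retinal position:", retinal_pos)
```

In the model described by the abstract, the shift is carried out by learned coordinate transformations between retinotopic and spatial maps rather than by an explicit array roll, but the stabilizing role of the eye movement plan is the same: the motor signal, not the new retinal image, keeps the shroud and the fused boundary and surface representations aligned.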
