
A neural model of visually-guided navigation and object tracking in a cluttered world: Computing ego and object motion in a model of the primate magnocellular pathway.



Abstract

Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. This thesis introduces the Visually-guided Steering, Tracking, Avoidance and Route Selection (ViSTARS) model, which proposes how primates use motion information to segment objects and determine heading, or direction of travel, for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by describing processes performed by neurons in several areas of the primate magnocellular pathway, from retina through V1, MT and MST. In particular, ViSTARS predicts how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking, and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes and within 3° in video streams sampled while driving in real-world environments. Simulated camera or eye rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as they do in humans. Model MT- computes ON-center OFF-surround differential motion signals and interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance. ViSTARS demonstrates that processing in the primate magnocellular pathway can provide sufficient information for human-like performance even with low-resolution, noisy inputs.
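The steering behavior described in the abstract, with goals acting as attractors and obstacles as repellers, can be pictured with a minimal sketch. The Python snippet below is not the ViSTARS implementation; the gains, the angular falloff and the distance weighting are illustrative assumptions, showing only a toy dynamical system in the same spirit.

import numpy as np

def steering_rate(heading, goal_angle, obstacle_angles, obstacle_dists,
                  k_goal=2.0, k_obs=1.5, c_obs=0.8):
    """Return an illustrative turning rate d(heading)/dt (radians/s)."""
    # Attraction: turn toward the goal in proportion to the angular error.
    d_heading = -k_goal * (heading - goal_angle)
    # Repulsion: each obstacle pushes the heading away from its direction,
    # more weakly when it is far away or already well off to the side.
    for ang, dist in zip(obstacle_angles, obstacle_dists):
        err = heading - ang
        d_heading += k_obs * err * np.exp(-c_obs * abs(err)) * np.exp(-dist)
    return d_heading

# Example: goal slightly to the left, one obstacle nearly straight ahead.
heading = 0.0
for _ in range(200):
    heading += 0.01 * steering_rate(heading,
                                    goal_angle=-0.3,
                                    obstacle_angles=[0.05],
                                    obstacle_dists=[2.0])
print(f"final heading offset: {heading:.2f} rad")  # settles left of the obstacle

In this toy dynamics the attraction term grows with the angular error to the goal, while each repulsion term decays with obstacle distance and angular offset, so the simulated heading settles near the goal direction while skirting the obstacle, qualitatively matching the attractor-repeller behavior the model attributes to combined object and heading signals.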

Bibliographic details

  • Author: Browning, Neil Andrew
  • Author affiliation: Boston University
  • Degree-granting institution: Boston University
  • Subject: Biology, Neuroscience; Psychology, Cognitive
  • Degree: Ph.D.
  • Year: 2009
  • Pages: 119 p.
  • Total pages: 119
  • Format: PDF
  • Language: English
  • Classification: Neuroscience; Psychology
  • Keywords:
  • Date added: 2022-08-17 11:37:56
