Published in: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

An Object-Based Visual Attention Model for Robotic Applications


Abstract

By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts from a fast attentional selection procedure. The proposed attention model comprises seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top–down biasing, bottom–up competition, mediation between the top–down and bottom–up pathways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, when an object is attended, the corresponding object representation is trained statistically. A dual-coding object representation consisting of local and global codings is proposed: intensity, color, and orientation features build the local coding, and a contour feature constitutes the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top–down biases. Mediation between automatic bottom–up competition and conscious top–down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally passed to the perceptual completion processing module to yield a complete object region. The model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions validate the model.
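The attending-phase selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mediation between bottom–up and top–down maps is modeled here as a simple convex combination, and per-proto-object pooling as a mean over each mask, both of which are assumptions (the abstract only states that the two pathways are mediated and that location-based saliency is combined within each proto-object).

```python
import numpy as np

def mediate(bottom_up, top_down, weight=0.5):
    """Blend automatic bottom-up saliency with conscious top-down bias.
    A convex combination is an assumption; the paper's mediation rule
    may differ."""
    return (1.0 - weight) * bottom_up + weight * top_down

def select_proto_object(location_saliency, proto_masks):
    """Pool location-based saliency inside each proto-object mask
    (mean pooling assumed) and return the index of the most salient
    proto-object together with all per-proto-object scores."""
    scores = [float(location_saliency[mask].mean()) for mask in proto_masks]
    return int(np.argmax(scores)), scores

# Toy 2x2 visual field with two proto-objects (left and right columns).
bottom_up = np.array([[0.1, 0.9],
                      [0.2, 0.1]])
top_down = np.zeros_like(bottom_up)          # no task-specific bias
saliency = mediate(bottom_up, top_down, weight=0.0)  # pure bottom-up
masks = [np.array([[True, False], [True, False]]),
         np.array([[False, True], [False, True]])]
winner, scores = select_proto_object(saliency, masks)
```

Here the right-column proto-object wins because its mean saliency (0.5) exceeds the left column's (0.15); a nonzero `weight` would let a task-relevant top-down map shift the outcome.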
