Journal: IEEE Transactions on Industrial Electronics

Spatial-Aware Object-Level Saliency Prediction by Learning Graphlet Hierarchies


Abstract

To fill the semantic gap between the predictive power of computational saliency models and human behavior, this paper proposes to predict where people look using spatial-aware object-level cues. While object-level saliency has recently been suggested by psychophysics experiments and shown to be effective in a few computational models, the spatial relationships between objects have not yet been explored in this context. In this work we explicitly model these spatial relationships for the first time, and we also leverage the semantic information of an image to enhance object-level saliency modeling. The core computational module is a graphlet-based deep architecture (graphlets are moderate-sized connected subgraphs), which hierarchically learns a saliency map from raw image pixels to object-level graphlets (oGLs) and further to spatial-level graphlets (sGLs). Eye-tracking data are also used to incorporate human experience into saliency prediction. Experimental results demonstrate that the proposed oGLs and sGLs capture object-level and spatial-level cues relevant to saliency, and that the resulting saliency model performs competitively with the state of the art.
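The paper's own pipeline is not reproduced here, but the central notion of a graphlet — a moderate-sized connected induced subgraph of a larger graph (e.g., a region-adjacency graph over image segments) — can be illustrated with a minimal sketch. The `adj` dictionary below is a hypothetical toy adjacency structure, not data from the paper:

```python
from itertools import combinations

def is_connected(nodes, adj):
    """BFS/DFS connectivity check on the subgraph induced by `nodes`."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def graphlets(adj, k):
    """Enumerate all size-k connected induced subgraphs (graphlets)."""
    return [c for c in combinations(sorted(adj), k) if is_connected(c, adj)]

# Toy region-adjacency graph: a path 0-1-2-3 over four image regions.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(graphlets(adj, 3))  # [(0, 1, 2), (1, 2, 3)]
```

Brute-force enumeration over node combinations is exponential in graph size, which is why practical graphlet-based models restrict themselves to moderate-sized subgraphs, as the abstract notes.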

