IEEE International Conference on Image Processing

DO DEEP-LEARNING SALIENCY MODELS REALLY MODEL SALIENCY?



Abstract

Visual attention allows the human visual system to deal effectively with the huge flow of visual information acquired by the retina. Since the early 2000s, the human visual system has been modelled in computer vision to predict abnormal, rare, and surprising data. Attention is the product of a continuous interaction between bottom-up (mainly feature-based) and top-down (mainly learning-based) information. Deep learning (DNN) is now well established in visual attention modelling, with very effective models. The goal of this paper is to investigate the relative importance of bottom-up versus top-down attention. First, we enrich classical bottom-up attention models with top-down information. Then, the results are compared with DNN-based models. Our provocative question is: "do deep-learning saliency models really predict saliency, or do they simply detect interesting objects?". We found that while DNN saliency models detect top-down features very accurately, they neglect a lot of bottom-up information that is surprising and rare, and thus by definition difficult to learn.
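When enriched bottom-up models are compared against DNN-based models, the comparison is typically scored against human fixation data with metrics such as Normalized Scanpath Saliency (NSS). The abstract does not name the metrics actually used, so the following is only an illustrative sketch of one standard choice:

```python
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: z-score the predicted map, then
    average its values at human fixation points (row, col). Higher values
    mean the model assigns more (relative) saliency where people looked."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    rows, cols = zip(*fixations)
    return float(s[list(rows), list(cols)].mean())
```

Because the map is standardized first, NSS rewards a model for concentrating saliency at fixated locations rather than for its absolute output scale, which makes heterogeneous models (classical and DNN-based) comparable.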


