
Information-theoretic model comparison unifies saliency metrics


Abstract

Learning the properties of an image associated with human gaze placement is important both for understanding how biological systems explore the environment and for computer vision applications. There is a large literature on quantitative eye movement models that seek to predict fixations from images (sometimes termed “saliency” prediction). A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the primary reason for these inconsistencies is that different metrics and models use different definitions of what a “saliency map” entails. For example, some metrics expect a model to account for image-independent central fixation bias, whereas others penalize a model that does. Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain. We jointly optimize the scale, the center bias, and the spatial blurring of all models within this framework. Evaluating existing metrics on these rephrased models produces almost perfect agreement in model rankings across the metrics. Model performance is separated from center bias and spatial blurring, avoiding the confounding of these factors in model comparison. We additionally provide a method to show where and how models fail to capture information in the fixations at the pixel level. These methods are readily extended to spatiotemporal models of fixation scanpaths, and we provide a software package to facilitate their use.
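The information-gain evaluation described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `to_log_density` is a hypothetical helper that only normalizes a saliency map into a probability density, whereas the paper's framework additionally fits the scale, center bias, and spatial blurring of each model. Information gain is then the average log-likelihood advantage (in bits per fixation) of the model's density over a baseline density, evaluated at the fixated pixels.

```python
import numpy as np

def to_log_density(saliency, eps=1e-12):
    """Turn a nonnegative saliency map into a log probability density
    over pixels (values sum to 1). Hypothetical helper: the full
    framework would also optimize scale, center bias, and blur."""
    p = saliency.astype(float) + eps
    p = p / p.sum()
    return np.log(p)

def information_gain(log_p_model, log_p_baseline, fixations):
    """Mean information gain in bits per fixation: the difference in
    log density between model and baseline at each fixated pixel,
    converted from nats to bits."""
    r, c = fixations[:, 0], fixations[:, 1]
    return (log_p_model[r, c] - log_p_baseline[r, c]).mean() / np.log(2)

# Toy example: a 4x4 image where the model concentrates probability
# mass on the pixel that observers actually fixate.
saliency = np.ones((4, 4))
saliency[1, 2] = 13.0                 # model strongly predicts this pixel
fixations = np.array([[1, 2], [1, 2]])  # two fixations land there

log_model = to_log_density(saliency)
log_uniform = to_log_density(np.ones((4, 4)))  # uniform baseline

ig = information_gain(log_model, log_uniform, fixations)
```

A positive `ig` means the model explains the fixations better than the uniform baseline; in the paper's framing, a more demanding baseline (e.g. an image-independent center-bias model) can be substituted for the uniform density.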
