
Ultra-rapid object categorization in real-world scenes with top-down manipulations


Abstract

Humans recognize visual objects rapidly and effortlessly. Object categorization is commonly believed to arise from an interaction between bottom-up and top-down cognitive processing. In ultra-rapid categorization, where stimuli appear only briefly and response time is limited, a first feedforward sweep of information is assumed to be sufficient to discriminate whether an object is present in a scene. However, whether and how feedback/top-down processing is involved within such a brief duration remains an open question. Here, we examine how different top-down manipulations, such as category level, category type, and real-world size, interact in ultra-rapid categorization. We constructed a dataset of real-world scene images with a built-in measurement of target-object display size. Using these images, we measured ultra-rapid object categorization performance in human subjects. Standard feedforward computational models representing scene features, together with a state-of-the-art object detection model, were employed for auxiliary investigation. The results show influences of 1) animacy (animal, vehicle, food), 2) level of abstraction (people, sport), and 3) real-world size (four target size levels) on ultra-rapid categorization, supporting the involvement of top-down processing when certain objects, such as sport at a fine-grained level, are rapidly categorized. Our human-versus-model comparisons also shed light on possible collaboration and integration between the two, which may interest both experimental and computational vision researchers. All collected images and behavioral data, as well as code and models, are publicly available at https://osf.io/mqwjz/.
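The feedforward baseline the abstract describes can be illustrated with a toy sketch. The feature extractor (coarse grid-mean intensities, a crude stand-in for GIST-like scene statistics), the nearest-centroid classifier, and the synthetic bright/dark "scenes" below are all illustrative assumptions for exposition, not the models or data actually used in the paper:

```python
import numpy as np

def grid_features(img, grid=4):
    """Coarse feedforward features: mean intensity of each grid cell
    (a toy stand-in for GIST-like scene statistics)."""
    h, w = img.shape
    return np.array([
        img[i * h // grid:(i + 1) * h // grid,
            j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ])

def nearest_centroid_fit(X, y):
    """One centroid per category in feature space."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each feature vector to the closest centroid."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in labels])
    return np.array([labels[k] for k in dists.argmin(axis=0)])

rng = np.random.default_rng(0)
# Synthetic "scenes": category 0 is bright, category 1 is dark.
imgs = np.concatenate([
    0.8 + 0.05 * rng.standard_normal((50, 32, 32)),
    0.2 + 0.05 * rng.standard_normal((50, 32, 32)),
])
labels = np.array([0] * 50 + [1] * 50)

X = np.stack([grid_features(im) for im in imgs])
model = nearest_centroid_fit(X[::2], labels[::2])          # train on half
acc = (nearest_centroid_predict(model, X[1::2]) == labels[1::2]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

A single pass through such a fixed feature extractor plus a linear read-out mirrors the "first feedforward sweep" hypothesis: no iterative feedback or attention-driven re-weighting is applied between stimulus and decision.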
