Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision

Abstract

The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, non-symbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about the lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistent with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.
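The architecture the abstract describes — one shared visual representation feeding several task-specific outputs — can be sketched as follows. This is a minimal, hypothetical illustration in NumPy, not the paper's implementation: the feature dimension, hidden size, and the number of output classes per head (3 comparison outcomes, 9 quantifiers, 17 proportion bins) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Return a randomly initialized weight matrix and zero bias."""
    return rng.normal(scale=0.1, size=(in_dim, out_dim)), np.zeros(out_dim)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared encoder: precomputed image features -> common representation.
W_enc, b_enc = linear(2048, 512)

# Three task heads over the SAME shared representation
# (this sharing is what makes the model multi-task):
W_cmp, b_cmp = linear(512, 3)    # set comparison: fewer / same / more
W_qnt, b_qnt = linear(512, 9)    # vague quantifiers, e.g. "none" ... "all"
W_prp, b_prp = linear(512, 17)   # binned target/total proportions

def forward(img_feats):
    h = np.tanh(img_feats @ W_enc + b_enc)  # shared representation
    return {
        "comparison": softmax(h @ W_cmp + b_cmp),
        "quantifier": softmax(h @ W_qnt + b_qnt),
        "proportion": softmax(h @ W_prp + b_prp),
    }

out = forward(rng.normal(size=(4, 2048)))  # a batch of 4 scenes
```

During training, losses from all three heads would be summed so that gradients from the lower-complexity tasks shape the shared encoder — the mechanism the abstract credits for the improved proportional estimation.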
