European Conference on Computer Vision

Physically Grounded Spatio-temporal Object Affordances



Abstract

Objects in human environments support various functionalities that govern how people interact with their environments in order to perform tasks. In this work, we discuss how to represent and learn a functional understanding of an environment in terms of object affordances. Such an understanding is useful for many applications, such as activity detection and assistive robotics. Starting with a semantic notion of affordances, we present a generative model that takes a given environment and human intention into account, and grounds the affordances in the form of spatial locations on the object and temporal trajectories in the 3D environment. The probabilistic model also allows for uncertainties and variations in the grounded affordances. We apply our approach to RGB-D videos from the Cornell Activity Dataset, where we first show that we can successfully ground the affordances, and then show that learning such affordances improves performance on labeling tasks.
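To make the idea of grounding concrete, the sketch below samples one possible grounded affordance: given an object and a human intention, it draws a spatial grounding point on the object from a Gaussian (capturing the model's uncertainty) and interpolates a short temporal trajectory toward it. This is a minimal illustrative sketch, not the paper's actual model; the `AFFORDANCE_PARAMS` values, the fixed start pose, and the straight-line trajectory are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: per-(object, intention) Gaussian over the
# spatial grounding point on the object, in the object's frame.
AFFORDANCE_PARAMS = {
    ("cup", "drinkability"): {
        "mu": np.array([0.0, 0.05, 0.10]),  # mean interaction point (m)
        "sigma": 0.01 * np.eye(3),          # spatial uncertainty
    },
}

def sample_grounding(obj, intention, n_steps=5):
    """Sample a spatial grounding point and a temporal trajectory
    toward it (straight-line interpolation from an assumed start pose)."""
    p = AFFORDANCE_PARAMS[(obj, intention)]
    point = rng.multivariate_normal(p["mu"], p["sigma"])
    start = np.zeros(3)                     # assumed hand start position
    ts = np.linspace(0.0, 1.0, n_steps)[:, None]
    trajectory = (1 - ts) * start + ts * point
    return point, trajectory

point, traj = sample_grounding("cup", "drinkability")
print(traj.shape)  # (5, 3): five 3-D waypoints ending at the sampled point
```

Repeated sampling yields different groundings for the same object and intention, which is the kind of variation the probabilistic model is designed to capture.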


