2010 IEEE International Conference on Robotics and Automation

Categorizing object-action relations from semantic scene graphs

Abstract

In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, thus providing an event table of the action scene, which allows extracting object-action relations. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model-free and requires no a priori representation of either objects or actions. Essentially, actions are recognized without requiring prior object knowledge, and objects are categorized solely based on the role they exhibit within an action sequence. Thus, this approach is grounded in the affordance principle, which has recently attracted much attention in robotics and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics.
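To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the paper's implementation: scene graphs are represented as edge sets over consistently tracked segment IDs, exact edge-set equality stands in for the paper's exact graph-matching step, and the surviving key graphs yield a simple event table of per-object relation counts. The names main_graphs and event_table and the toy "pick up a cup" sequence are illustrative assumptions only.

    from typing import FrozenSet, List, Tuple

    # A semantic scene graph for one frame: nodes are tracked segment IDs and
    # each edge encodes a spatial relation (e.g. "touching") between two segments.
    SceneGraph = FrozenSet[Tuple[int, int]]

    def main_graphs(frames: List[SceneGraph]) -> List[SceneGraph]:
        """Keep one representative graph per run of structurally identical frames.
        Every change in graph structure marks an event boundary."""
        keyframes: List[SceneGraph] = []
        for g in frames:
            if not keyframes or g != keyframes[-1]:  # edge-set equality as a stand-in
                keyframes.append(g)                  # for exact graph matching
        return keyframes

    def event_table(keyframes: List[SceneGraph], nodes: List[int]) -> List[List[int]]:
        """One row per tracked object: how many relations the object takes part in
        at each event. Comparing such role profiles across sequences is one way
        object-action relations could be categorized."""
        return [[sum(1 for (a, b) in g if n in (a, b)) for g in keyframes]
                for n in nodes]

    if __name__ == "__main__":
        # Toy "pick up a cup" sequence: cup(1) rests on table(2), hand(3) grasps
        # the cup, then the cup is lifted off the table.
        frames = [frozenset({(1, 2)}),            # cup touches table
                  frozenset({(1, 2)}),            # unchanged frame, no new event
                  frozenset({(1, 2), (1, 3)}),    # hand touches cup
                  frozenset({(1, 3)})]            # cup lifted off the table
        keys = main_graphs(frames)
        print(event_table(keys, nodes=[1, 2, 3]))  # [[1, 2, 1], [1, 1, 0], [0, 1, 1]]

In this toy run, the table row for the cup (node 1) changes from one relation to two and back to one, while the table row for the table (node 2) drops to zero, which is the kind of role profile the approach uses to categorize objects by how they participate in an action.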
