IEEE International Conference on Robotics and Automation

Visual Geometric Skill Inference by Watching Human Demonstration



Abstract

We study the problem of learning manipulation skills from human demonstration video by inferring the association relationships between geometric features. The motivation for this work stems from the observation that humans perform eye-hand coordination tasks by using geometric primitives to define a task, while a geometric control error drives the task through execution. We propose a graph-based kernel regression method to directly infer the underlying association constraints from human demonstration video using Incremental Maximum Entropy Inverse Reinforcement Learning (InMaxEnt IRL). The learned skill inference provides a human-readable task definition and outputs control errors that can be directly plugged into traditional controllers. Our method removes the need for the tedious feature selection and robust feature trackers required by traditional approaches (e.g., feature-based visual servoing). Experiments show that our method infers correct geometric associations even from a single human demonstration video and generalizes well under variation.
