IEEE Robotics and Automation Letters

Learning Task-Oriented Grasping From Human Activity Datasets



Abstract

We propose to leverage a real-world RGB dataset of human activity to teach a robot Task-Oriented Grasping (TOG). We develop a model that takes an RGB image as input and outputs a hand pose and configuration as well as an object pose and shape. We follow the insight that jointly estimating hand and object poses increases accuracy compared to estimating these quantities independently. Given the trained model, we process an RGB dataset to automatically obtain the data needed to train a TOG model. This model takes an object point cloud as input and outputs a region suitable for task-specific grasping. Our ablation study shows that training the object pose predictor with hand pose information (and vice versa) is better than training without this information. Furthermore, our results on a real-world dataset show the applicability and competitiveness of our method compared to the state of the art. Experiments with a robot demonstrate that our method enables a robot to perform TOG on novel objects.
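The abstract describes a two-stage pipeline: a joint hand-object estimator trained on RGB human-activity data, followed by a TOG model that maps an object point cloud to a task-specific grasp region. The PyTorch sketch below illustrates one plausible set of interfaces for these two stages; it is not the authors' implementation, and every module name, layer size, and the per-point suitability score are assumptions made purely for illustration.

```python
# Minimal sketch of the two-stage pipeline the abstract describes.
# All names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class HandObjectEstimator(nn.Module):
    """Stage 1 (assumed interface): RGB image -> hand pose/configuration
    and object pose/shape, predicted from a shared backbone so each
    branch can benefit from joint features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Toy shared backbone; a real system would use a pretrained CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.hand_head = nn.Linear(feat_dim, 51)        # e.g. 3D pose + joint angles
        self.object_head = nn.Linear(feat_dim, 7 + 10)  # e.g. 6D pose + shape code

    def forward(self, rgb):
        f = self.backbone(rgb)
        return self.hand_head(f), self.object_head(f)

class TOGModel(nn.Module):
    """Stage 2 (assumed interface): object point cloud (B, N, 3) ->
    per-point score for task-specific grasp suitability."""
    def __init__(self, hidden=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.score_head = nn.Linear(2 * hidden, 1)

    def forward(self, points):
        per_point = self.point_mlp(points)                        # (B, N, H)
        global_feat = per_point.max(dim=1, keepdim=True).values   # (B, 1, H)
        fused = torch.cat([per_point, global_feat.expand_as(per_point)], dim=-1)
        return torch.sigmoid(self.score_head(fused)).squeeze(-1)  # (B, N)

# Usage: high-scoring points mark the region to grasp for the task.
rgb = torch.rand(1, 3, 224, 224)
cloud = torch.rand(1, 1024, 3)
hand, obj = HandObjectEstimator()(rgb)
region_scores = TOGModel()(cloud)
```

The shared backbone in stage 1 mirrors the abstract's insight that hand and object predictions benefit from joint estimation; in stage 2, the per-point scores can be thresholded to select the task-specific grasp region.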
