IEEE Robotics & Automation Magazine

Multifingered Grasp Planning via Inference in Deep Neural Networks: Outperforming Sampling by Learning Differentiable Models


Abstract

We propose a novel approach to multifingered grasp planning that leverages learned deep neural network (DNN) models. We trained a voxel-based 3D convolutional neural network (CNN) to predict grasp-success probability as a function of both visual information of an object and grasp configuration. From this, we formulated grasp planning as inferring the grasp configuration that maximizes the probability of grasp success. In addition, we learned a prior over grasp configurations as a mixture-density network (MDN) conditioned on our voxel-based object representation. We show that this object-conditional prior improves grasp inference when used with the learned grasp success-prediction network compared to a learned, object-agnostic prior or an uninformed uniform prior. Our work is the first to directly plan high-quality multifingered grasps in configuration space using a DNN without the need for an external planner. We validated our inference method by performing multifingered grasping on a physical robot. Our experimental results show that our planning method outperforms existing grasp-planning methods for neural networks (NNs).
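The planning formulation the abstract describes, treating the trained success predictor as a differentiable objective and inferring the grasp configuration that maximizes it, optionally weighted by the learned object-conditional prior, can be sketched as gradient-based MAP inference. The PyTorch sketch below is an illustrative assumption, not the authors' implementation: the layer sizes, the 14-dimensional grasp configuration, and the `log_prior` callable standing in for the MDN are all placeholders.

```python
# Minimal sketch (PyTorch) of gradient-based grasp inference through a learned
# success-prediction network; all architecture details and names are assumed.
import torch
import torch.nn as nn


class GraspSuccessNet(nn.Module):
    """Stand-in for the voxel-based 3D CNN that scores (object, grasp) pairs."""

    def __init__(self, config_dim: int = 14):
        super().__init__()
        self.config_dim = config_dim
        self.voxel_encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers its input size from the concatenated features.
        self.head = nn.Sequential(nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, voxels: torch.Tensor, config: torch.Tensor) -> torch.Tensor:
        feat = self.voxel_encoder(voxels)
        return torch.sigmoid(self.head(torch.cat([feat, config], dim=-1)))


def plan_grasp(net, log_prior, voxels, init_config, steps=200, lr=1e-2):
    """Ascend log p(success | voxels, config) + log p(config | voxels)
    with respect to the grasp configuration only (network weights frozen)."""
    config = init_config.clone().requires_grad_(True)
    opt = torch.optim.Adam([config], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        p_success = net(voxels, config)
        loss = -(torch.log(p_success + 1e-8).sum() + log_prior(config, voxels).sum())
        loss.backward()
        opt.step()
    return config.detach()


if __name__ == "__main__":
    net = GraspSuccessNet()
    voxels = torch.zeros(1, 1, 32, 32, 32)          # voxel grid of the object
    init = torch.zeros(1, net.config_dim)           # e.g. a sample from a prior
    uniform = lambda c, v: torch.zeros(c.shape[0])  # uninformed (uniform) prior
    best_config = plan_grasp(net, uniform, voxels, init)
    print(best_config.shape)
```

Swapping the uniform `log_prior` for the log-density of a learned MDN conditioned on the same voxel representation would give the object-conditional variant that the abstract reports improves inference.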
