IEEE Robotics and Automation Letters

Multi-Modal Transfer Learning for Grasping Transparent and Specular Objects

Abstract

State-of-the-art object grasping methods rely on depth sensing to plan robust grasps, but commercially available depth sensors fail to detect transparent and specular objects. To improve grasping performance on such objects, we introduce a method for learning a multi-modal perception model by bootstrapping from an existing uni-modal model. This transfer learning approach requires only a pre-existing uni-modal grasping model and paired multi-modal image data for training, foregoing the need for ground-truth grasp-success labels or real grasp attempts. Our experiments demonstrate that our approach reliably grasps transparent and reflective objects. Video and supplementary material are available at https://sites.google.com/view/transparent-specular-grasping.
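
The abstract outlines the core idea: a pretrained depth-only (uni-modal) grasp model supplies the supervision for training a multi-modal (e.g., RGB + depth) model on paired images, so no ground-truth grasp-success labels or physical grasp attempts are needed. Below is a minimal PyTorch-style sketch of that bootstrapping/distillation pattern; the network architecture, the MultiModalGraspNet and distillation_step names, and the assumption that the uni-modal model maps a depth image to a grasp-quality logit are illustrative assumptions, not the paper's actual implementation.

    # Sketch: train a multi-modal grasp model from a frozen depth-only model's
    # predictions on paired RGB-D images (no ground-truth grasp labels needed).
    import torch
    import torch.nn as nn

    class MultiModalGraspNet(nn.Module):
        """Hypothetical grasp-quality network consuming RGB and depth."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(                      # 4 channels: RGB + depth
                nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64, 1)                       # grasp-quality logit

        def forward(self, rgb, depth):
            x = torch.cat([rgb, depth], dim=1)
            return self.head(self.encoder(x))

    def distillation_step(uni_modal_model, multi_modal_model, optimizer, rgb, depth):
        """One training step: the frozen depth-only model pseudo-labels the paired images."""
        with torch.no_grad():
            target = torch.sigmoid(uni_modal_model(depth))     # soft pseudo-label in [0, 1]
        pred = multi_modal_model(rgb, depth)                   # multi-modal prediction (logit)
        loss = nn.functional.binary_cross_entropy_with_logits(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because the multi-modal student sees RGB as well as depth, it can learn to predict reasonable grasp quality even where the depth channel is corrupted or missing, which is exactly the failure mode for transparent and specular objects.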
