Visually Guided Picking Control of an Omnidirectional Mobile Manipulator Based on End-to-End Multi-Task Imitation Learning

Journal: Quality Control, Transactions

Abstract

In this paper, a novel deep convolutional neural network (CNN) based high-level multi-task control architecture is proposed to address the visual guide-and-pick control problem of an omnidirectional mobile manipulator platform. The proposed mobile manipulator control system uses only a stereo camera as its sensing device to accomplish the visual guide-and-pick task. After the stereo camera captures a stereo image of the scene, the proposed CNN-based high-level multi-task controller directly predicts the best motion guidance and picking action of the omnidirectional mobile manipulator from the captured image. To collect the training dataset, we manually controlled the mobile manipulator to navigate an indoor environment, approaching and picking up an object-of-interest (OOI), and recorded all captured stereo images together with the corresponding robot control commands during this manual teaching stage. In the training stage, we employed end-to-end multi-task imitation learning to train the proposed CNN model, learning the desired motion and picking control strategies from the expert demonstrations so that the system can visually guide the mobile platform and then visually pick up the OOI. Experimental results show that the proposed visually guided picking control system achieves an average picking success rate of about 78.2%.
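The pipeline the abstract describes (stereo image in, motion command and picking action out, trained by behavioral cloning on recorded teleoperation data) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' actual network: the 6-channel stacked stereo input, the layer sizes, the nine discrete omnidirectional motion commands, and the binary pick decision are all assumptions introduced here for concreteness.

```python
# Minimal sketch of an end-to-end multi-task behavioral-cloning setup,
# assuming: stereo input given as left/right RGB frames stacked into
# 6 channels, a discrete set of 9 omnidirectional motion commands, and
# a binary pick/no-pick decision. All names and sizes are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskPickingCNN(nn.Module):
    def __init__(self, num_motion_cmds: int = 9, num_pick_actions: int = 2):
        super().__init__()
        # Shared convolutional trunk over the stacked stereo pair.
        self.trunk = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads: motion guidance and picking action.
        self.motion_head = nn.Linear(128, num_motion_cmds)
        self.pick_head = nn.Linear(128, num_pick_actions)

    def forward(self, stereo: torch.Tensor):
        feat = self.trunk(stereo)  # (B, 128) shared features
        return self.motion_head(feat), self.pick_head(feat)

def bc_training_step(model, optimizer, stereo, motion_label, pick_label):
    """One behavioral-cloning step on a batch of expert demonstrations."""
    motion_logits, pick_logits = model(stereo)
    # Sum of per-task losses against the recorded expert commands.
    loss = (F.cross_entropy(motion_logits, motion_label)
            + F.cross_entropy(pick_logits, pick_label))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = MultiTaskPickingCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy batch standing in for logged (stereo image, command) pairs.
    stereo = torch.randn(4, 6, 120, 160)
    motion = torch.randint(0, 9, (4,))
    pick = torch.randint(0, 2, (4,))
    print(bc_training_step(model, opt, stereo, motion, pick))
```

In the paper's setting, the batches would come from the stereo frames and control commands logged during the manual teaching stage rather than the random tensors used above; the shared trunk with separate heads is one common way to realize the multi-task prediction the abstract describes.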