Journal: Intelligent Automation and Soft Computing

HGG-CNN: The Generation of the Optimal Robotic Grasp Pose Based on Vision



Abstract

Robotic grasping is an important issue in the field of robot control. To solve the problem of finding the optimal grasping pose for a robotic arm, a new convolutional neural network called the Hybrid Generative Grasping Convolutional Neural Network (HGG-CNN) is proposed. It builds on the Generative Grasping Convolutional Neural Network (GG-CNN) by combining three small network structures: the Inception Block, the Dense Block and the SELayer. This structure improves the accuracy of the predicted grasping pose over the GG-CNN baseline and thereby raises the grasping success rate. In addition, the HGG-CNN structure overcomes a weakness of the original GG-CNN, whose recognition rate for complex man-made irregular objects is below 70%. In experimental tests, HGG-CNN improves the average grasping-pose accuracy of the original GG-CNN from 83.83% to 92.48%. For irregular objects with complex man-made shapes, such as spoons, the recognition rate of the grasping pose also increases from 21.38% to 55.33%.
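The abstract does not give the layer-by-layer layout of HGG-CNN, but the three named components (Inception Block, Dense Block and SELayer) and the GG-CNN-style per-pixel grasp outputs are standard building blocks. The sketch below is a minimal, assumed PyTorch illustration of how such blocks could be stacked on top of a GG-CNN-like grasp head; the class names, channel counts and block ordering are illustrative assumptions, not the paper's actual architecture.

```python
# Assumed sketch only: combines an Inception-style block, a Dense-style block
# and an SE layer before GG-CNN-like per-pixel grasp outputs. Channel sizes
# and ordering are illustrative, not taken from the HGG-CNN paper.
import torch
import torch.nn as nn


class SELayer(nn.Module):
    """Squeeze-and-Excitation: reweight channels by global context."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class InceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 branches concatenated along channels."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, 5, padding=2)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)


class DenseBlock(nn.Module):
    """Each layer sees the concatenation of all previous feature maps."""
    def __init__(self, in_ch, growth, layers=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, 3, padding=1)
            for i in range(layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, torch.relu(layer(x))], dim=1)
        return x


class HybridGraspNet(nn.Module):
    """Illustrative GG-CNN-style head: per-pixel quality, angle and width maps."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.inception = InceptionBlock(16, 8)            # -> 24 channels
        self.dense = DenseBlock(24, growth=8, layers=2)   # -> 40 channels
        self.se = SELayer(40)
        # Four per-pixel output maps, as in GG-CNN: quality, cos(2θ), sin(2θ), width.
        self.heads = nn.ModuleList(nn.Conv2d(40, 1, 1) for _ in range(4))

    def forward(self, depth):
        f = self.se(self.dense(self.inception(self.stem(depth))))
        return [head(f) for head in self.heads]


if __name__ == "__main__":
    net = HybridGraspNet()
    q, cos2t, sin2t, width = net(torch.randn(1, 1, 300, 300))
    print(q.shape)  # torch.Size([1, 1, 300, 300])
```

The grasp pose at each pixel is then read off as in GG-CNN: the pixel with the highest quality map value gives the grasp centre, the angle is recovered from the cos/sin maps, and the width map gives the gripper opening.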
