IEEE International Conference on Real-time Computing and Robotics

A Depthwise Separable Convolution Based 6D Pose Estimation Network by Efficient 2D-3D Feature Fusion

Abstract

Precise 6D pose estimation of the target object is an essential prerequisite for robots to understand the real world. Previous 6D pose estimation methods based on 3D data often suffer from long model training times, imperfect feature extraction, redundant network parameters, and complicated post-processing steps. This paper proposes a 2D-3D feature fusion module that enhances feature extraction for the 6D pose estimation network. Furthermore, we compress the model by adopting depthwise separable convolutions, which accelerates training and reduces memory consumption. Experimental results on the LineMOD dataset demonstrate the effectiveness of our method. Our method achieves performance on par with or better than state-of-the-art 6D pose estimation methods while simultaneously reducing model training time and the number of model parameters.
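
As a rough illustration of the two ideas named in the abstract, the sketch below shows a depthwise separable 1D convolution (a per-channel depthwise convolution followed by a 1x1 pointwise convolution) applied to per-point concatenated 2D (RGB) and 3D (point-cloud) features. This is only a minimal PyTorch sketch under assumed shapes and layer sizes; the class names DepthwiseSeparableConv1d and PixelwiseFusion, the feature dimensions, and the per-point concatenation scheme are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv1d(nn.Module):
        # Depthwise separable convolution: a per-channel (grouped) convolution
        # followed by a 1x1 pointwise convolution that mixes channels.
        def __init__(self, in_channels, out_channels, kernel_size=1):
            super().__init__()
            self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                       padding=kernel_size // 2, groups=in_channels)
            self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    class PixelwiseFusion(nn.Module):
        # Hypothetical 2D-3D fusion: concatenate per-point RGB and point-cloud
        # features, then process them with depthwise separable convolutions.
        def __init__(self, rgb_dim=32, point_dim=32, fused_dim=128):
            super().__init__()
            self.fuse = nn.Sequential(
                DepthwiseSeparableConv1d(rgb_dim + point_dim, fused_dim),
                nn.ReLU(inplace=True),
                DepthwiseSeparableConv1d(fused_dim, fused_dim),
                nn.ReLU(inplace=True),
            )

        def forward(self, rgb_feat, point_feat):
            # rgb_feat, point_feat: (batch, channels, num_points), aligned per point
            return self.fuse(torch.cat([rgb_feat, point_feat], dim=1))

    if __name__ == "__main__":
        rgb = torch.randn(2, 32, 500)   # CNN features sampled at 500 object pixels (assumed)
        pts = torch.randn(2, 32, 500)   # point-cloud features for the same 500 points (assumed)
        fused = PixelwiseFusion()(rgb, pts)
        print(fused.shape)              # torch.Size([2, 128, 500])

Compared with a standard convolution, the depthwise separable form factors the operation into spatial filtering and channel mixing, which is what reduces the parameter count and memory footprint referred to in the abstract.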
