IEEE International Conference on Anti-counterfeiting, Security, and Identification

Use of LSTM Regression and Rotation Classification to Improve Camera Pose Localization Estimation



Abstract

More accurate camera pose estimation can greatly improve localization in applications such as augmented reality, autonomous driving, and intelligent robotics. Deep learning methods have made great progress in improving accuracy but still have limitations with respect to rotation, which leads to angle regression errors. In this work, we combine an LSTM module with a rotation classification loss to regress the camera pose. The algorithm uses a robust processing pipeline that supervises pose estimation with dynamic, weighted multi-losses, combining separate Euler angle (yaw, pitch, roll) classification losses with the common translation-quaternion losses. An empirical evaluation on the 7Scenes benchmark dataset shows better results than common absolute pose regression methods.
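The loss structure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bin width for the per-angle classification, the L1 distances, and the fixed weights `w_t`, `w_q`, `w_e` (standing in for the paper's dynamic weighting) are all assumptions.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over a 1-D logit vector.
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def angle_class_loss(logits, angle_deg, bin_size=10.0):
    # Cross-entropy for one Euler angle, discretized into fixed-width
    # bins over [-180, 180). The paper's exact binning is not given;
    # a 10-degree bin width is an illustrative assumption.
    n_bins = int(360 / bin_size)
    target = int((angle_deg + 180.0) // bin_size) % n_bins
    return -log_softmax(logits)[target]

def pose_loss(t_pred, t_true, q_pred, q_true,
              euler_logits, euler_deg, w_t=1.0, w_q=1.0, w_e=0.1):
    # Weighted multi-loss: translation and quaternion regression terms
    # plus separate classification terms for yaw, pitch, and roll.
    l_t = np.linalg.norm(t_pred - t_true, ord=1)   # translation L1
    q_pred = q_pred / np.linalg.norm(q_pred)       # re-normalize prediction
    l_q = np.linalg.norm(q_pred - q_true, ord=1)   # quaternion L1
    l_e = sum(angle_class_loss(lg, a)              # yaw, pitch, roll terms
              for lg, a in zip(euler_logits, euler_deg))
    return w_t * l_t + w_q * l_q + w_e * l_e
```

Even with a perfect regression (zero translation and quaternion error), the classification term keeps a non-zero cross-entropy until the angle-bin logits concentrate on the correct bin, which is what lets it supervise rotation separately from the quaternion regression.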
