IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

Real-Time 6DOF Pose Relocalization for Event Cameras With Stacked Spatial LSTM Networks


Abstract

We present a new method to relocalize the 6DOF pose of an event camera solely from the event stream. Our method first creates an event image from the list of events that occur in a very short time interval; a Stacked Spatial LSTM Network (SP-LSTM) is then used to learn the camera pose. Our SP-LSTM is composed of a CNN that learns deep features from the event images and a stack of LSTMs that learns spatial dependencies in the image feature space. We show that spatial dependency plays an important role in the relocalization task with event images and that the SP-LSTM can effectively learn this information. Extensive experimental results on a publicly available dataset show that our approach outperforms recent state-of-the-art methods by a substantial margin and generalizes well to challenging training/testing splits. The source code and trained models are available at https://github.com/nqanh/pose_relocalization.
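To make the described architecture concrete, below is a minimal PyTorch sketch of the SP-LSTM idea: a CNN extracts a feature map from the event image, the map's spatial locations are flattened into a sequence for a stacked LSTM, and the final LSTM output is regressed to a pose. The layer sizes, the 7-value output (3D translation plus a unit quaternion, as in typical pose-regression networks), and names such as `SPLSTM` and `fc_pose` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class SPLSTM(nn.Module):
    """Sketch of a Stacked Spatial LSTM pose regressor (assumed sizes).

    A CNN turns the event image into a feature map; the map's spatial
    locations are treated as a sequence and fed to a two-layer (stacked)
    LSTM, whose last output is regressed to a 6DOF pose represented as
    a 3D translation plus a unit quaternion (7 values).
    """

    def __init__(self, hidden_size=256, num_lstm_layers=2):
        super().__init__()
        # Small CNN feature extractor (illustrative, not the paper's exact net).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Stacked LSTM over the spatial sequence of feature vectors.
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden_size,
                            num_layers=num_lstm_layers, batch_first=True)
        # Regress translation (3 values) + quaternion (4 values).
        self.fc_pose = nn.Linear(hidden_size, 7)

    def forward(self, event_image):
        # event_image: (B, 1, H, W), accumulated from a short event window.
        feat = self.cnn(event_image)               # (B, C, H', W')
        b, c, h, w = feat.shape
        # Flatten the spatial grid into a sequence of H'*W' feature vectors.
        seq = feat.flatten(2).permute(0, 2, 1)     # (B, H'*W', C)
        out, _ = self.lstm(seq)                    # (B, H'*W', hidden)
        pose = self.fc_pose(out[:, -1])            # last step -> (B, 7)
        t, q = pose[:, :3], pose[:, 3:]
        q = q / q.norm(dim=1, keepdim=True)        # normalize the quaternion
        return t, q


# Usage sketch: a batch of four single-channel event images at a
# DAVIS-like 180x240 resolution (an assumption for illustration).
model = SPLSTM()
t, q = model(torch.randn(4, 1, 180, 240))
```

Treating the feature map's spatial locations as an LSTM sequence is what lets the stacked LSTM model the spatial dependencies the abstract highlights; the single-channel event image and the translation-plus-quaternion output are assumptions layered on top of that idea.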
