IEEE International Conference on Robotics and Automation

Combined image- and world-space tracking in traffic scenes



Abstract

Tracking in urban street scenes plays a central role in autonomous systems such as self-driving cars. Most current vision-based tracking methods perform tracking in the image domain. Other approaches, e.g. those based on LIDAR and radar, track purely in 3D. While some vision-based tracking methods invoke 3D information in parts of their pipeline, and some 3D-based methods utilize image-based information in components of their approach, we propose to use image- and world-space information jointly throughout our method. We present our tracking pipeline as a 3D extension of image-based tracking. From enhancing the detections with 3D measurements to the reported positions of every tracked object, we use world-space 3D information at every stage of processing. We accomplish this with our novel coupled 2D-3D Kalman filter, combined with a conceptually clean and extendable hypothesize-and-select framework. Our approach matches the current state of the art on the official KITTI benchmark, which performs evaluation in the 2D image domain only. Further experiments show that enabling our coupled 2D-3D tracking yields significant improvements in 3D localization precision.
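The abstract describes the coupled 2D-3D Kalman filter only at a high level. As an illustration of the core idea, the sketch below shows a minimal, hypothetical filter whose state couples an image-space bounding box with a world-space 3D position, so that a single predict/update cycle fuses both kinds of measurement. The state layout, constant-velocity motion model, and noise values are assumptions made for this example, not the paper's actual formulation.

```python
import numpy as np

class Coupled2D3DKalmanFilter:
    """Toy Kalman filter over a joint image/world state (illustrative only).

    State (12-dim): [u, v, w, h, X, Y, Z, du, dv, dX, dY, dZ]
      u, v, w, h : 2D bounding-box center and size in pixels
      X, Y, Z    : 3D object position in world coordinates (meters)
      d*         : constant-velocity terms for the moving components
    Measurement (7-dim): [u, v, w, h, X, Y, Z]
    """

    def __init__(self, z0, dt=0.1):
        n, m = 12, 7
        self.x = np.zeros(n)
        self.x[:m] = z0                      # initialize from the first detection
        self.P = np.eye(n) * 10.0            # initial state uncertainty

        # Constant-velocity transition: positions advance by velocity * dt.
        self.F = np.eye(n)
        for pos, vel in [(0, 7), (1, 8), (4, 9), (5, 10), (6, 11)]:
            self.F[pos, vel] = dt

        # Measurement matrix picks out the observable part of the state.
        self.H = np.zeros((m, n))
        self.H[np.arange(m), np.arange(m)] = 1.0

        self.Q = np.eye(n) * 0.01            # process noise
        # Image-space (pixel) and world-space (meter) noise on different scales.
        self.R = np.diag([4.0, 4.0, 4.0, 4.0, 0.25, 0.25, 0.25])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        """z = [u, v, w, h, X, Y, Z] from a detection with a 3D measurement."""
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x


# Example: one predict/update cycle for a single tracked object.
det = np.array([320.0, 240.0, 80.0, 60.0, 1.5, 0.0, 12.0])
kf = Coupled2D3DKalmanFilter(det)
kf.predict()
state = kf.update(np.array([324.0, 241.0, 82.0, 61.0, 1.6, 0.0, 11.8]))
print(state[:7])   # fused 2D box + 3D position estimate
```

Keeping pixel-scale and meter-scale measurement noise separate in R is what lets a single filter weigh image evidence and 3D evidence appropriately; the paper's coupled filter and its hypothesize-and-select track management are naturally more involved than this sketch.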

