IEEE Robotics and Automation Letters

Pedestrian Planar LiDAR Pose (PPLP) Network for Oriented Pedestrian Detection Based on Planar LiDAR and Monocular Images

Abstract

Pedestrian detection is an important task for human-robot interaction and autonomous driving applications. Most previous pedestrian detection methods rely on data collected from three-dimensional (3D) Light Detection and Ranging (LiDAR) sensors in addition to camera imagery, which can be expensive to deploy. In this letter, we propose a novel Pedestrian Planar LiDAR Pose Network (PPLP Net) based on two-dimensional (2D) LiDAR data and monocular camera imagery, which offers a far more affordable solution to the oriented pedestrian detection problem. The proposed PPLP Net consists of three sub-networks: an orientation detection network (OrientNet), a Region Proposal Network (RPN), and a PredictorNet. The OrientNet leverages state-of-the-art neural-network-based 2D pedestrian detection algorithms, including Mask R-CNN and ResNet, to detect the Bird's Eye View (BEV) orientation of each pedestrian. The RPN transfers 2D LiDAR point clouds into an occupancy grid map and uses a frustum-based matching strategy for estimating non-oriented 3D pedestrian bounding boxes. Outputs from both OrientNet and RPN are passed through the PredictorNet for a final regression. The overall outputs of our proposed network are 3D bounding box locations and orientation values for all pedestrians in the scene. We present oriented pedestrian detection results on two datasets, the CMU Panoptic Dataset and a newly collected FCAV M-Air Pedestrian (FMP) Dataset, and show that our proposed PPLP network based on 2D LiDAR and a monocular camera achieves similar or better performance compared to previous state-of-the-art 3D-LiDAR-based pedestrian detection methods in both indoor and outdoor environments.
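
Since the RPN stage described above begins by rasterizing the planar LiDAR scan into a bird's-eye-view occupancy grid, a minimal sketch of that conversion step is given below. This is not the authors' code: the function name, grid size, and resolution are illustrative assumptions, not values taken from the paper.

import numpy as np

def planar_scan_to_occupancy_grid(ranges, angles,
                                  grid_size=(100, 100),   # cells along x and y (assumed)
                                  resolution=0.1,          # metres per cell (assumed)
                                  max_range=10.0):
    """Rasterize one planar (2D) LiDAR scan into a BEV occupancy grid.

    ranges : (N,) measured distances in metres
    angles : (N,) beam angles in radians
    Returns a binary grid with 1 where a LiDAR return landed, 0 elsewhere;
    the sensor sits at the centre of the grid.
    """
    grid = np.zeros(grid_size, dtype=np.uint8)
    valid = (ranges > 0) & (ranges < max_range)   # drop invalid or out-of-range beams
    xs = ranges[valid] * np.cos(angles[valid])
    ys = ranges[valid] * np.sin(angles[valid])
    # shift so the sensor is at the grid centre, then discretise to cell indices
    ix = np.floor(xs / resolution).astype(int) + grid_size[0] // 2
    iy = np.floor(ys / resolution).astype(int) + grid_size[1] // 2
    inside = (ix >= 0) & (ix < grid_size[0]) & (iy >= 0) & (iy < grid_size[1])
    grid[ix[inside], iy[inside]] = 1
    return grid

# Illustrative usage with a synthetic scan: a flat "wall" of returns 3 m ahead.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full_like(angles, 3.0)
bev = planar_scan_to_occupancy_grid(ranges, angles)
print(bev.sum(), "occupied cells")

A grid like this is the representation on which the RPN's frustum-based matching would operate, alongside the image-based orientation estimates from OrientNet; the exact encoding and matching strategy used by PPLP Net are described in the full paper and not reproduced here.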
