Conference on Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure

Real-time object detection and geolocation using 3d calibrated camera/LiDAR pair

Abstract

We present results from testing a multi-modal sensor system (consisting of a camera, a LiDAR, and a positioning system) for real-time object detection and geolocation. The system's eventual purpose is to assess damage and detect foreign objects on a disrupted airfield surface in order to reestablish a minimum airfield operating surface. It uses an AI model to detect objects and generate bounding boxes or segmentation masks in data acquired with a high-resolution area-scan camera. It locates the detections in the local, sensor-centric coordinate system in real time using returns from a low-cost commercial LiDAR system. This is accomplished via an intrinsic camera calibration together with a 3D extrinsic calibration of the camera-LiDAR pair. A coordinate transform service uses data from a navigation system (comprising an inertial measurement unit and a global positioning system) to transform the local coordinates of the detections obtained with the AI and the calibrated sensor pair into earth-centered coordinates. The entire sensor system is mounted on a pan-tilt unit to achieve 360-degree perception. All data acquisition and computation are performed on a low SWAP-C system-on-module that includes an integrated GPU. The computer vision code runs in real time on the GPU and has been accelerated using CUDA. We have chosen the Robot Operating System (currently ROS1, with a port to ROS2 planned in the near term) as the control framework for the system. All computer vision, motion, and transform services are configured as ROS nodes.
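The abstract does not spell out how the calibrated camera/LiDAR pair is used to localize a detection, so the sketch below illustrates one common approach under assumed inputs: project the LiDAR returns into the image through the intrinsic matrix and the camera-LiDAR extrinsic calibration, keep the returns that fall inside a detection's bounding box, and take their median as the object's 3D position in the camera frame. All names here are illustrative, not taken from the paper.

```python
import numpy as np

def geolocate_detection(points_lidar: np.ndarray,
                        bbox_xyxy: tuple,
                        K: np.ndarray,
                        R_cam_from_lidar: np.ndarray,
                        t_cam_from_lidar: np.ndarray):
    """Estimate a detection's 3D position in the camera frame (hypothetical helper).

    points_lidar -- (N, 3) LiDAR returns in the LiDAR frame
    bbox_xyxy    -- detector bounding box (x_min, y_min, x_max, y_max) in pixels
    K            -- 3x3 camera matrix from the intrinsic calibration
    R_cam_from_lidar, t_cam_from_lidar -- 3D extrinsic calibration of the pair
    """
    # Move the cloud into the camera frame using the extrinsic calibration.
    pts_cam = points_lidar @ R_cam_from_lidar.T + t_cam_from_lidar
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]            # keep points in front of the camera

    # Pinhole projection into pixel coordinates.
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]

    x0, y0, x1, y1 = bbox_xyxy
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    if not np.any(inside):
        return None
    # The median is a cheap way to reject background returns that leak into the box.
    return np.median(pts_cam[inside], axis=0)
```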
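The coordinate transform service is likewise described only at a high level. The following sketch shows one way such a transform could work, assuming the navigation system supplies the platform's WGS84 position and the sensor's attitude in a local East-North-Up (ENU) frame; pyproj is used here for the geodetic-to-ECEF conversion and is an assumption, not a library named by the authors.

```python
import numpy as np
from pyproj import Transformer  # assumed dependency, not named in the paper

# Geodetic (lon, lat, ellipsoidal height on WGS84) -> Earth-centered Earth-fixed (ECEF)
_GEO_TO_ECEF = Transformer.from_crs("EPSG:4979", "EPSG:4978", always_xy=True)

def enu_to_ecef_rotation(lat_deg: float, lon_deg: float) -> np.ndarray:
    """Rotation whose columns are the local East/North/Up axes expressed in ECEF."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    return np.array([
        [-so, -sl * co, cl * co],
        [ co, -sl * so, cl * so],
        [0.0,       cl,      sl],
    ])

def detection_to_ecef(p_sensor: np.ndarray,
                      R_enu_from_sensor: np.ndarray,
                      lat_deg: float, lon_deg: float, alt_m: float) -> np.ndarray:
    """Map a detection from the sensor frame to ECEF coordinates (hypothetical helper).

    p_sensor          -- 3-vector from the camera/LiDAR geolocation step, in metres
    R_enu_from_sensor -- sensor attitude in local ENU, e.g. from the IMU plus pan-tilt angles
    lat/lon/alt       -- platform position from the GPS receiver
    """
    x0, y0, z0 = _GEO_TO_ECEF.transform(lon_deg, lat_deg, alt_m)
    p_enu = R_enu_from_sensor @ p_sensor
    return np.array([x0, y0, z0]) + enu_to_ecef_rotation(lat_deg, lon_deg) @ p_enu
```

In a ROS-based design like the one described, this logic would typically live in a transform node that subscribes to the detection and navigation topics and publishes earth-centered positions.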
