Remote Sensing

Calibrate Multiple Consumer RGB-D Cameras for Low-Cost and Efficient 3D Indoor Mapping



Abstract

Traditional indoor laser scanning trolleys/backpacks, with multiple laser scanners, panoramic cameras, and an inertial measurement unit (IMU) installed, are a popular solution to the 3D indoor mapping problem. However, these mapping suites are quite expensive and can hardly be replicated with consumer electronic components. The consumer RGB-Depth (RGB-D) camera (e.g., Kinect V2) is a low-cost option for gathering 3D point clouds. However, because of its narrow field of view (FOV), its collection efficiency and data coverage are lower than those of laser scanners. Additionally, the limited FOV increases the scanning workload, the data processing burden, and the risk of visual odometry (VO)/simultaneous localization and mapping (SLAM) failure. To collect 3D point cloud data with auxiliary information (i.e., color) for indoor mapping in an efficient and low-cost way, this paper presents a prototype indoor mapping solution built upon the calibration of multiple RGB-D sensors into an array with a large FOV. Three time-of-flight (ToF)-based Kinect V2 RGB-D cameras are mounted on a rig with different viewing directions to form a large combined field of view. The three RGB-D data streams are synchronized and gathered by the OpenKinect driver. The intrinsic calibration, which involves the geometric and depth calibration of each RGB-D camera, is solved by a homography-based method and by ray correction followed by range bias correction based on pixel-wise spline functions, respectively. The extrinsic calibration is achieved through a coarse-to-fine scheme that solves the initial exterior orientation parameters (EoPs) from sparse control markers and further refines them with an iterative closest point (ICP) variant that minimizes the distance between the RGB-D point clouds and the reference laser point clouds. The effectiveness and accuracy of the proposed prototype and calibration method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a terrestrial laser scanner (TLS). The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from three Kinect V2 cameras collected at 30 frames per second, resulting in low-cost, efficient, and high-coverage 3D color point cloud collection for indoor mapping applications.
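
The abstract only names pixel-wise spline functions for the range-bias part of the depth calibration; the sketch below illustrates one plausible form of that correction. It assumes per-pixel calibration samples of (measured range, range bias) collected beforehand, and uses SciPy smoothing splines; the array shapes, function names, and the choice of SciPy are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of per-pixel range-bias correction with spline functions.
# Assumes samples[v][u] = (ranges, biases) gathered during calibration, e.g. from
# observations of reference planes at several known distances.
import numpy as np
from scipy.interpolate import UnivariateSpline

H, W = 424, 512  # Kinect V2 depth-image resolution

def fit_bias_splines(samples):
    """Fit one cubic smoothing spline (bias as a function of range) per pixel."""
    splines = [[None] * W for _ in range(H)]
    for v in range(H):
        for u in range(W):
            r, b = samples[v][u]
            order = np.argsort(r)                 # spline fitting needs increasing x
            splines[v][u] = UnivariateSpline(r[order], b[order], k=3, s=len(r))
    return splines

def correct_depth(depth, splines):
    """Subtract the predicted per-pixel range bias from a raw depth frame (metres)."""
    corrected = depth.copy()
    for v in range(H):
        for u in range(W):
            d = depth[v, u]
            if d > 0:                             # zero marks invalid ToF returns
                corrected[v, u] = d - float(splines[v][u](d))
    return corrected
```

A dense per-pixel loop is shown for clarity; a practical version would vectorize the evaluation or fit splines only on a subsampled pixel grid.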
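For the extrinsic step, the abstract describes a coarse-to-fine scheme: initial EoPs solved from sparse control markers, then refined by an ICP variant against a referenced laser point cloud. Below is a minimal sketch of that pipeline, using a closed-form Kabsch/Horn fit for the coarse stage and Open3D's point-to-point ICP for refinement; the library choice and function names are assumptions, and the paper's actual ICP variant may differ.

```python
# Hypothetical coarse-to-fine extrinsic calibration sketch (not the authors' code).
import numpy as np
import open3d as o3d

def coarse_eops(markers_cam, markers_ref):
    """Closed-form rigid transform (Kabsch/Horn) from matched 3D control markers."""
    mc, mr = markers_cam.mean(0), markers_ref.mean(0)
    U, _, Vt = np.linalg.svd((markers_cam - mc).T @ (markers_ref - mr))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, mr - R @ mc
    return T

def refine_with_icp(cam_points, tls_points, T_init, max_dist=0.05):
    """Refine the initial EoPs by ICP against the reference TLS point cloud."""
    src, tgt = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(cam_points)
    tgt.points = o3d.utility.Vector3dVector(tls_points)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Running `coarse_eops` on a handful of marker correspondences per camera and feeding the result to `refine_with_icp` would yield one refined EoP set per Kinect V2, which is what allows the three streams to be fused into a single large-FOV point cloud.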
