
Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation



Abstract

This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the use of smart/autonomous mobility systems. Such systems serve many areas of daily life, such as safe mobility for the disabled and for senior citizens, and they depend on accurate sensor information to function optimally. This information may come from a single sensor or from a suite of sensors of the same or different modalities. We review various types of sensors, the data they produce, and the need to fuse these data with each other to obtain the best possible input for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need suitable techniques to read the sensor data, process them, eliminate or at least reduce the noise, and then use them for the required tasks. We present a survey of current data processing techniques that implement data fusion across different sensors: LiDAR, which uses light-scanning technology, and optical sensors such as stereo/depth cameras, monocular Red-Green-Blue (RGB) cameras, and Time-of-Flight (TOF) cameras. We also review the benefit of using fused data from multiple sensors, rather than data from a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey provides sensor information to researchers who intend to tackle the motion control of a robot and details the use of LiDAR and cameras to accomplish robot navigation.
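The abstract's central claim, that fused data from multiple sensors serves navigation better than data from any single sensor, can be illustrated with a minimal sketch (not taken from the paper): combining a LiDAR range and a stereo-camera depth estimate of the same obstacle by inverse-variance weighting, the basic building block behind Kalman-filter-style fusion schemes that surveys of this kind typically cover. The function name `fuse_ranges` and all noise figures below are illustrative assumptions.

```python
# Minimal sketch, assuming two independent Gaussian range measurements
# of the same obstacle. Sensor names, noise levels, and readings are
# illustrative, not taken from the surveyed systems.

def fuse_ranges(z_lidar, var_lidar, z_camera, var_camera):
    """Combine two independent, noisy estimates of the same distance.

    Inverse-variance weighting is the minimum-variance way to merge two
    Gaussian measurements; it is also what a Kalman filter update reduces
    to when both sensors observe the quantity directly.
    """
    w_l = 1.0 / var_lidar       # weight grows as the sensor gets less noisy
    w_c = 1.0 / var_camera
    fused = (w_l * z_lidar + w_c * z_camera) / (w_l + w_c)
    fused_var = 1.0 / (w_l + w_c)  # always smaller than either input variance
    return fused, fused_var


if __name__ == "__main__":
    # Hypothetical readings: LiDAR reports 4.98 m (std 0.02 m),
    # the stereo camera reports 5.20 m (std 0.15 m).
    d, v = fuse_ranges(4.98, 0.02 ** 2, 5.20, 0.15 ** 2)
    print(f"fused range: {d:.3f} m  (std {v ** 0.5:.3f} m)")
```

Because the fused variance is the reciprocal of the summed inverse variances, it is always smaller than either input variance, so the combined estimate is quantitatively better than either sensor alone; this is the sense in which the survey argues for multi-sensor fusion in mapping, obstacle detection and avoidance, and localization.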
