Journal: IEEE Transactions on Instrumentation and Measurement

RGB-DI Images and Full Convolution Neural Network-Based Outdoor Scene Understanding for Mobile Robots



Abstract

This paper presents a multisensor-based approach to outdoor scene understanding for mobile robots. Since laser scanning points in 3-D space are distributed irregularly and unevenly, a projection algorithm is proposed to generate RGB, depth, and intensity (RGB-DI) images so that outdoor environments can be measured optimally at a variable resolution. The 3-D semantic segmentation of RGB-DI point clouds is thereby transformed into semantic segmentation of RGB-DI images. A full convolution neural network (FCN) model with deep layers is designed to perform semantic segmentation of RGB-DI images. Using the exact correspondence between each 3-D point and each pixel in an RGB-DI image, the semantic segmentation results of the RGB-DI images are mapped back to the original point clouds to realize 3-D scene understanding. The proposed algorithms are tested on different data sets, and the results show that our RGB-DI image and FCN model-based approach provides superior performance for outdoor scene understanding. Moreover, real-world experiments were conducted on our mobile robot platform to show the validity and practicability of the proposed approach.
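The pixel-to-point correspondence described above can be illustrated with a minimal sketch. This is not the paper's variable-resolution projection; it is a hypothetical spherical projection that packs color, depth, and intensity into a 5-channel image and keeps an index map so per-pixel segmentation labels can later be mapped back onto the original point cloud (all function and parameter names here are illustrative):

```python
import numpy as np

def project_to_rgbdi(points, rgb, intensity, h=64, w=512):
    """Spherical projection of N 3-D LiDAR points into an H x W RGB-DI image.

    points:    (N, 3) xyz coordinates
    rgb:       (N, 3) per-point colors in [0, 1]
    intensity: (N,)   per-point reflectance
    Returns a 5-channel image (R, G, B, depth, intensity) and an (H, W)
    index map giving, for each pixel, the index of the 3-D point it holds
    (-1 where no point projects), enabling the back-mapping step.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-9))     # elevation angle
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    span = pitch.max() - pitch.min()
    v = ((pitch - pitch.min()) / (span + 1e-9) * (h - 1)).astype(int)

    img = np.zeros((h, w, 5), dtype=np.float32)
    index = -np.ones((h, w), dtype=int)
    # Write farthest points first so nearer points overwrite them,
    # resolving collisions in favor of the closest surface.
    order = np.argsort(-depth)
    img[v[order], u[order], :3] = rgb[order]
    img[v[order], u[order], 3] = depth[order]
    img[v[order], u[order], 4] = intensity[order]
    index[v[order], u[order]] = order
    return img, index
```

After an FCN labels each pixel of `img`, the label at pixel `(v, u)` is assigned to point `index[v, u]`, which mirrors how the paper's segmentation results are carried back to the 3-D points.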

