Sensors (Basel, Switzerland)

SGC-VSLAM: A Semantic and Geometric Constraints VSLAM for Dynamic Indoor Environments


Abstract

As one of the core technologies for autonomous mobile robots, Visual Simultaneous Localization and Mapping (VSLAM) has been widely researched in recent years. However, most state-of-the-art VSLAM systems adopt a strong scene-rigidity assumption for analytical convenience, which limits the utility of these algorithms in real-world environments with independent dynamic objects. Hence, this paper presents a semantic and geometric constraints VSLAM (SGC-VSLAM), which is built on the RGB-D mode of ORB-SLAM2 with the addition of dynamic detection and static point cloud map construction modules. In detail, an improved quadtree-based method was adopted in SGC-VSLAM to enhance the performance of the feature extractor in ORB-SLAM (Oriented FAST and Rotated BRIEF SLAM). Moreover, a new dynamic feature detection method called semantic and geometric constraints was proposed, which provided a robust and fast way to filter dynamic features. The semantic bounding boxes generated by YOLO v3 (You Only Look Once, v3) were used to compute a more accurate fundamental matrix between adjacent frames, which was then used to filter out all of the truly dynamic features. Finally, a static point cloud was estimated by using a new drawing key frame selection strategy. Experiments on the public TUM RGB-D (Red-Green-Blue Depth) dataset were conducted to evaluate the proposed approach. This evaluation revealed that the proposed SGC-VSLAM can effectively improve the positioning accuracy of the ORB-SLAM2 system in highly dynamic scenarios and can also build a map of the static parts of the real environment, which has long-term application value for autonomous mobile robots.
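To make the pipeline described in the abstract concrete, below is a minimal sketch, not the authors' implementation, of the semantic and geometric constraints idea: matches falling inside the YOLO v3 bounding boxes are excluded when estimating the fundamental matrix, and the resulting epipolar constraint is then used to flag the truly dynamic features. It assumes OpenCV and NumPy; the function names, the (x1, y1, x2, y2) box format, and the one-pixel distance threshold are illustrative choices, not values taken from the paper.

```python
import numpy as np
import cv2


def in_any_box(pt, boxes):
    """Return True if a 2-D point lies inside any (x1, y1, x2, y2) bounding box."""
    x, y = pt
    return any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in boxes)


def filter_dynamic_features(pts_prev, pts_curr, dyn_boxes, dist_thresh=1.0):
    """Hypothetical illustration of semantic + geometric dynamic-feature filtering.

    pts_prev, pts_curr: matched keypoint coordinates (N x 2) in adjacent frames.
    dyn_boxes: YOLO-style boxes around potentially dynamic objects in the current frame.
    Returns a boolean mask (True = dynamic) and the estimated fundamental matrix.
    """
    pts_prev = np.asarray(pts_prev, dtype=np.float64)
    pts_curr = np.asarray(pts_curr, dtype=np.float64)

    # Semantic constraint: estimate F only from matches outside the semantic boxes,
    # so potentially moving points do not corrupt the geometry.
    static_mask = np.array([not in_any_box(p, dyn_boxes) for p in pts_curr])
    F, _ = cv2.findFundamentalMat(pts_prev[static_mask], pts_curr[static_mask],
                                  cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        # Too few static matches; treat nothing as dynamic in this sketch.
        return np.zeros(len(pts_curr), dtype=bool), None

    # Geometric constraint: epipolar line in the current frame is l' = F * x_prev;
    # a static point should lie close to its epipolar line.
    ones = np.ones((len(pts_prev), 1))
    x_prev_h = np.hstack([pts_prev, ones])            # homogeneous previous points
    x_curr_h = np.hstack([pts_curr, ones])            # homogeneous current points
    lines = (F @ x_prev_h.T).T                        # rows: (a, b, c) of a*x + b*y + c = 0
    num = np.abs(np.sum(lines * x_curr_h, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-12
    dist = num / den                                  # point-to-epipolar-line distance

    is_dynamic = dist > dist_thresh                   # large residual => moving point
    return is_dynamic, F
```

In the setting the abstract describes, the flagged features would simply be discarded before pose estimation, so that only static points contribute to tracking and to the static point cloud map.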
