
Map-to-Text: Local Map Descriptor



Abstract

Map matching, the ability to match a local map built by a mobile robot to previously built maps, is crucial in many robotic mapping, self-localization, and simultaneous localization and mapping (SLAM) applications. In this paper, we propose a solution to the "map-to-text (M2T)" problem, which involves the generation of text descriptions of local map content based on scene understanding to facilitate fast succinct text-based map matching. Unlike previous local feature approaches that trade discriminativity for viewpoint invariance, we develop a holistic view descriptor that is view-dependent and highly discriminative. Our approach is inspired by two independent observations: (1) The behavior of mobile robots given a local map can often be characterized by a unique viewpoint trajectory, and (2) a holistic view descriptor can be highly discriminative if the viewpoint is unique given the local map. Our method consists of three distinct steps: (1) First, an informative local map of the robot's local surroundings is built. (2) Next, a unique viewpoint trajectory is planned in accordance with the given local map. (3) Finally, a synthetic view is described at the designated viewpoint. Because the success of our holistic view descriptor depends on the assumption that the viewpoint is unique given a local map, we also address the issue of viewpoint planning and present a solution that provides similar views for similar local maps. Consequently, we also propose a practical map-matching framework that combines the advantages of the fast succinct bag-of-words technique and the highly discriminative M2T holistic view descriptor. The results of experiments conducted using the publicly available radish dataset verify the efficacy of our proposed approach.
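The three steps outlined in the abstract (build a local map, plan a deterministic viewpoint from it, then describe the view as a bag of words for fast matching) can be illustrated with a toy sketch. Everything below is hypothetical: the paper's actual map representation, viewpoint planner, and descriptor are not specified here, so this sketch substitutes a 2-D occupancy grid, a centroid-based viewpoint (deterministic, so similar maps yield similar viewpoints), and quantized bearings as stand-in "visual words".

```python
# Toy sketch of an M2T-style pipeline. All representations are
# illustrative stand-ins, not the method from the paper.
import math
from collections import Counter

def build_local_map(points, size=8):
    """Step 1 (toy): rasterize 2-D obstacle points into an occupancy grid."""
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        grid[min(int(y), size - 1)][min(int(x), size - 1)] = 1
    return grid

def plan_viewpoint(grid):
    """Step 2 (toy): pick a viewpoint deterministically from the map
    (here, the centroid of occupied cells) so that similar local maps
    produce similar viewpoints -- the uniqueness assumption in the text."""
    cells = [(x, y) for y, row in enumerate(grid)
             for x, v in enumerate(row) if v]
    n = len(cells)
    return (sum(x for x, _ in cells) / n, sum(y for _, y in cells) / n)

def describe_view(grid, viewpoint, n_bins=8):
    """Step 3 (toy): describe the synthetic view from the viewpoint.
    Bearings to occupied cells are quantized into bins that act as
    'words' in a bag-of-words histogram."""
    vx, vy = viewpoint
    words = []
    for y, row in enumerate(grid):
        for x, v in enumerate(row):
            if v and (x, y) != (int(vx), int(vy)):
                bearing = math.atan2(y - vy, x - vx)
                words.append(int((bearing + math.pi)
                                 / (2 * math.pi) * n_bins) % n_bins)
    return Counter(words)

def match_score(desc_a, desc_b):
    """Fast text-style matching: cosine similarity of word histograms."""
    dot = sum(desc_a[w] * desc_b[w] for w in desc_a)
    na = math.sqrt(sum(v * v for v in desc_a.values()))
    nb = math.sqrt(sum(v * v for v in desc_b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Because every stage is a deterministic function of the local map, two robots observing the same place produce near-identical descriptors, and the histogram comparison is as cheap as any bag-of-words lookup, which is the trade the abstract describes: a view-dependent but highly discriminative descriptor made matchable by fixing the viewpoint.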
