IEEE Transactions on Mobile Computing

Efficient Indoor Positioning with Visual Experiences via Lifelong Learning



Abstract

Positioning with visual sensors in indoor environments has many advantages: it doesn't require infrastructure or accurate maps, and it is more robust and accurate than other modalities such as WiFi. However, one of the biggest hurdles that prevents its practical application on mobile devices is the time-consuming visual processing pipeline. To overcome this problem, this paper proposes a novel lifelong learning approach to enable efficient, real-time visual positioning. We exploit the fact that, after following a previous visual experience multiple times, one can gradually discover clues on how to traverse it with much less effort, e.g., which parts of the scene are more informative, and what kind of visual elements to expect. Such second-order information is recorded as parameters, which provide key insights into the context and empower our system to dynamically optimise itself to stay localised at minimum cost. We implement the proposed approach on an array of mobile and wearable devices, and evaluate its performance in two indoor settings. Experimental results show our approach can reduce the visual processing time by up to two orders of magnitude, while achieving sub-metre positioning accuracy.
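The abstract's core idea — accumulating "second-order" informativeness statistics over repeated traversals and then spending the expensive visual pipeline only on the regions that have proven useful — can be sketched as follows. This is a hedged illustration, not the paper's actual algorithm; the `ExperienceMap` class, its exponential-decay update, and the fixed region grid are all assumptions made for the example.

```python
# Hypothetical sketch of the lifelong-learning idea: keep a per-region
# informativeness score, updated each time the same route is traversed,
# then only run the costly feature pipeline on the top-scoring regions.

class ExperienceMap:
    def __init__(self, num_regions, decay=0.9):
        # Learned informativeness per image region (assumption: a fixed grid).
        self.scores = [0.0] * num_regions
        self.decay = decay

    def update(self, region_matches):
        # region_matches[i]: feature matches observed in region i this pass.
        # Exponential moving average accumulates experience across traversals.
        for i, m in enumerate(region_matches):
            self.scores[i] = self.decay * self.scores[i] + (1 - self.decay) * m

    def select_regions(self, budget):
        # Return the indices of the `budget` most informative regions,
        # in ascending order, for the next traversal's reduced pipeline.
        ranked = sorted(range(len(self.scores)), key=lambda i: -self.scores[i])
        return sorted(ranked[:budget])


exp = ExperienceMap(num_regions=8)
# Simulate several traversals where regions 2 and 5 consistently match well.
for _ in range(5):
    exp.update([1, 0, 9, 2, 0, 8, 1, 0])

selected = exp.select_regions(budget=2)
print(selected)  # the two regions worth processing on future runs
```

Under this toy model, later traversals would extract and match features only inside the selected regions, which is how the processing cost could drop by orders of magnitude while the map still provides enough matches to stay localised.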


