Home > NIH Literature > Sensors (Basel, Switzerland) > Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach



Abstract

After decades of research, indoor localization still lacks a solution comparable to GNSS (Global Navigation Satellite System) positioning outdoors. The major reasons are the complex spatial topology and RF transmission environment of indoor spaces. To address these problems, this paper proposes an indoor scene-constrained localization method, inspired by the visual cognition ability of the human brain and by progress in high-level image understanding within the computer vision field. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone using its cameras, WiFi and inertial sensors. In contrast to prior work, the smartphone camera is used to "see" which scene the user is in; with this information, a particle filter algorithm constrained by scene information determines the final location. For indoor scene recognition, we take advantage of deep learning, which has proven highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of particles. As in other fingerprinting localization methods, the proposed system has two stages: offline training and online localization. In the offline stage, an indoor scene model is trained with Caffe (one of the most popular open-source frameworks for deep learning), and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, the smartphone camera recognizes the initial scene, and a particle filter algorithm then fuses the sensor data to determine the final location. To demonstrate the effectiveness of the proposed method, an Android client and a web server were implemented: the Android client collects data and locates the user, while the web server handles indoor scene model training and communication with the client. Comparison experiments show that a positioning accuracy of 1.32 m at the 95% level is achievable with the proposed solution. Both positioning accuracy and robustness are improved over approaches without the scene constraint, including commercial products such as IndoorAtlas.
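The scene-constrained particle filter update described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the rectangular scene bounds, the scalar magnetic-magnitude fingerprint, and the single-value RSSI fingerprint maps are all hypothetical simplifications. Particles are propagated by a dead-reckoning step from the inertial sensors, re-weighted by Gaussian likelihoods against the WiFi and magnetic fingerprints, constrained to the recognized scene (out-of-scene particles get zero weight), and resampled.

```python
import math
import random

def scene_constrained_particle_filter(particles, step, heading, scene_bounds,
                                      rssi_obs, rssi_map, mag_obs, mag_map,
                                      sigma_rssi=4.0, sigma_mag=2.0):
    """One predict/update/resample cycle of a scene-constrained particle filter.

    particles    : list of (x, y) positions in metres
    step, heading: dead-reckoning step length and heading from inertial sensors
    scene_bounds : (xmin, ymin, xmax, ymax) of the camera-recognized scene
    rssi_map     : callable (x, y) -> expected WiFi RSSI at that position
    mag_map      : callable (x, y) -> expected magnetic-field magnitude there
    """
    moved, weights = [], []
    for (x, y) in particles:
        # Predict: advance each particle along the inertial heading with noise.
        nx = x + step * math.cos(heading) + random.gauss(0, 0.1)
        ny = y + step * math.sin(heading) + random.gauss(0, 0.1)
        xmin, ymin, xmax, ymax = scene_bounds
        if not (xmin <= nx <= xmax and ymin <= ny <= ymax):
            # Scene constraint: particles outside the recognized scene die out.
            w = 0.0
        else:
            # Update: Gaussian likelihood of the WiFi fingerprint mismatch...
            d_rssi = rssi_obs - rssi_map((nx, ny))
            w_wifi = math.exp(-(d_rssi ** 2) / (2 * sigma_rssi ** 2))
            # ...combined with the magnetic-field fingerprint likelihood.
            d_mag = mag_obs - mag_map((nx, ny))
            w_mag = math.exp(-(d_mag ** 2) / (2 * sigma_mag ** 2))
            w = w_wifi * w_mag
        moved.append((nx, ny))
        weights.append(w)

    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # Systematic (low-variance) resampling proportional to weight.
    n = len(moved)
    cum, c = [], 0.0
    for w in weights:
        c += w
        cum.append(c)
    start = random.random() / n
    resampled, j = [], 0
    for i in range(n):
        p = start + i / n
        while j < n - 1 and cum[j] < p:
            j += 1
        resampled.append(moved[j])
    return resampled
```

The zero-weight rule is what encodes the scene constraint: once the camera fixes the scene, the posterior is restricted to that region, which is why the scene recognition step improves robustness over a plain WiFi/magnetic fingerprint filter.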
