Journal: 日本ロボット学会誌 (Journal of the Robotics Society of Japan)

Self-localization method for mobile robots using multiple omnidirectional vision sensors



Abstract

This paper proposes a method for estimating the positions and orientations of multiple robots from a set of azimuth angles, with respect to landmarks and other robots, observed by multiple omnidirectional vision sensors. Our method simultaneously performs self-localization by each robot and reconstruction of the relative configuration between the robots. In the situation where the observed azimuth angles cannot be matched to individual robots, our method reconstructs not only the relative configuration between robots, using "triangle and enumeration constraints," but also the absolute one, using knowledge of landmarks in the environment. To show the validity of our method, it is applied to multiple mobile robots, each equipped with an omnidirectional vision sensor, in a real environment. The experimental results show that our method is more precise than self-localization performed by each robot alone.
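
The abstract only summarizes the approach; the triangle and enumeration constraints and the fusion of mutual observations between robots are detailed in the paper itself. As a point of reference, the sketch below shows the basic ingredient the method builds on: bearing-only self-localization of a single robot from azimuth angles to known landmarks, formulated here as a nonlinear least-squares problem. The function names, landmark coordinates, and the use of scipy.optimize.least_squares are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def bearing_residuals(pose, landmarks, bearings):
    """Residuals between predicted and measured azimuth angles.

    pose      -- (x, y, phi): robot position and heading in the world frame
    landmarks -- (N, 2) array of known landmark positions
    bearings  -- (N,) measured azimuth angles in the robot frame
    (Hypothetical formulation for illustration only.)
    """
    x, y, phi = pose
    predicted = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - phi
    diff = predicted - bearings
    # Wrap angle differences to [-pi, pi] before least-squares fitting.
    return np.arctan2(np.sin(diff), np.cos(diff))

def localize_from_bearings(landmarks, bearings, initial_pose=(0.0, 0.0, 0.0)):
    """Least-squares pose estimate from bearing-only landmark observations."""
    result = least_squares(bearing_residuals, initial_pose,
                           args=(landmarks, bearings))
    return result.x  # estimated (x, y, phi)

# Example: simulate azimuth observations from an unknown pose and recover it.
landmarks = np.array([[5.0, 0.0], [0.0, 5.0], [-4.0, -3.0]])
true_pose = np.array([1.0, 2.0, 0.3])
bearings = (np.arctan2(landmarks[:, 1] - true_pose[1],
                       landmarks[:, 0] - true_pose[0]) - true_pose[2])
print(localize_from_bearings(landmarks, bearings))  # approx. [1.0, 2.0, 0.3]
```

The paper's contribution goes beyond this single-robot case: by also exchanging azimuth observations of the other robots and enforcing geometric consistency across the team, the combined estimate becomes more precise than what each robot obtains on its own.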


