
Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning



Abstract

Robot path planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). With the proposed method, an environment is first autonomously divided at junctions into a set of path-fragments. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as associated path-fragments connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to reduce exploration time. Distinct from other methods, our method does not ignore the important information about the regions between junctions (the path-fragments), and the resulting number of path-fragments is smaller than in other methods. Evaluation is done via Webots physics-based 3D simulation and real-robot experiments in which only distance sensors are available. Results show that our method represents the environment effectively; it enables the robot to solve the goal-oriented navigation problem in a single episode, fewer than most Reinforcement Learning (RL) based methods require. The running time is proved to be finite and scales well with the environment, and the resulting number of path-fragments matches the environment well.
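The abstract does not include an implementation, but the junction-connected path-fragment representation it describes can be illustrated with a minimal data-structure sketch. The Python code below is only an approximation inferred from the abstract: the names (PathFragment, plan_route) and the breadth-first search over junctions are hypothetical stand-ins, not the paper's SOINN-based associative memory or its junction-reasoning technique.

    # Illustrative sketch only, based on the abstract: path-fragments stored as
    # sequences of common-pattern (CP) indices, connected at junctions.
    # All names here are hypothetical, not taken from the paper.
    from collections import deque
    from dataclasses import dataclass, field


    @dataclass
    class PathFragment:
        """A path segment between two junctions, stored as the sequence of
        common-pattern indices observed along that segment."""
        start: int                                        # junction id where the fragment begins
        end: int                                          # junction id where the fragment ends
        cp_sequence: list = field(default_factory=list)   # indices of common patterns


    def plan_route(fragments, start_junction, goal_junction):
        """Breadth-first search over junctions; returns the fragments to traverse.

        This stands in for the paper's associative recall and junction
        reasoning, which are not reproduced here.
        """
        adjacency = {}
        for frag in fragments:
            adjacency.setdefault(frag.start, []).append(frag)

        queue = deque([(start_junction, [])])
        visited = {start_junction}
        while queue:
            junction, route = queue.popleft()
            if junction == goal_junction:
                return route
            for frag in adjacency.get(junction, []):
                if frag.end not in visited:
                    visited.add(frag.end)
                    queue.append((frag.end, route + [frag]))
        return None  # goal not reachable with the stored fragments

As a usage example, calling plan_route(fragments, 0, 3) on a list of PathFragment objects returns the ordered fragments (and hence their CP sequences) leading from junction 0 to junction 3, mirroring the idea that the robot recalls a goal-directed path as a chain of path-fragments joined at junctions.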

Bibliographic Details

  • Source
    IEICE Transactions on Information and Systems, 2010, No. 3, pp. 569-582 (14 pages)
  • Author Affiliations

    Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama-shi, 226-8503 Japan;

    Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama-shi, 226-8503 Japan;

    Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama-shi, 226-8503 Japan;

    Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama-shi, 226-8503 Japan; Imaging Science and Engineering Laboratory, Tokyo Institute of Technology, Yokohama-shi, 226-8503 Japan;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • CLC Classification
  • Keywords

    neural networks; associative memory; path-planning; reinforcement learning (RL);

