Chinese Control and Decision Conference

Mobile Robot Navigation based on Deep Reinforcement Learning

Abstract

Learning to navigate in an unknown environment is a crucial capability of a mobile robot. The conventional approach to robot navigation consists of three steps: localization, map building, and path planning. However, most conventional navigation methods rely on an obstacle map and lack the ability to learn autonomously. In contrast to the traditional approach, this paper proposes an end-to-end approach that uses deep reinforcement learning for the navigation of mobile robots in an unknown environment. Building on dueling network architectures for deep reinforcement learning (Dueling DQN) and deep reinforcement learning with double Q-learning (Double DQN), a dueling-architecture-based double deep Q-network (D3QN) is adapted in this paper. Through the D3QN algorithm, the mobile robot gradually learns knowledge of the environment as it wanders and learns to navigate to the target destination autonomously using only an RGB-D camera. The experimental results show that the mobile robot can reach the desired targets without colliding with any obstacles.
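The abstract only names the two ingredients of D3QN (a dueling Q-network and a double-DQN target). The sketch below illustrates that combination in PyTorch; the class and function names, the single-channel 84x84 depth input, the layer sizes, and the discount factor are illustrative assumptions and do not come from the paper.

import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Q-network with separate state-value and advantage heads (dueling architecture)."""
    def __init__(self, num_actions: int):
        super().__init__()
        # Convolutional encoder for a single-channel depth image (assumed 84x84).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 7 * 7  # flattened feature size for an 84x84 input
        self.value = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, num_actions))

    def forward(self, x):
        h = self.encoder(x)
        v = self.value(h)                      # V(s): shape (B, 1)
        a = self.advantage(h)                  # A(s, a): shape (B, num_actions)
        # Combine as Q = V + (A - mean(A)) so the two streams stay identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

def double_dqn_target(online_net, target_net, reward, next_obs, done, gamma=0.99):
    """Double DQN target: the online network selects the next action, the target network evaluates it."""
    with torch.no_grad():
        next_actions = online_net(next_obs).argmax(dim=1, keepdim=True)
        next_q = target_net(next_obs).gather(1, next_actions).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q

Training would then regress the online network's Q-value for the taken action toward this target (e.g. with a smooth L1 loss) and periodically copy the online weights into the target network; those details are standard DQN practice rather than anything specified in the abstract.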