International Joint Conference on Neural Networks

Learning to Play Hard Exploration Games Using Graph-Guided Self-Navigation



Abstract

This work considers the problem of deep reinforcement learning (RL) with long time dependencies and sparse rewards, as are found in many hard exploration games. A graph-based representation is proposed to allow an agent to perform self-navigation for environmental exploration. The graph representation not only effectively models the environment structure, but also efficiently traces the agent state changes and the corresponding actions. By encouraging the agent to earn a new influence-based curiosity reward for new game observations, the whole exploration task is divided into sub-tasks, which are effectively solved using a unified deep RL model. Experimental evaluations on hard exploration Atari Games demonstrate the effectiveness of the proposed method. The source code and learned models will be released to facilitate further studies on this problem.
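To make the high-level idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of a graph memory for exploration: it records state transitions as the agent acts, pays a count-based curiosity bonus for newly observed states, and can replay a stored action path for self-navigation back to a known state. All class and method names are illustrative assumptions.

```python
from collections import deque


class ExplorationGraph:
    """Toy graph memory: track visited states/transitions, reward novelty."""

    def __init__(self, bonus=1.0):
        self.edges = {}   # state -> {action: next_state}
        self.visits = {}  # state -> visit count
        self.bonus = bonus

    def step(self, state, action, next_state):
        """Record a transition and return a count-based curiosity reward."""
        self.edges.setdefault(state, {})[action] = next_state
        self.visits[next_state] = self.visits.get(next_state, 0) + 1
        # First visit earns the full bonus; repeats decay it.
        return self.bonus / self.visits[next_state]

    def path_to(self, start, goal):
        """BFS over recorded transitions: an action sequence the agent can
        replay to self-navigate from `start` to `goal`, or None."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, nxt in self.edges.get(state, {}).items():
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None
```

In this toy form, the decaying bonus plays the role of the curiosity signal that drives the agent toward unseen observations, while `path_to` illustrates how a stored graph lets the agent return to a frontier state and treat exploration beyond it as a sub-task.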


