International Symposium on Computing and Networking Workshops

Efficient Exploration by Decision Making Considering Curiosity and Episodic Memory in Deep Reinforcement Learning

Abstract

Reinforcement learning is difficult in sparse-reward environments, such as large-scale real spaces, where rewards are hard to obtain. In recent years, methods that promote exploration in such environments by generating intrinsic rewards based on curiosity have received much attention. However, their efficiency degrades in large state spaces. In this paper, we propose a DQN-based algorithm that takes both episodic memory and intrinsic rewards into account during decision making in order to improve the efficiency of exploration. We evaluate the proposed method on several tasks.
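The abstract describes the algorithm only at a high level. As a rough illustration of the general idea (not the authors' implementation; every function name and parameter below is a hypothetical assumption), action selection could mix the extrinsic Q-values of a DQN with a curiosity bonus derived from prediction error and an episodic-novelty bonus derived from distances to states stored in an episodic memory:

```python
import numpy as np

def curiosity_bonus(predicted_next, actual_next):
    # Curiosity-style intrinsic reward: squared prediction error of a
    # (hypothetical) forward model on the next state.
    return float(np.sum((np.asarray(predicted_next) - np.asarray(actual_next)) ** 2))

def episodic_novelty(state, episodic_memory, k=2):
    # Episodic-memory bonus: large when the state is far from states
    # already visited in this episode (mean of k nearest squared distances).
    if not episodic_memory:
        return 1.0
    dists = sorted(float(np.sum((np.asarray(m) - np.asarray(state)) ** 2))
                   for m in episodic_memory)
    mean_d = float(np.mean(dists[:k]))
    return 1.0 / np.sqrt(mean_d + 1.0)

def select_action(q_values, intrinsic_bonuses, beta=0.5):
    # Decision making that considers both extrinsic Q-values and
    # intrinsic bonuses; beta trades off exploitation vs. exploration.
    scores = np.asarray(q_values) + beta * np.asarray(intrinsic_bonuses)
    return int(np.argmax(scores))
```

With a sufficiently large intrinsic bonus, an action whose Q-value alone would not be greedy can be selected, which is the exploration-promoting effect the abstract refers to; the exact bonus definitions and how they enter the DQN objective are specified in the paper itself.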
