...
Journal: Transactions of the Society of Instrument and Control Engineers (計測自動制御学会論文集)
A study on architecture, algorithms and internal representation for reinforcement learning with recurrent neural networks

Abstract

Most algorithms for reinforcement learning face difficulty in achieving optimal performance when the state of the environment is not completely known. The authors have proposed a method for overcoming this problem by using recurrent neural networks in a learning agent. In this paper, we discuss the implementation of the proposed method using several types of network architecture and supervised learning algorithms. Further, the internal representation of the environment acquired in the learning agent is examined using a technique of cluster analysis. The results show that the learning agent achieves optimal performance in reinforcement learning tasks by constructing an accurate internal model, despite incomplete perception of the state of the environment.
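The key idea in the abstract is that a recurrent hidden state lets the agent distinguish environment states that produce identical observations (incomplete perception). The following is a minimal, hypothetical sketch of that mechanism, not the authors' exact architecture or learning algorithm: an Elman-style recurrent update with random weights, applied to two observation histories that end in the same observation. The resulting hidden states differ, so a policy conditioned on the hidden state can act correctly where a memoryless policy could not.

```python
import math
import random

HIDDEN = 4  # size of the recurrent hidden state (arbitrary choice)

def step(h, x, W_in, W_rec):
    """One Elman-style recurrent update: h' = tanh(W_in x + W_rec h)."""
    return [math.tanh(sum(W_in[i][j] * x[j] for j in range(len(x)))
                      + sum(W_rec[i][k] * h[k] for k in range(HIDDEN)))
            for i in range(HIDDEN)]

random.seed(0)
W_in = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
W_rec = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]

# Two histories that end in the *identical* observation [0, 1];
# only the earlier cue differs (a toy stand-in for perceptual aliasing).
hist_a = [[1, 0], [0, 1]]  # cue A, then the aliased observation
hist_b = [[0, 0], [0, 1]]  # cue B, then the same aliased observation

def run(history):
    """Roll the recurrent layer over a whole observation history."""
    h = [0.0] * HIDDEN
    for x in history:
        h = step(h, x, W_in, W_rec)
    return h

h_a, h_b = run(hist_a), run(hist_b)
# Final observations are equal, but the hidden states differ, so a
# policy reading h can select different actions for the two histories.
print(h_a != h_b)  # True
```

In the paper's setting this recurrent state is trained (via supervised learning algorithms) to form an internal model of the environment; here the weights are random purely to illustrate that history is retained.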

