
A new Q-learning algorithm based on the Metropolis criterion



Abstract

The balance between exploration and exploitation is one of the key problems of action selection in Q-learning. Pure exploitation causes the agent to reach locally optimal policies quickly, whereas excessive exploration degrades the performance of the Q-learning algorithm even though it may accelerate the learning process and help avoid locally optimal policies. In this paper, finding the optimal policy in Q-learning is described as a search for the optimal solution in combinatorial optimization. The Metropolis criterion of the simulated annealing algorithm is introduced in order to balance exploration and exploitation in Q-learning, and a modified Q-learning algorithm based on this criterion, SA-Q-learning, is presented. Experiments show that SA-Q-learning converges more quickly than Q-learning or Boltzmann exploration, and that the search does not suffer from performance degradation due to excessive exploration.
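The acceptance rule at the heart of SA-Q-learning can be made concrete with a short sketch. The Python code below is not the authors' implementation; it is a minimal illustration assuming a tabular Q-function, a hypothetical `env` object exposing `reset()` and `step(action)`, and a geometric cooling schedule. A randomly proposed action replaces the greedy one with the Metropolis acceptance probability exp((Q(s, a_r) - Q(s, a_p)) / T), so exploration is frequent at high temperature and fades as T is annealed.

```python
import math
import random
from collections import defaultdict

def metropolis_action(Q, state, actions, temperature):
    """Choose an action via the Metropolis criterion.

    A random proposal replaces the greedy action with probability
    exp((Q[s, a_random] - Q[s, a_greedy]) / temperature).
    """
    greedy = max(actions, key=lambda a: Q[(state, a)])
    proposal = random.choice(actions)
    delta = Q[(state, proposal)] - Q[(state, greedy)]
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        return proposal  # exploratory step accepted
    return greedy        # exploitation

def sa_q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95,
                  t0=1.0, cooling=0.99):
    """Tabular one-step Q-learning with Metropolis action selection.

    `env` is a hypothetical environment with reset() -> state and
    step(action) -> (next_state, reward, done). The temperature is
    cooled geometrically after each episode, as in simulated
    annealing, so exploration fades as learning progresses.
    """
    Q = defaultdict(float)  # Q[(state, action)], zero-initialized
    temperature = t0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = metropolis_action(Q, state, actions, temperature)
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)])
            state = next_state
        temperature *= cooling  # anneal toward pure exploitation
    return Q
```

Compared with Boltzmann exploration, which softmax-weights every action at every step, the Metropolis test only compares the proposal against the current greedy action, which is the source of the faster convergence reported in the abstract.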

Record

  • Authors

    Guo, MZ; Liu, Y; Malec, Jacek

  • Year: 2004
  • Format: PDF
  • Language: English

