Conference on Neural Information Processing Systems

A Regularized Approach to Sparse Optimal Policy in Reinforcement Learning


Abstract

We propose and study a general framework for regularized Markov decision processes (MDPs) in which the goal is to find an optimal policy that maximizes the expected discounted total reward plus a policy regularization term. Existing entropy-regularized MDPs can be cast into our framework. Moreover, under our framework, many regularization terms induce multi-modality and sparsity, which are potentially useful in reinforcement learning. In particular, we present necessary and sufficient conditions for a regularizer to induce a sparse optimal policy. We also conduct a full mathematical analysis of the proposed regularized MDPs, including the optimality condition, performance error, and sparseness control. We provide a generic method for devising regularization forms and propose off-policy actor-critic algorithms for complex environment settings. We empirically analyze the numerical properties of optimal policies and compare the performance of different sparse regularization forms in discrete and continuous environments.
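The abstract states that some policy regularizers yield sparse optimal policies, in contrast to the softmax policies produced by standard entropy regularization. As a minimal illustration of that idea only, the sketch below implements the sparsemax mapping that arises under a Tsallis-entropy-style regularizer, one known instance of such a framework; the function name, NumPy implementation, and temperature parameter are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sparsemax(q_values, temperature=1.0):
    """Sparse policy from action values via Euclidean projection onto the simplex.

    Under a Tsallis-entropy-style regularizer the optimal policy takes this
    sparsemax form: clearly sub-optimal actions get exactly zero probability,
    unlike softmax, which always assigns positive mass to every action.
    The temperature is an illustrative knob for how sparse the policy becomes.
    """
    z = np.asarray(q_values, dtype=float) / temperature
    z_sorted = np.sort(z)[::-1]                  # values in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    in_support = 1 + k * z_sorted > cumsum       # actions kept in the support
    k_max = k[in_support][-1]                    # size of the support
    tau = (cumsum[k_max - 1] - 1.0) / k_max      # threshold for truncation
    return np.maximum(z - tau, 0.0)              # valid distribution, sums to 1

# Example: softmax gives every action positive probability; sparsemax zeroes
# out the weak actions, which is the sparsity behavior the paper studies.
print(sparsemax([2.0, 1.9, 0.1, -1.0]))  # probabilities ~ [0.55, 0.45, 0.0, 0.0]
```

Lowering the temperature concentrates the policy on fewer actions, which gives a concrete sense of the "sparseness control" mentioned in the abstract.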
