Journal of Machine Learning Research

Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes


Abstract

Off-policy evaluation (OPE) in reinforcement learning allows one to evaluate novel decision policies without needing to conduct exploration, which is often costly or otherwise infeasible. We consider for the first time the semiparametric efficiency limits of OPE in Markov decision processes (MDPs), where actions, rewards, and states are memoryless. We show existing OPE estimators may fail to be efficient in this setting. We develop a new estimator based on cross-fold estimation of $q$-functions and marginalized density ratios, which we term double reinforcement learning (DRL). We show that DRL is efficient when both components are estimated at fourth-root rates and is also doubly robust when only one component is consistent. We investigate these properties empirically and demonstrate the performance benefits due to harnessing memorylessness.
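To make the abstract's description concrete, below is a minimal, hypothetical sketch of a doubly robust off-policy value estimate built from an estimated q-function and an estimated marginalized state-action density ratio, as the abstract describes. The function names, trajectory layout, and signatures (q_hat, v_hat, mu_hat, drl_estimate) are illustrative assumptions, not taken from the paper or its code.

```python
# A rough sketch, assuming the standard doubly robust form for OPE:
# a baseline value term plus density-ratio-weighted temporal-difference
# corrections of the q-function estimate. Not the paper's implementation.
import numpy as np

def drl_estimate(trajectories, q_hat, v_hat, mu_hat, gamma=1.0):
    """Average a doubly robust score over logged trajectories.

    trajectories: list of trajectories, each a list of (s, a, r, s_next) tuples
    q_hat(t, s, a):  estimated q-function under the evaluation policy
    v_hat(t, s):     estimated state value, e.g. E_{a ~ pi_e}[q_hat(t, s, a)];
                     assumed to return 0 past the horizon / at terminal states
    mu_hat(t, s, a): estimated marginal density ratio of evaluation vs.
                     behavior state-action occupancy at step t
    """
    scores = []
    for traj in trajectories:
        total = v_hat(0, traj[0][0])  # baseline term from the initial state
        for t, (s, a, r, s_next) in enumerate(traj):
            # density-ratio-weighted correction of the q-function estimate
            td = r + gamma * v_hat(t + 1, s_next) - q_hat(t, s, a)
            total += (gamma ** t) * mu_hat(t, s, a) * td
        scores.append(total)
    return float(np.mean(scores))
```

In the cross-fold scheme the abstract mentions, q_hat and mu_hat would be fit on data folds held out from the trajectories they are evaluated on; this sketch simply takes the fitted components as given callables.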
