Approximate Receding Horizon Approach for Markov Decision Processes: Average Reward Case

Abstract

The authors consider an approximation scheme for solving Markov Decision Processes (MDPs) with countable state space, finite action space, and bounded rewards, which uses an approximate solution of a fixed finite-horizon sub-MDP of a given infinite-horizon MDP to create a stationary policy; they call this "approximate receding horizon control." They first analyze the performance of the approximate receding horizon control for infinite-horizon average reward under an ergodicity assumption, which also generalizes the result obtained by White. The authors then study two examples of the approximate receding horizon control via lower bounds to the exact solution to the sub-MDP. The first control policy is based on a finite-horizon approximation of Howard's policy improvement of a single policy, and the second policy is based on a generalization of the single-policy improvement to multiple policies. They also provide a simple alternative proof of the policy improvement for countable state space. The authors discuss practical implementations of these schemes via simulation.
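The core idea of the scheme — solve a finite-horizon sub-MDP approximately, then act greedily with respect to its value function to obtain a stationary policy — can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a *finite* state space (the paper treats countable state spaces), exact backward induction in place of an approximate sub-MDP solution, and hypothetical array conventions (`P[a, s, s']` for transitions, `R[a, s]` for rewards).

```python
import numpy as np

def finite_horizon_values(P, R, H):
    """H-step optimal value function by backward induction.
    P: transition probabilities, shape (A, S, S); R: rewards, shape (A, S)."""
    _, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(H):
        # Q[a, s] = R[a, s] + sum over s' of P[a, s, s'] * V[s']
        Q = R + P @ V
        V = Q.max(axis=0)
    return V

def receding_horizon_policy(P, R, H):
    """Stationary policy that is one-step greedy w.r.t. the H-step values.
    This is the 'approximate receding horizon control' idea in sketch form."""
    V = finite_horizon_values(P, R, H)
    Q = R + P @ V
    return Q.argmax(axis=0)  # one action index per state

# Toy 2-state, 2-action MDP: action 0 stays in place, action 1 flips state.
# Staying in state 1 pays 1 per step, so the greedy policy should move to
# state 1 and remain there.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # action 0: identity
              [[0.0, 1.0], [1.0, 0.0]]])  # action 1: flip
R = np.array([[0.0, 1.0],    # action 0: reward 1 only for staying in state 1
              [0.5, 0.0]])   # action 1: small reward for leaving state 0
policy = receding_horizon_policy(P, R, H=10)
print(policy)  # → [1 0]: flip out of state 0, stay in state 1
```

The horizon H plays the role of the fixed finite-horizon sub-MDP; the paper's contribution concerns bounding the infinite-horizon average reward of such a policy under ergodicity, which this sketch does not attempt.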

