
Computing Approximate Solutions to Markov Renewal Programs with Continuous State Spaces.


Abstract

Value iteration and policy iteration are two well-known computational methods for solving Markov renewal decision processes. Value iteration converges linearly, while policy iteration (typically) converges quadratically and is therefore more attractive in principle. However, when the state space is very large (or continuous), the latter requires solving a large linear system (or integral equation) at each iteration and becomes impractical. We propose an approximate policy iteration method, targeted especially at systems with continuous or large state spaces for which the Bellman (expected cost-to-go) function is relatively smooth (or piecewise smooth). Such systems occur quite frequently in practice. The method is based on approximating the Bellman function by a linear combination of an a priori fixed set of base functions. At each policy iteration, we build a linear system in the coefficients of these base functions and solve it approximately. We give special attention to a particular case of finite element approximation in which the Bellman function is expressed directly as a convex combination of its values at a finite set of grid points.
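The finite element variant described above can be sketched in a few lines. The following is a minimal illustration, not the report's actual method or model: it uses a hypothetical machine-replacement problem (wear level x in [0,1]; keep or replace) and piecewise-linear "hat" interpolation, so that the Bellman function at any continuous state is a convex combination of its values at two neighbouring grid points. Policy evaluation then reduces to a small linear system in the grid-point values, and policy improvement uses the interpolated function.

```python
import numpy as np

def hat_weights(grid, y):
    """Express y as a convex combination of its two neighbouring grid
    points (piecewise-linear "hat" basis functions on a 1-D grid)."""
    y = np.clip(y, grid[0], grid[-1])
    j = np.searchsorted(grid, y)
    if j == 0:
        return [(0, 1.0)]
    t = (y - grid[j - 1]) / (grid[j] - grid[j - 1])
    return [(j - 1, 1.0 - t), (j, t)]

def approx_policy_iteration(grid, cost, step, actions=(0, 1),
                            gamma=0.9, max_iter=50):
    """Approximate policy iteration with the Bellman function
    represented by its values at the grid points (collocation)."""
    n = len(grid)
    policy = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: collocating the evaluation equations at the
        # grid points yields the linear system (I - gamma * P_pi) v = c_pi.
        P = np.zeros((n, n))
        c = np.zeros(n)
        for i, x in enumerate(grid):
            a = policy[i]
            c[i] = cost(x, a)
            for j, w in hat_weights(grid, step(x, a)):
                P[i, j] += w
        v = np.linalg.solve(np.eye(n) - gamma * P, c)
        # Policy improvement using the interpolated Bellman function.
        new_policy = np.array([
            min(actions, key=lambda a: cost(x, a) + gamma * sum(
                w * v[j] for j, w in hat_weights(grid, step(x, a))))
            for x in grid], dtype=int)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return v, policy

# Hypothetical toy model (illustration only): action 0 = keep the
# machine (cost = current wear x, wear grows by 0.3), action 1 =
# replace it (fixed cost 0.6, wear resets to 0).
cost = lambda x, a: 0.6 if a == 1 else x
step = lambda x, a: 0.0 if a == 1 else min(1.0, x + 0.3)
grid = np.linspace(0.0, 1.0, 21)
v, policy = approx_policy_iteration(grid, cost, step)
```

Each outer iteration solves one n-by-n linear system rather than iterating the Bellman operator to convergence, which is the trade-off the abstract highlights; for genuinely large grids the exact `np.linalg.solve` would itself be replaced by an approximate (iterative) solver.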
