
A POMDP Value Iteration Algorithm Based on the Probability Distribution of the Optimal Policy


Abstract

As the scale of POMDP problems in applications keeps growing, heuristic methods based on the belief region reachable under the optimal policy have become a research hotspot. However, although existing algorithms guarantee global optimality, the criterion they use to choose the best action is not precise enough, which limits their efficiency. This paper proposes a new value iteration method, PBVIOP (Probability-Based Value Iteration on Optimal Policy). During depth-first heuristic exploration, the method uses a Monte Carlo procedure to compute the probability that each action is optimal, according to the distribution of each action's Q-value between its upper and lower bounds, and selects the action with the maximum probability as the exploration policy. Experimental results on four benchmark problems show that PBVIOP converges to the globally optimal solution and significantly improves convergence efficiency.
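The abstract only sketches the action-selection rule. A minimal illustrative sketch is given below; it is not the authors' implementation, and it assumes a uniform distribution of each Q-value between its bounds (the abstract does not specify the distribution), with hypothetical function and parameter names.

```python
# Sketch of the Monte Carlo action-selection idea described in the abstract:
# each action's Q-value is only known to lie between a lower and an upper bound,
# so we sample Q-values from an assumed uniform distribution over [lower, upper]
# and estimate the probability that each action is optimal.
import random


def optimal_action_probabilities(lower, upper, num_samples=10000):
    """Estimate P(action a is optimal) from per-action Q-value bounds.

    lower, upper: lists of lower/upper bounds on Q(b, a) for each action.
    Assumes (hypothetically) a uniform distribution between the bounds.
    """
    num_actions = len(lower)
    wins = [0] * num_actions
    for _ in range(num_samples):
        # Draw one Q-value per action, then credit the action with the largest draw.
        draws = [random.uniform(lower[a], upper[a]) for a in range(num_actions)]
        wins[max(range(num_actions), key=lambda a: draws[a])] += 1
    return [w / num_samples for w in wins]


# Example: three actions with overlapping bound intervals.
probs = optimal_action_probabilities(lower=[0.2, 0.5, 0.1], upper=[0.9, 0.7, 0.4])
best_action = max(range(len(probs)), key=lambda a: probs[a])  # explore this action first
```

In this reading, the action whose bound interval is most likely to dominate the others is chosen for the next step of the depth-first exploration, rather than, say, the action with the largest upper bound alone.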
