Advances in Neural Networks - ISNN 2007, Part 2; Lecture Notes in Computer Science, vol. 4492

State Space Partition for Reinforcement Learning Based on Fuzzy Min-Max Neural Network


Abstract

In this paper, a tabular reinforcement learning (RL) method named FMM-RL is proposed based on an improved fuzzy min-max (FMM) neural network. The FMM neural network is used to partition the state space of the RL problem, with the aim of alleviating the "curse of dimensionality" in RL; the convergence speed is also improved markedly. Regions of the state space correspond to the hyperboxes of the FMM network, whose minimum and maximum points define the partition boundaries. During training of the FMM neural network, the state space is partitioned through operations on the hyperboxes, so favorable generalization over the state space can be obtained. Finally, the method is applied to learning behaviors for a reactive robot. Experiments show that the algorithm can effectively solve the navigation problem in a complicated unknown environment.
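The abstract only describes FMM-RL at a high level. As a rough illustration of the kind of hyperbox partition it refers to, the sketch below pairs a simplified fuzzy min-max hyperbox layer with a tabular Q-learning update, using the index of the winning hyperbox as the discrete state. The membership function, the per-dimension expansion limit `theta`, the sensitivity `gamma`, and all names (`FMMPartition`, `state_of`, `q_update`) are illustrative assumptions in the spirit of Simpson's classic FMM formulation, not the improved FMM or the exact FMM-RL algorithm of the paper.

```python
import numpy as np

class FMMPartition:
    """Hyperbox partition of a continuous state space (illustrative sketch).

    Each hyperbox j is stored as a min point V[j] and a max point W[j];
    the index of the box an input falls into (or grows into) serves as
    the discrete state of a tabular RL agent.
    """

    def __init__(self, dim, theta=0.2, gamma=4.0):
        self.dim = dim
        self.theta = theta           # per-dimension expansion limit (assumed)
        self.gamma = gamma           # membership sensitivity (assumed)
        self.V = np.empty((0, dim))  # hyperbox min points
        self.W = np.empty((0, dim))  # hyperbox max points

    def membership(self, x):
        """Simplified fuzzy membership of x in every hyperbox (1 = inside)."""
        above = np.clip(self.gamma * (x - self.W), 0.0, 1.0)
        below = np.clip(self.gamma * (self.V - x), 0.0, 1.0)
        return 1.0 - (above + below).mean(axis=1)

    def state_of(self, x):
        """Map a continuous state x to a hyperbox index, expanding the best
        matching box when the expansion test allows it, else adding a box."""
        if len(self.V):
            j = int(np.argmax(self.membership(x)))
            grown = np.maximum(self.W[j], x) - np.minimum(self.V[j], x)
            if np.all(grown <= self.theta):           # expansion test
                self.V[j] = np.minimum(self.V[j], x)
                self.W[j] = np.maximum(self.W[j], x)
                return j
        # otherwise start a new point-sized hyperbox at x
        self.V = np.vstack([self.V, x])
        self.W = np.vstack([self.W, x])
        return len(self.V) - 1


# Tabular Q-learning on top of the partition (toy usage).
partition = FMMPartition(dim=2)
n_actions, alpha, discount = 4, 0.1, 0.95
Q = {}                              # Q[box index] -> array of action values

def q_row(s):
    return Q.setdefault(s, np.zeros(n_actions))

def q_update(x, a, r, x_next):
    s, s_next = partition.state_of(x), partition.state_of(x_next)
    q_row(s)[a] += alpha * (r + discount * q_row(s_next).max() - q_row(s)[a])

q_update(np.array([0.10, 0.20]), a=0, r=1.0, x_next=np.array([0.15, 0.25]))
```

Standard FMM training also involves overlap testing and contraction between hyperboxes, which this sketch omits; the paper's "operations on the hyperboxes" may differ in detail.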
