Proceedings of the 29th Chinese Control Conference

Asymptotically Stable Reinforcement Learning-Based Neural Network Controller using Adaptive Bounding Technique



Abstract

In this paper, a novel asymptotically stable reinforcement learning-based neural network controller using an adaptive bounding technique is proposed for the tracking problem of a class of continuous nonlinear systems. An actor-critic structure is adopted for the controller design: the critic network tunes itself and generates the reinforcement learning signal that tunes the actor network, which in turn generates the input signal to the system. The designed controller achieves asymptotic convergence of both the tracking error and the performance measurement signal to zero, while ensuring boundedness of the parameter estimation errors. No a priori knowledge of the bounds of the unknown quantities is assumed in the controller design. Simulation results on a two-link robot manipulator demonstrate the satisfactory performance of the proposed control scheme.
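The actor-critic structure described in the abstract can be sketched in a few lines. The following is a hedged, illustrative sketch only, not the paper's actual control and tuning laws: a scalar nonlinear plant `x_dot = f(x) + u`, an actor network producing the control input, and a self-tuned critic whose output drives the actor's weight update. All names (`f`, `basis`, `W_a`, `W_c`, the gains) are assumptions introduced here for illustration.

```python
# Illustrative actor-critic tracking controller (NOT the paper's exact design).
# Plant: x_dot = f(x) + u with f unknown to the controller; goal: x tracks xd(t).
import numpy as np

def basis(x):
    """Radial-basis features shared by actor and critic (illustrative choice)."""
    centers = np.linspace(-2.0, 2.0, 5)
    return np.exp(-(x - centers) ** 2)

def simulate(T=2000, dt=0.01, k=5.0, gamma_a=2.0, gamma_c=1.0):
    f = lambda x: -x + 0.5 * np.sin(x)     # plant nonlinearity (unknown to controller)
    xd = lambda t: np.sin(t)               # reference trajectory
    xd_dot = lambda t: np.cos(t)
    x = 0.5
    W_a = np.zeros(5)                      # actor network weights
    W_c = np.zeros(5)                      # critic network weights
    errors = []
    for i in range(T):
        t = i * dt
        e = x - xd(t)                      # tracking error
        phi = basis(x)
        r_hat = W_c @ phi                  # critic output: reinforcement signal
        u = W_a @ phi - k * e + xd_dot(t)  # actor output plus stabilizing feedback
        # Critic tunes itself toward the squared-error performance measure;
        # the actor is tuned by the critic's reinforcement signal.
        W_c += dt * gamma_c * (e * e - r_hat) * phi
        W_a += dt * gamma_a * (-(e + r_hat) * phi)
        x += dt * (f(x) + u)               # Euler-integrate the plant
        errors.append(abs(e))
    return errors

errors = simulate()
```

Under this sketch the tracking error decays from its initial value as the actor learns to compensate the unknown nonlinearity; the paper's actual laws additionally use an adaptive bounding term to guarantee asymptotic (not merely bounded) convergence without prior knowledge of the uncertainty bounds.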
