Neurocomputing

Integral reinforcement learning-based online adaptive event-triggered control for non-zero-sum games of partially unknown nonlinear systems



Abstract

This paper develops an integral reinforcement learning (IRL)-based adaptive control method for the multi-player non-zero-sum (NZS) games of nonlinear continuous-time systems with partially unknown dynamics, within an event-triggered framework. Owing to the IRL formulation, the requirement for knowledge of the system drift dynamics is relaxed in the controller design. Moreover, unlike conventional iterative computation methods, the algorithm developed in this work is implemented in an online adaptive fashion, providing a new way to combine the IRL algorithm with the event-triggered control framework for solving NZS game problems. In the event-based algorithm, a state-dependent triggering condition is presented, which not only guarantees closed-loop system stability but also reduces the computation and communication loads of the controlled plant. By means of the Lyapunov theorem, the uniform ultimate boundedness (UUB) of the system states and the critic weight estimation errors is proved. Finally, two numerical examples demonstrate the efficacy of the proposed method. (c) 2019 Elsevier B.V. All rights reserved.
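The abstract does not reproduce the underlying formulation. As a rough sketch in standard IRL notation (the symbols below are generic and not taken from the paper), consider an N-player system \dot{x} = f(x) + \sum_{j=1}^{N} g_j(x) u_j, where player i minimizes the cost J_i = \int_0^{\infty} \big( Q_i(x) + \sum_{j=1}^{N} u_j^\top R_{ij} u_j \big) \, d\tau. The IRL Bellman equation, evaluated over a reinforcement interval T,

    V_i(x(t-T)) = \int_{t-T}^{t} \Big( Q_i(x(\tau)) + \sum_{j=1}^{N} u_j^\top(\tau) R_{ij} u_j(\tau) \Big) d\tau + V_i(x(t)),

contains no explicit f(x); this is why only partial knowledge of the dynamics (essentially the input matrices g_j) is needed to evaluate the value functions and improve the policies u_i(x) = -\tfrac{1}{2} R_{ii}^{-1} g_i^\top(x) \nabla V_i(x).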
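Likewise, the abstract only states that the triggering condition is state-dependent. A typical condition of this kind in event-triggered adaptive dynamic programming (illustrative only; the parameter \beta and the Lipschitz-type bound L below are assumptions, not the paper's threshold) keeps the gap e_k(t) = \hat{x}_k - x(t) between the last sampled state \hat{x}_k and the current state within a state-dependent bound, e.g.

    \| e_k(t) \|^2 \le \frac{(1-\beta^2)\, \lambda_{\min}(Q_i)}{L^2} \, \| x(t) \|^2,

and triggers a new sample, with the held control u_i(t) = -\tfrac{1}{2} R_{ii}^{-1} g_i^\top(\hat{x}_k) \nabla \hat{V}_i(\hat{x}_k) applied over [t_k, t_{k+1}), only when the bound is violated, so computation and communication occur solely at the event instants.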

