IEEE Transactions on Neural Networks and Learning Systems
H∞ Static Output-Feedback Control Design for Discrete-Time Systems Using Reinforcement Learning

Abstract

This paper provides necessary and sufficient conditions for the existence of the static output-feedback (OPFB) solution to the H-infinity control problem for linear discrete-time systems. It is shown that the solution of the static OPFB H-infinity control is a Nash equilibrium point. Furthermore, a Q-learning algorithm is developed to find the H-infinity OPFB solution online using data measured along the system trajectories and without knowing the system matrices. This is achieved by solving a game algebraic Riccati equation online and using the measured data. A simulation example shows the effectiveness of the proposed method.
