Chinese Control Conference

Off-policy Reinforcement Learning for Robust Control of Discrete-time Uncertain Linear Systems

Abstract

In this paper, an off-policy reinforcement learning (RL) method is developed for the design of robust stabilizing controllers for discrete-time uncertain linear systems. The proposed robust control design consists of two steps. First, the robust control problem is transformed into an optimal control problem. Second, the off-policy RL method is used to design the optimal control policy, which guarantees robust stability of the original uncertain system. The condition under which the robust control problem and the optimal control problem are equivalent is discussed. The off-policy method requires no knowledge of the system dynamics and efficiently reuses the data collected online to successively improve the approximate optimal control policy at each iteration. Finally, a simulation example is carried out to verify the effectiveness of the presented algorithm for the robust control problem of a discrete-time linear system with uncertainty.
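The abstract gives only a high-level description of the two-step design. As an illustrative, non-authoritative sketch of how an off-policy, data-driven policy iteration for a discrete-time linear-quadratic problem can be organised, the Python snippet below fits a quadratic Q-function to a fixed batch of recorded transitions and then improves the feedback gain from its kernel. All identifiers (`off_policy_q_iteration`, `phi`, the weights `Q`, `R`, the initial gain `K0`) are assumptions introduced for illustration, not the paper's notation, and the stage cost here is a plain quadratic cost rather than the specific augmented cost the paper uses to encode the uncertainty bound.

```python
import numpy as np

def off_policy_q_iteration(data, Q, R, K0, n_iter=20):
    """Data-driven policy iteration on a quadratic Q-function.

    data : list of (x, u, x_next) transitions recorded under an
           exploratory behaviour policy; the same batch is reused
           in every iteration (off-policy learning).
    Q, R : state and input weights of the quadratic stage cost.
    K0   : an initial stabilizing gain for the target policy u = -K x.
    """
    n, m = Q.shape[0], R.shape[0]
    nz = n + m
    iu = np.triu_indices(nz)
    scale = np.where(iu[0] == iu[1], 1.0, 2.0)

    def phi(z):
        # Features of the quadratic Q-function: z^T H z = theta . phi(z),
        # with theta holding the upper triangle of the symmetric kernel H.
        return scale * z[iu[0]] * z[iu[1]]

    K = K0
    for _ in range(n_iter):
        # Policy evaluation: least-squares solution of the Bellman equation
        #   Q(x_k, u_k) - Q(x_{k+1}, -K x_{k+1}) = x_k^T Q x_k + u_k^T R u_k,
        # where the target policy u = -K x enters only through the features.
        A_ls, b_ls = [], []
        for x, u, x_next in data:
            z = np.concatenate([x, u])
            z_next = np.concatenate([x_next, -K @ x_next])
            A_ls.append(phi(z) - phi(z_next))
            b_ls.append(x @ Q @ x + u @ R @ u)
        theta, *_ = np.linalg.lstsq(np.asarray(A_ls), np.asarray(b_ls),
                                    rcond=None)

        # Reconstruct the symmetric kernel H from its upper triangle.
        H = np.zeros((nz, nz))
        H[iu] = theta
        H = H + H.T - np.diag(np.diag(H))

        # Policy improvement: u = -H_uu^{-1} H_ux x.
        K = np.linalg.solve(H[n:, n:], H[n:, :n])
    return K
```

The sketch is off-policy in the sense the abstract describes: the transitions are generated once by an exploratory behaviour policy (for example, u_k = -K0 x_k plus probing noise) and the same batch is reused to evaluate every successive target gain, so no model knowledge and no re-excitation of the plant are required.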
