Neurocomputing

Online adaptive optimal control algorithm based on synchronous integral reinforcement learning with explorations

Abstract

In this study, we present a novel algorithm, based on synchronous policy iteration, to solve the continuous-time infinite-horizon optimal control problem for input-affine system dynamics. The integral reinforcement is measured as an excitation signal to estimate the solution to the Hamilton-Jacobi-Bellman equation. In addition, the proposed method is completely model-free; that is, no a priori knowledge of the system is required. Using the adaptive tuning law, the actor and critic neural networks simultaneously approximate the optimal value function and policy. The persistence of excitation condition is required to guarantee the convergence of the two networks. Unlike traditional policy iteration algorithms, this method removes the requirement of an initial admissible policy. The effectiveness of the proposed algorithm is verified through numerical simulations. (c) 2022 Elsevier B.V. All rights reserved.
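
As a rough illustration of the actor-critic structure described in the abstract, the sketch below runs synchronous integral reinforcement learning on an assumed two-dimensional linear plant with a quadratic critic basis. The plant matrices, basis functions, learning rates, exploration signal, and state resets are illustrative assumptions; the actor here uses knowledge of the input matrix and the exploration term is injected without any compensation, so this is a simplified, partially model-based variant of the standard synchronous policy-iteration form, not the paper's fully model-free scheme.

```python
import numpy as np

# Minimal sketch of synchronous integral reinforcement learning (IRL) on an
# input-affine plant x_dot = A x + B u with stage cost x'Qx + u'Ru.
# Assumptions: example plant, quadratic critic basis, hand-picked gains, and a
# model-based actor (uses B); this is NOT the paper's exact tuning law.

A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # assumed example plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def phi(x):
    """Quadratic critic basis: V_hat(x) = Wc' phi(x)."""
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2])

def dphi(x):
    """Jacobian of phi, used by the actor to form the policy term."""
    x1, x2 = x
    return np.array([[2.0 * x1, 0.0],
                     [x2, x1],
                     [0.0, 2.0 * x2]])

def policy(Wa, x):
    """Actor: u = -1/2 R^{-1} B' dphi(x)' Wa."""
    return -0.5 * np.linalg.solve(R, B.T @ dphi(x).T @ Wa)

dt, T_irl = 0.002, 0.05        # integration step and IRL interval length
alpha_c, alpha_a = 5.0, 1.0    # critic / actor learning rates (assumed)
Wc = np.ones(3)                # critic weights; no initial admissible policy assumed
Wa = np.ones(3)                # actor weights
x = np.array([1.0, -1.0])
t = 0.0

steps = int(T_irl / dt)
for k in range(4000):
    if k % 200 == 0:                            # occasional state reset keeps the
        x = np.random.uniform(-1.0, 1.0, 2)     # data informative (PE aid)
    x0 = x.copy()
    rho = 0.0                                   # integral reinforcement over [t, t+T]
    for _ in range(steps):
        e = 0.2 * (np.sin(7.3 * t) + np.sin(1.1 * t))   # exploration signal
        u = policy(Wa, x) + e
        rho += (x @ Q @ x + float(u @ R @ u)) * dt
        x = x + (A @ x + (B @ u).ravel()) * dt  # Euler step of the plant
        t += dt
    # Integral Bellman relation: Wc'(phi(x0) - phi(x)) should equal rho.
    d = phi(x0) - phi(x)
    delta = Wc @ d - rho
    # Normalized-gradient critic update, tuned synchronously with the actor.
    Wc = Wc - alpha_c * dt * delta * d / (1.0 + d @ d) ** 2
    # Actor weights are pulled toward the critic weights.
    Wa = Wa - alpha_a * dt * (Wa - Wc)

print("critic weights:", Wc)
print("actor weights :", Wa)
```

Under persistently exciting data, the critic weights in this sketch drift toward the coefficients of the quadratic value function of the exercised policy while the actor weights track the critic, mirroring the simultaneous tuning of the two networks described in the abstract.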
