A Stable Distributed Neural Controller for Physically Coupled Networked Discrete-Time System via Online Reinforcement Learning

Abstract

The large scale, time-varying behavior, and diversity of physically coupled networked infrastructures such as power grids and transportation systems make their controllers complex to design, implement, and extend. To tackle these challenges, we propose an online distributed reinforcement learning control algorithm in which each subsystem (agent) uses a one-layer neural network to adapt to variations in the networked infrastructure. Each controller includes a critic network and an action network, which approximate the strategy utility function and the desired control law, respectively. To avoid a large number of trials and to improve stability, the training of the action network introduces a supervised learning mechanism into the reduction of the long-term cost. The stability of the control system with the learning algorithm is analyzed, and upper bounds on the tracking error and the neural network weights are estimated. The effectiveness of the proposed controller is illustrated in simulation; the results also indicate stability under communication delays and disturbances.
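
For concreteness, the sketch below illustrates the kind of per-agent actor-critic update described above: a one-layer critic that estimates the long-term cost and a one-layer action network whose training combines a supervised term with a reinforcement term. It is a minimal illustration in Python/NumPy; the dimensions, learning rates, quadratic local utility, and supervisory signal u_sup are assumptions made for exposition, not the paper's exact formulation.

    import numpy as np

    class AgentController:
        """One agent's controller: a one-layer critic (long-term cost estimate)
        and a one-layer action network (control law). Illustrative sketch only;
        dimensions, gains, and the utility are assumed, not taken from the paper."""

        def __init__(self, n_state, n_ctrl, lr_critic=0.01, lr_actor=0.01, gamma=0.95):
            rng = np.random.default_rng(0)
            self.Wc = 0.1 * rng.standard_normal(n_state)            # critic weights
            self.Wa = 0.1 * rng.standard_normal((n_state, n_ctrl))  # action-network weights
            self.lr_c, self.lr_a, self.gamma = lr_critic, lr_actor, gamma

        def critic(self, x):
            # Scalar estimate of the long-term cost from the local state/error x.
            return float(np.tanh(x) @ self.Wc)

        def act(self, x):
            # Control signal produced by the one-layer action network.
            return np.tanh(x) @ self.Wa

        def update(self, x, u, x_next, u_sup):
            # Assumed quadratic local utility (instantaneous cost).
            r = float(x @ x + 0.1 * u @ u)
            phi, phi_next = np.tanh(x), np.tanh(x_next)

            # Critic: temporal-difference style step toward r + gamma * J(x_next).
            td_err = r + self.gamma * float(phi_next @ self.Wc) - float(phi @ self.Wc)
            self.Wc += self.lr_c * td_err * phi

            # Action network: a supervised term pulling the control toward a
            # supervisory signal u_sup, plus a reinforcement term weighted by
            # the critic's temporal-difference error.
            self.Wa -= self.lr_a * (np.outer(phi, u - u_sup) + td_err * np.outer(phi, u))

    # Example: one learning step for a hypothetical 3-state, 1-input agent.
    agent = AgentController(n_state=3, n_ctrl=1)
    x = np.array([0.2, -0.1, 0.05])
    u = agent.act(x)
    agent.update(x, u, x_next=0.9 * x, u_sup=np.zeros(1))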
