Annual Reviews in Control

Reinforcement learning for control: Performance, stability, and deep approximators

Abstract

Reinforcement learning (RL) offers powerful algorithms to search for optimal controllers of systems with nonlinear, possibly stochastic dynamics that are unknown or highly uncertain. This review mainly covers artificial-intelligence approaches to RL, from the viewpoint of the control engineer. We explain how approximate representations of the solution make RL feasible for problems with continuous states and control actions. Stability is a central concern in control, and we argue that while the control-theoretic RL subfield called adaptive dynamic programming is dedicated to it, stability of RL largely remains an open question. We also cover in detail the case where deep neural networks are used for approximation, leading to the field of deep RL, which has shown great success in recent years. With the control practitioner in mind, we outline opportunities and pitfalls of deep RL; and we close the survey with an outlook that - among other things - points out some avenues for bridging the gap between control and artificial-intelligence RL techniques. (C) 2018 Elsevier Ltd. All rights reserved.
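To make the role of approximate representations concrete, here is a minimal sketch (not taken from the survey) of semi-gradient Q-learning with a linear radial-basis-function approximator on a toy scalar system with a continuous state and a small set of discrete control actions. The dynamics, feature map, and hyperparameters below are illustrative assumptions only.

```python
# Minimal sketch: semi-gradient Q-learning with a linear-in-features approximator,
# illustrating how function approximation makes RL applicable to continuous states.
# The toy dynamics, features, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar system: x_{k+1} = x_k + 0.1*u + noise; the goal is to keep x near 0.
ACTIONS = np.array([-1.0, 0.0, 1.0])      # discrete control actions
CENTERS = np.linspace(-2.0, 2.0, 11)      # RBF centers spanning the state range
WIDTH = 0.4

def features(x):
    """Radial-basis features of a continuous state x."""
    return np.exp(-((x - CENTERS) ** 2) / (2 * WIDTH ** 2))

def step(x, u):
    x_next = np.clip(x + 0.1 * u + 0.01 * rng.normal(), -2.0, 2.0)
    reward = -x_next ** 2                 # negative quadratic stage cost
    return x_next, reward

# One weight vector per discrete action: Q(x, a) ~ w[a] @ features(x)
w = np.zeros((len(ACTIONS), len(CENTERS)))
alpha, gamma, eps = 0.05, 0.95, 0.1

for episode in range(200):
    x = rng.uniform(-2.0, 2.0)
    for k in range(100):
        phi = features(x)
        q = w @ phi
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q))
        x_next, r = step(x, ACTIONS[a])
        # Semi-gradient TD(0) update toward the greedy target at x_next
        td_target = r + gamma * np.max(w @ features(x_next))
        w[a] += alpha * (td_target - q[a]) * phi
        x = x_next

print("Greedy action at x=1.5:", ACTIONS[int(np.argmax(w @ features(1.5)))])
```

Replacing the linear feature map with a deep neural network trained on similar temporal-difference targets is, roughly, the step from this sketch toward the deep RL methods the review covers.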
