Venue: 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning

Safe reinforcement learning in high-risk tasks through policy improvement



Abstract

Reinforcement Learning (RL) methods are widely used for dynamic control tasks. In many cases these are high-risk tasks in which the trial-and-error process may select actions whose execution from unsafe states can be catastrophic. In addition, many of these tasks have continuous state and action spaces, making the learning problem harder and intractable for conventional RL algorithms. So, when the agent begins to interact with a risky environment with a large state-action space, an important question arises: how can we prevent exploration of the state-action space from causing damage to the learning system (or other systems)? In this paper, we define the concept of risk and address the problem of safe exploration in the context of RL. Our notion of safety is concerned with states that can lead to damage. Moreover, we introduce an algorithm that safely improves suboptimal but robust behaviors for continuous state and action control tasks, and that learns efficiently from the experience gathered from the environment. We report experimental results using the helicopter hovering task from the RL Competition.
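The safe-exploration idea summarized in the abstract — keep learning from a suboptimal but robust baseline behavior, and avoid exploratory actions in states flagged as risky — can be illustrated with a toy one-dimensional control sketch. All function names, the risk threshold, and the policies below are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
import random

def baseline_action(state):
    # Trusted, suboptimal-but-robust behavior: steer back toward equilibrium.
    return -0.5 * state

def learned_action(state, weight):
    # Candidate policy being improved, with a small exploratory perturbation.
    return weight * state + random.uniform(-0.1, 0.1)

def is_risky(state, threshold=1.0):
    # Toy risk function: states far from equilibrium are treated as unsafe.
    return abs(state) > threshold

def safe_step(state, weight):
    """Use the learned (exploratory) policy only in states deemed safe;
    otherwise fall back to the trusted baseline behavior."""
    if is_risky(state):
        return baseline_action(state)
    return learned_action(state, weight)
```

In this sketch, exploration is confined to the safe region, so the agent can still gather experience there, while risky states deterministically trigger the known-safe fallback.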

