International Conference on Data Science and Advanced Analytics

Minimizing expected loss for risk-avoiding reinforcement learning



Abstract

This paper considers the design of a reinforcement learning (RL) agent that can strike a balance between return and risk. First, we discuss several favorable properties that an RL risk model should have, and then propose a definition of risk based on expected negative rewards. We also design a Q-decomposition-based framework that allows a reinforcement learning agent to control the balance between risk and profit. Experiments on both artificial and real-world stock datasets demonstrate that the proposed risk model satisfies the desired properties of an RL-based risk learning model and significantly outperforms other approaches at avoiding risk.
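The abstract describes decomposing the value function so that expected gains and expected losses are learned separately and then traded off. The paper's exact formulation is not given here, so the following is only a minimal sketch of that idea under stated assumptions: rewards are split into a gain channel and a loss channel, each with its own tabular Q-function, and the policy maximizes `q_gain - lam * q_loss`, where the class name `RiskAwareQAgent` and the trade-off parameter `lam` are hypothetical.

```python
import random

class RiskAwareQAgent:
    """Sketch of a Q-decomposition-style agent (hypothetical names).

    q_gain accumulates positive rewards, q_loss accumulates the
    magnitude of negative rewards (the expected-loss risk measure);
    the policy maximizes q_gain - lam * q_loss, with lam controlling
    the risk/profit balance.
    """

    def __init__(self, n_states, n_actions, lam=1.0, alpha=0.1, gamma=0.95):
        self.lam, self.alpha, self.gamma = lam, alpha, gamma
        self.n_actions = n_actions
        self.q_gain = [[0.0] * n_actions for _ in range(n_states)]
        self.q_loss = [[0.0] * n_actions for _ in range(n_states)]

    def value(self, s, a):
        # Combined criterion: expected gain penalized by expected loss.
        return self.q_gain[s][a] - self.lam * self.q_loss[s][a]

    def act(self, s, eps=0.1):
        # Epsilon-greedy over the combined criterion.
        if random.random() < eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.value(s, a))

    def update(self, s, a, r, s2):
        # Decompose the scalar reward into gain and loss channels.
        gain, loss = max(r, 0.0), max(-r, 0.0)
        # Both components bootstrap from the greedy action of the
        # combined criterion, so they describe the same policy.
        a2 = max(range(self.n_actions), key=lambda b: self.value(s2, b))
        self.q_gain[s][a] += self.alpha * (
            gain + self.gamma * self.q_gain[s2][a2] - self.q_gain[s][a])
        self.q_loss[s][a] += self.alpha * (
            loss + self.gamma * self.q_loss[s2][a2] - self.q_loss[s][a])
```

Raising `lam` makes the agent forgo actions whose occasional large losses outweigh their average profit, which is the risk-avoiding behavior the abstract evaluates on stock data.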
