IEEE Transactions on Emerging Topics in Computational Intelligence

RSAC: A Robust Deep Reinforcement Learning Strategy for Dimensionality Perturbation



Abstract

Artificial agents in autonomous systems such as autonomous vehicles, robots, and drones make predictions from data fused across many sources, such as different sensors. Sensor malfunction is a known problem in the robotics domain. In deep reinforcement learning (DRL), a correct sensor observation corresponds to the true estimate of one dimension of the state vector; noisy estimates from these sensors therefore impair individual dimensions of the state. DRL policies have been shown to falter, choosing wrong actions under adversarial attack or modeling error. It is therefore necessary to examine the effect of dimensionality perturbation on neural policies. To this end, we analyze whether subtle dimensionality perturbation, arising from noise in the input source at test time, distracts the agent's decisions. We also propose RSAC (robust soft actor-critic), an approach that uses the noisy state for prediction while estimating the target from the nominal observation. We find that injecting such noisy input during training does not hamper learning. We ran our simulations in the OpenAI Gym MuJoCo (Walker2d-v2) environment, and our empirical results demonstrate that the proposed approach matches SAC's performance while remaining robust to test-time dimensionality perturbation.
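The dimensionality perturbation described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name and noise parameters are hypothetical, and the full RSAC training loop (actor/critic updates on the noisy state, target values from the nominal observation) is omitted. The sketch only shows how a single faulty sensor reading corrupts one dimension of the state vector:

```python
import numpy as np

def perturb_dimension(state, dim, sigma=0.1, rng=None):
    """Return a copy of `state` with Gaussian noise added to one
    dimension, mimicking a single malfunctioning sensor
    (dimensionality perturbation of the state vector)."""
    rng = rng or np.random.default_rng()
    noisy = state.copy()
    noisy[dim] += rng.normal(0.0, sigma)
    return noisy

# Example: a 17-dimensional Walker2d-style observation vector.
rng = np.random.default_rng(0)
nominal = rng.standard_normal(17)

# In an RSAC-style update, the policy and critics would consume the
# noisy state, while the bootstrapped target is computed from the
# nominal (uncorrupted) observation.
noisy = perturb_dimension(nominal, dim=3, sigma=0.5, rng=rng)

# Only one dimension differs between the two observations.
print(int((noisy != nominal).sum()))  # → 1
```

Because the perturbation touches a single coordinate, the agent's input remains mostly valid, which is why injecting such noise during training can make the policy robust without hampering learning.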
