Knowledge-Based Systems

RL-VAEGAN: Adversarial defense for reinforcement learning agents via style transfer


Abstract

Reinforcement learning (RL) agents parameterized by deep neural networks have achieved great success in many domains. However, deep RL policies have been shown to be vulnerable to adversarial attacks, i.e., inputs with slight perturbations can cause a substantial agent failure. Inspired by recent advances in deep generative networks, which have greatly facilitated the development of adversarial attacks, in this paper we investigate the adversarial robustness of RL agents and propose a novel defense framework for RL based on the idea of style transfer. More precisely, our defense framework, called RL-VAEGAN, combines variational autoencoders (VAEs) and generative adversarial networks (GANs) to learn the style distributions of the original and adversarial states, respectively, and naturally eliminates the threat of adversarial attacks on RL agents by transferring adversarial states to unperturbed legitimate ones under the shared-content latent space assumption. We empirically show that our method is effective against state-of-the-art attacks in white-box and black-box scenarios with diverse magnitudes of perturbation. (c) 2021 Elsevier B.V. All rights reserved.
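The defense pipeline the abstract describes can be sketched as follows. This is a hypothetical toy with random linear maps standing in for the trained VAE-GAN components (the paper's actual encoders, generators, and policy are deep networks, and their training is not described here); the names `E_adv`, `G_clean`, `defend`, and `policy` are illustrative, not the paper's API. The sketch shows only the inference-time flow: encode an adversarial state into the style-invariant content code, re-render it in the clean style, then let the policy act on the transferred state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real observations would be image frames.
STATE_DIM, CONTENT_DIM = 8, 4

# Shared-content latent space assumption: an encoder for the adversarial
# style domain maps a state to a style-invariant content code, and a
# generator for the clean style domain renders content back as a state.
# Random weights stand in for the trained VAE-GAN components.
E_adv = rng.normal(size=(CONTENT_DIM, STATE_DIM))    # adversarial-style encoder
G_clean = rng.normal(size=(STATE_DIM, CONTENT_DIM))  # clean-style generator

def defend(adversarial_state):
    """Transfer an adversarial state into the clean style domain
    before it reaches the RL policy."""
    content = E_adv @ adversarial_state   # strip the adversarial "style"
    return G_clean @ content              # re-render in the clean style

# Stand-in linear policy over two actions.
W = np.vstack([np.ones(STATE_DIM), -np.ones(STATE_DIM)])

def policy(state):
    return int(np.argmax(W @ state))

adv_state = rng.normal(size=STATE_DIM)    # hypothetical perturbed observation
clean_style_state = defend(adv_state)     # defense applied at inference time
action = policy(clean_style_state)
```

The key design point is that the defense is model-agnostic: it sits between the environment and the agent, so the policy itself never needs retraining.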

