Journal: Evolving Systems

Adversarial learning: the impact of statistical sample selection techniques on neural ensembles



Abstract

Adversarial learning is a recently introduced term which refers to the machine learning process in the presence of an adversary whose main goal is to cause dysfunction to the learning machine. The key problem in adversarial learning is to determine when and how an adversary will launch its attacks. It is important to equip the deployed machine learning system with an appropriate defence strategy so that it can still perform adequately in an adversarial learning environment. In this paper we investigate artificial neural networks as the machine learning algorithm to operate in such an environment, owing to their ability to learn a complex and nonlinear function even with little prior knowledge about the underlying true function. Two types of adversarial attacks are investigated: targeted attacks, which are aimed at a specific group of instances, and random attacks, which are aimed at arbitrary instances. We hypothesise that a neural ensemble performs better than a single neural network in adversarial learning. We test this hypothesis using simulated adversarial attacks, based on artificial, UCI and spam data sets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against both types of attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust against noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
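The abstract distinguishes targeted attacks (aimed at a specific group of instances) from random attacks (aimed at arbitrary instances). The paper's own attack-generation procedure is not reproduced on this page; as a minimal sketch, assuming the simulated attacks take the common form of label flipping on binary training data, the two attack types might be generated like this (function names and the flip rate are illustrative, not taken from the paper):

```python
import numpy as np

def random_attack(y, rate, rng):
    """Random attack: flip a fraction `rate` of labels chosen uniformly
    from the whole training set (binary labels 0/1)."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

def targeted_attack(y, target_class, rate, rng):
    """Targeted attack: flip labels only among instances of one specific
    class, so the corruption is concentrated rather than white noise."""
    y = y.copy()
    pool = np.flatnonzero(y == target_class)
    idx = rng.choice(pool, size=int(rate * len(pool)), replace=False)
    y[idx] = 1 - y[idx]
    return y

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)          # clean binary labels
y_rand = random_attack(y, 0.10, rng)       # 10% of all labels flipped
y_targ = targeted_attack(y, target_class=1, rate=0.10, rng=rng)
```

The paper's hypothesis could then be tested by training a single neural network and an ensemble (e.g. bagged networks) on the attacked labels and comparing test accuracy; the key point from the abstract is that the targeted variant corrupts one class systematically, so robustness to it is not implied by robustness to white noise.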
