2010 IEEE International Conference on Fuzzy Systems
Robustness of neural ensembles against targeted and random Adversarial Learning

Abstract

Machine learning has become a prominent tool in various domains owing to its adaptability. However, this adaptability can be exploited by an adversary to cause dysfunction of machine learning, a process known as Adversarial Learning. This paper investigates Adversarial Learning in the context of artificial neural networks. The aim is to test the hypothesis that an ensemble of neural networks trained on the same data manipulated by an adversary would be more robust than a single network. We investigate two attack types: targeted and random. We use Mahalanobis distance and covariance matrices to select targeted attacks. The experiments use both artificial and UCI datasets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against the attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust against noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
