International Conference on Neural Information Processing

Effectiveness of Adversarial Attacks on Class-Imbalanced Convolutional Neural Networks


Abstract

The performance of convolutional neural networks (CNNs) has improved considerably over the last few years. However, as with most machine learning methods, these networks suffer from the data imbalance problem, which arises when the underlying training dataset contains an unequal number of samples per label/class. Such imbalance gives rise to a phenomenon known as domain shift, which causes the model to generalise poorly when presented with previously unseen data. Recent research has focused on a technique called the gradient sign method, which intensifies domain shift in CNNs by modifying inputs so that they deliberately yield erroneous model outputs while appearing unmodified to human observers. Several commercial systems depend on image recognition techniques performing well, so adversarial attacks pose a serious threat to their integrity. In this work we present an experimental study that sheds light on the link between adversarial attacks, imbalanced learning and transfer learning. Through a series of experiments we evaluate the fast gradient sign method on class-imbalanced CNNs, linking model vulnerabilities to the characteristics of their underlying training sets and internal model knowledge.
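For context on the attack the abstract refers to, below is a minimal sketch of the fast gradient sign method in PyTorch: an input is perturbed by epsilon * sign(grad_x J(theta, x, y)), a single step in the direction that most increases the loss. The model, labels and the epsilon = 0.03 budget are illustrative assumptions, not details taken from the paper.

```python
# Minimal FGSM sketch (illustrative; not the authors' implementation).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb inputs x so that the model's loss on labels y increases.

    The perturbation is epsilon * sign(grad_x loss): one step in the
    direction that most increases the loss, clipped back to the valid
    [0, 1] image range.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep perturbed images in range
    return x_adv.detach()
```

Taking only the sign of the gradient bounds each pixel's change by epsilon, which is why such perturbations can fool the model while appearing unmodified to human observers.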
