International Conference on Neural Information Processing

Effectiveness of Adversarial Attacks on Class-Imbalanced Convolutional Neural Networks

Abstract

The performance of convolutional neural networks (CNNs) has increased considerably in the last couple of years. However, as with most machine learning methods, these networks suffer from the data imbalance problem: the underlying training dataset comprises an unequal number of samples for each label/class. Such imbalance induces a phenomenon known as domain shift, which causes the model to generalise poorly when presented with previously unseen data. Recent research has focused on a technique called the gradient sign method, which intensifies domain shift in CNNs by modifying inputs to deliberately yield erroneous model outputs while appearing unmodified to human observers. Several commercial systems rely on image recognition techniques performing well, so adversarial attacks pose a serious threat to their integrity. In this work we present an experimental study that sheds light on the link between adversarial attacks, imbalanced learning and transfer learning. Through a series of experiments we evaluate the fast gradient sign method on class-imbalanced CNNs, linking model vulnerabilities to the characteristics of the underlying training set and internal model knowledge.
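For context, the fast gradient sign method referenced in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation; the `epsilon` value, the [0, 1] pixel range and the function name are assumptions made for the example:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast gradient sign method: take one step of size epsilon in the
    direction of the sign of the loss gradient w.r.t. the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # A small epsilon keeps the perturbation visually imperceptible
    # while still pushing the input across the decision boundary.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

In a study like this one, such an attack would be run against each class of an imbalanced test set, comparing the accuracy drop on majority versus minority classes.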
