Conference paper · International Joint Conference on Neural Networks

Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series


Abstract

Deep learning based models are vulnerable to adversarial attacks. These attacks can be far more harmful in the targeted case, where an attacker tries not only to fool the deep learning model but also to misguide it into predicting a specific class. Such targeted and untargeted attacks are tailored to an individual sample and require adding an imperceptible noise to that sample. In contrast, a universal adversarial attack computes a single imperceptible noise which can be added to any sample of the given dataset, forcing the deep learning model to predict a wrong class. To the best of our knowledge, targeted and universal attacks on time series data have not been studied in previous work. In this work, we perform untargeted, targeted and universal adversarial attacks on the UCR time series datasets. Our results show that deep learning based time series classification models are vulnerable to these attacks. We also show that universal adversarial attacks generalize well, since they need only a fraction of the training data. We have also evaluated adversarial training as a defense. Our results show that models trained adversarially using the Fast Gradient Sign Method (FGSM), a single-step attack, are able to defend against FGSM as well as the Basic Iterative Method (BIM), a popular iterative attack.
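The attack families named in the abstract can be sketched on a toy differentiable classifier. This is a minimal illustration, not the authors' implementation: the linear softmax model, the epsilon/step values, and all helper names below are hypothetical stand-ins for the paper's deep time series classifiers, and the universal routine is a simplified accumulate-and-clip variant rather than any specific published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear softmax classifier over a length-50 series (stand-in for a deep model).
n_classes, seq_len = 3, 50
W = rng.normal(size=(n_classes, seq_len))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_grad(x, y):
    # Gradient of cross-entropy loss w.r.t. the input series:
    # dL/dx = W^T (softmax(Wx) - onehot(y)); analytic here, autograd in practice.
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def fgsm(x, y, eps):
    """Untargeted single-step attack: ascend the loss along the gradient sign."""
    return x + eps * np.sign(input_grad(x, y))

def targeted_fgsm(x, target, eps):
    """Targeted variant: descend the loss toward the attacker-chosen class."""
    return x - eps * np.sign(input_grad(x, target))

def bim(x, y, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM steps, clipped to an eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

def universal_perturbation(X, Y, eps, alpha, epochs):
    """One shared noise v for all samples: grow it wherever the model still wins."""
    v = np.zeros(seq_len)
    for _ in range(epochs):
        for x, y in zip(X, Y):
            if np.argmax(W @ (x + v)) == y:  # still correctly classified
                v = np.clip(v + alpha * np.sign(input_grad(x + v, y)), -eps, eps)
    return v

x = rng.normal(size=seq_len)
y = int(np.argmax(W @ x))  # use the model's own prediction as the label
x_fgsm = fgsm(x, y, eps=0.1)
x_bim = bim(x, y, eps=0.1, alpha=0.02, steps=10)

X = rng.normal(size=(20, seq_len))
Y = [int(np.argmax(W @ xi)) for xi in X]
v = universal_perturbation(X, Y, eps=0.1, alpha=0.02, epochs=5)
```

Note that all three attacks bound the perturbation with the same eps-ball constraint, which is what keeps the noise "imperceptible"; the universal noise `v` is computed once from a subset of the data and then reused on every sample, which is the generalization property the abstract refers to.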
