IEEE Conference on Computer Communications

Threats of Adversarial Attacks in DNN-Based Modulation Recognition



Abstract

With the emergence of the information age, mobile data has become more random, heterogeneous and massive. Thanks to its many advantages, deep learning is increasingly applied in communication fields such as modulation recognition. However, recent studies show that deep neural networks (DNNs) are vulnerable to adversarial examples, in which subtle perturbations deliberately designed by an attacker can fool a classifier into making mistakes. From the perspective of an attacker, this study adds carefully crafted adversarial perturbations to the modulation signal and explores the threats and impacts of adversarial attacks on DNN-based modulation recognition in different environments. The results show that, under both white-box and black-box settings, the adversarial attack reduces the accuracy of the target model, and the iterative attack outperforms the one-step attack in most scenarios. To keep the attack imperceptible (i.e., the waveform remains consistent before and after perturbation), an appropriate perturbation level is identified without losing the attack effect. Finally, it is shown that the signal confidence level is inversely proportional to the attack success rate, and several groups of highly robust signals are obtained.
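The paper's implementation is not reproduced on this page. As a minimal illustrative sketch of the two attack families compared in the abstract, the following assumes a hypothetical PyTorch classifier model operating on I/Q modulation samples x with labels y; eps bounds the perturbation level (the L-infinity budget that governs waveform invisibility), alpha is the per-iteration step size, and steps is the number of iterations. The one-step function is an FGSM-style attack and the iterative one a PGD-style attack; the paper's exact attack algorithms may differ.

import torch
import torch.nn.functional as F

def one_step_attack(model, x, y, eps):
    # One-step (FGSM-style) attack: a single gradient-sign step of size eps.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def iterative_attack(model, x, y, eps, alpha, steps):
    # Iterative (PGD-style) attack: repeated small gradient-sign steps,
    # each projected back into the eps-ball around the clean signal so the
    # waveform change stays bounded.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
    return x_adv.detach()

In a white-box setting the gradients would come from the target model itself; in a black-box setting they would instead be taken from a substitute model and the resulting perturbations transferred to the target.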


