IFAC PapersOnLine

Attacking DNN-based Intrusion Detection Models


Abstract

Intrusion detection plays an important role in public security domains. Dynamic deep neural network (DNN)-based intrusion detection models have been shown to detect network intrusions effectively and in a timely manner. Although DNN-based intrusion detection models exhibit strong performance, in this paper we verify that they can be easily attacked by well-designed small adversarial perturbations. We design an effective procedure that employs commonly used adversarial perturbations to attack well-trained DNN detection models on the NSL-KDD dataset. We further find that, under attack, the models' ability to recognize the true labels of abnormal samples degrades more than their performance on normal samples.
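The abstract does not spell out which perturbation method the authors use. As a hypothetical sketch of the general idea, the fast gradient sign method (FGSM) perturbs an input in the direction that increases the detector's loss, nudging an "abnormal" sample toward being scored as normal. The toy logistic "detector" below, its weights, and the epsilon value are all illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic 'detector':
    x_adv = x + eps * sign(dL/dx), where L is the binary cross-entropy.
    For a logistic model, dL/dx has the closed form (p - y) * w."""
    p = sigmoid(x @ w + b)              # model's intrusion probability
    grad_x = (p - y) * w                # analytic BCE gradient w.r.t. x
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)  # stay in feature range

# Toy "trained" detector: flags flows whose (normalized) features are large.
rng = np.random.default_rng(0)
w = np.ones(10)
b = -5.0
x = rng.uniform(0.6, 0.9, size=10)      # an "abnormal" flow, true label y = 1
p_clean = sigmoid(x @ w + b)
x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.1)
p_adv = sigmoid(x_adv @ w + b)
print(p_clean, p_adv)                   # the small perturbation lowers detection probability
```

The perturbation budget `eps` bounds the per-feature change, which is what makes the attack "small": the adversarial flow stays close to the original while the detector's confidence drops.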
