SIGKDD Explorations

Adversary Resistant Deep Neural Networks with an Application to Malware Detection



Abstract

Outside the highly publicized victories in the game of Go, there have been numerous successful applications of deep learning in the fields of information retrieval, computer vision, and speech recognition. In cybersecurity, an increasing number of companies have begun exploring the use of deep learning (DL) in a variety of security tasks, with malware detection among the most popular. These companies claim that deep neural networks (DNNs) could help turn the tide in the war against malware infection. However, DNNs are vulnerable to adversarial samples, a shortcoming that plagues most, if not all, statistical and machine learning models. Recent research has demonstrated that those with malicious intent can easily circumvent deep learning-powered malware detection by exploiting this weakness. To address this problem, previous work developed defense mechanisms based on augmenting training data or enhancing model complexity. However, after analyzing DNN susceptibility to adversarial samples, we discover that the current defense mechanisms are limited and, more importantly, cannot provide theoretical guarantees of robustness against adversarial sample-based attacks. As such, we propose a new adversary resistant technique that obstructs attackers from constructing impactful adversarial samples by randomly nullifying features within data vectors. Our proposed technique is evaluated on a real-world dataset with 14,679 malware variants and 17,399 benign programs. We theoretically validate the robustness of our technique, and empirically show that it significantly boosts DNN robustness to adversarial samples while maintaining high classification accuracy. To demonstrate the general applicability of our proposed method, we also conduct experiments using the MNIST and CIFAR-10 datasets, widely used in image recognition research.
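The defense the abstract describes, randomly nullifying features within data vectors so an attacker cannot predict which perturbed features will reach the model, can be sketched as below. This is a minimal illustration only: the function name, keep probability, and NumPy masking implementation are assumptions, not the authors' published code.

```python
import numpy as np

def random_feature_nullification(x, keep_prob=0.7, rng=None):
    """Randomly zero out (nullify) features in each input vector.

    x         : 2-D array of shape (n_samples, n_features)
    keep_prob : probability that each feature is kept; with probability
                1 - keep_prob a feature is set to zero (nullified)
    rng       : optional seed or NumPy Generator for reproducibility
    """
    rng = np.random.default_rng(rng)
    # Independent Bernoulli mask per feature per sample; because the mask
    # is drawn fresh on every call, an adversary cannot know in advance
    # which of their perturbed features will survive.
    mask = rng.random(x.shape) < keep_prob
    return x * mask

# Usage sketch: apply the same stochastic masking to inputs both during
# training and at inference time, then feed the masked vectors to the DNN.
x = np.ones((2, 5))
masked = random_feature_nullification(x, keep_prob=0.6, rng=0)
```

In this sketch the masking is applied to raw feature vectors; in practice it could equally be fused into the network's input layer, which is a design choice the abstract does not pin down.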
