European Symposium on Research in Computer Security

Adversarial Examples for Malware Detection



Abstract

Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor, yet carefully selected, perturbations. In this work, we expand on existing adversarial example crafting algorithms to construct a highly effective attack that uses adversarial examples against malware detection models. To this end, we identify and overcome key challenges that prevent existing algorithms from being applied to malware detection: our approach operates in discrete and often binary input domains, whereas previous work operated only in continuous and differentiable domains. In addition, our technique guarantees that the adversarially manipulated program retains its malware functionality. In our evaluation, we train a neural network for malware detection on the DREBIN data set and achieve classification performance matching the state of the art reported in the literature. Using the augmented adversarial crafting algorithm, we then manage to mislead this classifier on 63% of all malware samples. We also present a detailed evaluation of defensive mechanisms previously introduced in the computer vision context, including distillation and adversarial training, which show promising results.
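The core idea of crafting in a discrete, binary feature domain while preserving functionality can be illustrated with a greedy feature-addition attack: features are only flipped from absent to present, so the original program's behavior is never removed. This is a minimal sketch against a hypothetical linear surrogate scorer, not the paper's actual neural-network attack; `craft_adversarial`, `w`, `b`, and `max_changes` are illustrative names and assumptions.

```python
import numpy as np

def craft_adversarial(x, w, b, max_changes=20):
    """Greedy feature-addition attack on a linear malware scorer.

    x: binary feature vector (1 = feature present in the app)
    w, b: weights and bias of a linear model; score > 0 means "malware"
    Only 0 -> 1 flips are allowed, so the program keeps its original
    functionality (features are only added, never removed).
    """
    x = x.copy()
    for _ in range(max_changes):
        if x @ w + b <= 0:  # already classified as benign
            break
        # Candidate features: not yet present, and adding them lowers
        # the malware score (negative weight). Pick the strongest one.
        candidates = np.where((x == 0) & (w < 0))[0]
        if candidates.size == 0:
            break
        best = candidates[np.argmin(w[candidates])]
        x[best] = 1
    return x
```

A gradient-based variant of the same idea (rank features by the model's output gradient instead of by linear weights) carries over to differentiable classifiers.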
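Adversarial training, one of the defenses evaluated in the abstract, augments the training stream with adversarially perturbed copies of malware samples so the decision boundary covers the manipulated region. The following is a hedged sketch assuming a logistic-regression classifier and a caller-supplied `craft(x, w, b)` perturbation function; all names and the training setup are illustrative, not the paper's implementation.

```python
import numpy as np

def adversarial_training_step(X, y, w, b, craft, lr=0.1):
    """One epoch of adversarial training for a linear classifier
    with logistic loss. For every malware sample (label 1) we also
    take a gradient step on an adversarially perturbed copy produced
    by `craft`, so the model learns to resist that perturbation.
    """
    for x, label in zip(X, y):
        batch = [x]
        if label == 1:  # augment malware with its adversarial twin
            batch.append(craft(x, w, b))
        for xi in batch:
            p = 1.0 / (1.0 + np.exp(-(xi @ w + b)))  # sigmoid score
            w -= lr * (p - label) * xi               # logistic-loss gradient
            b -= lr * (p - label)
    return w, b
```

In practice `craft` would be the attack from the previous sketch (or a gradient-based equivalent), regenerated each epoch so the perturbations track the current model.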


