International Symposium on Quality of Service

ATMPA: Attacking Machine Learning-based Malware Visualization Detection Methods via Adversarial Examples



Abstract

Since the threat of malicious software (malware) has become increasingly serious, automatic malware detection techniques have received growing attention, and machine learning (ML)-based visualization detection methods have become increasingly popular. In this paper, we demonstrate that state-of-the-art ML-based visualization detection methods are vulnerable to Adversarial Example (AE) attacks. We develop a novel Adversarial Texture Malware Perturbation Attack (ATMPA) method based on gradient descent and L-norm optimization, in which an attacker introduces tiny perturbations into the transformed (image) dataset so that ML-based malware detection methods fail completely. Experimental results on the MS BIG malware dataset show that a small perturbation can reduce the detection accuracy of several ML-based methods to 0%, with an average attack transferability rate of 74.1%.
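The abstract describes perturbing malware images with a gradient-based, norm-bounded attack. As a rough illustration of that idea (not the paper's actual ATMPA implementation), the sketch below applies a Fast Gradient Sign Method step under an L-infinity budget to a flattened grayscale "malware image", using a simple logistic-regression stand-in for the detector; the names `fgsm_perturb`, `w`, and `b` are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step under an L-infinity budget eps.

    x    : flattened image with pixels in [0, 1], shape (d,)
    w, b : weights/bias of a logistic-regression "detector" (stand-in model)
    y    : true label (1 = malware)
    eps  : maximum per-pixel perturbation
    """
    p = sigmoid(w @ x + b)            # detector's malware probability
    grad = (p - y) * w                # d(cross-entropy loss)/dx for this model
    x_adv = x + eps * np.sign(grad)   # move each pixel along the gradient sign
    return np.clip(x_adv, 0.0, 1.0)   # keep pixels in the valid range

rng = np.random.default_rng(0)
d = 64 * 64                           # a 64x64 grayscale image, flattened
w = rng.normal(size=d)
b = 0.0
x = rng.random(d)                     # toy "malware image"
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.05)
print(float(np.max(np.abs(x_adv - x))))  # perturbation magnitude, <= 0.05
```

For a label of 1 (malware), the step pushes every pixel in the direction that lowers the detector's malware probability while the `eps` budget keeps the change visually tiny, which is the intuition behind the "small interference, 0% accuracy" result reported above.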

