
Security Evaluation of Support Vector Machines in Adversarial Environments



Abstract

Support vector machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attack patterns that can mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, on a public repository.
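To make the evasion setting concrete, the sketch below (an illustrative reconstruction, not the chapter's released code) stages a simple gradient-based evasion attack against a linear SVM trained on synthetic two-cluster data. For a linear SVM with decision function f(x) = w·x + b, the gradient with respect to the input is just w, so an attacker who knows (or estimates) w can shift a malicious sample along -w until it crosses the decision boundary. All data, step sizes, and variable names here are made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class data: cluster near (-2,-2) is benign (0),
# cluster near (+2,+2) is "malicious" (1).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) - 2, rng.randn(50, 2) + 2])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)
w = clf.coef_[0]                            # gradient of f(x) = w.x + b

x = np.array([2.0, 2.0])                    # a malicious sample, f(x) > 0
step = 0.1 * w / np.linalg.norm(w)          # small step along -w

# Descend the decision function until the sample is classified benign.
while clf.decision_function([x])[0] > 0:
    x = x - step

print(clf.predict([x])[0])                  # the shifted sample now evades detection
```

In the chapter's actual evasion attacks the same idea is generalized to non-linear kernels (where the gradient involves the kernel derivatives at the support vectors) and constrained so that the modified sample remains a feasible, functional attack.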
