Design of robust classifiers for adversarial environments

2011 IEEE International Conference on Systems, Man, and Cybernetics

Abstract

In adversarial classification tasks like spam filtering, intrusion detection in computer networks, and biometric identity verification, malicious adversaries can design attacks which exploit vulnerabilities of machine learning algorithms to evade detection, or to force a classification system to generate many false alarms, making it useless. Several works have addressed the problem of designing robust classifiers against these threats, although mainly focusing on specific applications and kinds of attacks. In this work, we propose a model of data distribution for adversarial classification tasks, and exploit it to devise a general method for designing robust classifiers, focusing on generative classifiers. Our method is then evaluated on two case studies concerning biometric identity verification and spam filtering.
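A minimal sketch of the idea behind robust generative classifiers, not the paper's exact formulation: model the malicious class-conditional density as a mixture of the clean distribution and an assumed attack distribution, so the decision rule anticipates samples the adversary has manipulated. The 1-D Gaussian data, the attack probability `p_atk`, and the `attack_shift` parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D data: legitimate vs. malicious samples.
legit = rng.normal(loc=0.0, scale=1.0, size=500)
malicious = rng.normal(loc=3.0, scale=1.0, size=500)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Standard generative classifier: fit class-conditional Gaussians.
mu_l, sd_l = legit.mean(), legit.std()
mu_m, sd_m = malicious.mean(), malicious.std()

# Adversarial data model (assumption): at test time a fraction p_atk of
# malicious samples are shifted toward the legitimate region to evade
# detection. The robust classifier evaluates the malicious class under a
# mixture of the clean and the attacked distributions.
p_atk = 0.3          # assumed probability that a malicious sample is attacked
attack_shift = -2.0  # assumed displacement applied by the adversary

def malicious_density_robust(x):
    clean = gaussian_pdf(x, mu_m, sd_m)
    attacked = gaussian_pdf(x, mu_m + attack_shift, sd_m)
    return (1 - p_atk) * clean + p_atk * attacked

def classify(x, robust=False):
    p_m = malicious_density_robust(x) if robust else gaussian_pdf(x, mu_m, sd_m)
    p_l = gaussian_pdf(x, mu_l, sd_l)
    return int(p_m > p_l)  # 1 = malicious, 0 = legitimate (equal priors assumed)

# An evasive sample sitting between the two clean class means: the standard
# classifier labels it legitimate, while the robust one flags it.
x_evasive = 1.4
print(classify(x_evasive, robust=False))
print(classify(x_evasive, robust=True))
```

The same mixture construction generalizes beyond Gaussians: any generative model can be made robust by replacing its malicious-class density with an expectation over an assumed distribution of adversarial manipulations.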
