IEEE Transactions on Emerging Topics in Computational Intelligence

A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning



Abstract

Machine Learning (ML) algorithms, specifically supervised learning, are widely used in modern real-world applications that rely on Computational Intelligence (CI) as their core technology, such as autonomous vehicles, assistive robots, and biometric systems. Attacks that cause misclassifications or mispredictions can lead to erroneous decisions and, in turn, unreliable operation. Designing robust ML that provides reliable results in the presence of such attacks has become a top priority in the field of adversarial machine learning. An essential driver of rapid progress in robust ML is the arms race between attack and defense strategists. An important prerequisite for this arms race, however, is access to a well-defined system model so that experiments can be repeated by independent researchers. This article proposes a fine-grained system-driven taxonomy for specifying ML applications and adversarial system models unambiguously, so that independent researchers can replicate experiments and escalate the arms race to develop more evolved and robust ML applications. The article provides taxonomies for: 1) the dataset, 2) the ML architecture, 3) the adversary's knowledge, capability, and goal, 4) the adversary's strategy, and 5) the defense response. In addition, the relationships among these models and taxonomies are analyzed through a proposed adversarial machine learning cycle. The models and taxonomies are merged to form a comprehensive system-driven taxonomy that represents the arms race between ML applications and adversaries in recent years. The taxonomies encode best practices in the field, help evaluate and compare the contributions of research works, and reveal gaps in the field.
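The five taxonomy components listed above can be pictured as a structured record that one lab fills in and another lab can replicate. A minimal sketch follows; the class names, category values, and example entries (MNIST, FGSM, adversarial training) are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encodings of the adversary axes; the paper's actual
# categories may be finer-grained.
class Knowledge(Enum):
    WHITE_BOX = "full knowledge of model and training data"
    GRAY_BOX = "partial knowledge"
    BLACK_BOX = "query access only"

class Goal(Enum):
    TARGETED = "force a specific misclassification"
    UNTARGETED = "force any misclassification"

@dataclass
class AdversaryModel:
    knowledge: Knowledge
    capability: str   # e.g. "perturb test inputs", "poison training data"
    goal: Goal

@dataclass
class SystemModel:
    """One record per experiment, covering the five taxonomy components."""
    dataset: str               # component 1: dataset specification
    architecture: str          # component 2: ML architecture
    adversary: AdversaryModel  # component 3: knowledge, capability, goal
    attack_strategy: str       # component 4: adversary's strategy
    defense: str               # component 5: defense response

# A fully specified system model that an independent researcher
# could use to repeat the experiment:
spec = SystemModel(
    dataset="MNIST (60k train / 10k test)",
    architecture="CNN, 2 conv + 2 dense layers",
    adversary=AdversaryModel(
        Knowledge.WHITE_BOX,
        "bounded L-inf perturbation of test inputs",
        Goal.UNTARGETED,
    ),
    attack_strategy="FGSM",
    defense="adversarial training",
)
print(spec.adversary.knowledge.name)
```

The point of such a record is the prerequisite the abstract names: every field an attack or defense result depends on is stated explicitly, so a replication attempt has no unspecified degrees of freedom.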
