Journal of Parallel and Distributed Computing

The security of machine learning in an adversarial setting: A survey

Abstract

Machine learning (ML) methods have demonstrated impressive performance in many application fields such as autopilot, facial recognition, and spam detection. Traditionally, ML models are trained and deployed in a benign setting, in which the testing and training data have identical statistical characteristics. However, this assumption usually does not hold when the ML model is deployed in an adversarial setting, where some statistical properties of the data can be tampered with by a capable adversary. Specifically, it has been observed that adversarial examples (also known as adversarial input perturbations) elaborately crafted during the training/test phases can seriously undermine ML performance. The susceptibility of ML models in adversarial settings, and the corresponding countermeasures, have been studied by many researchers in both the academic and industrial communities. In this work, we present a comprehensive overview of investigations into the security properties of ML algorithms under adversarial settings. First, we analyze the ML security model to develop a blueprint for this interdisciplinary research area. Then, we review adversarial attack methods and discuss the defense strategies against them. Finally, building on the reviewed work, we outline promising directions of future work for designing more secure ML models. (C) 2019 Elsevier Inc. All rights reserved.
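The adversarial input perturbations mentioned above can be made concrete with the fast gradient sign method (FGSM), one of the best-known attack techniques in the literature this survey covers. The sketch below applies FGSM to a hypothetical logistic-regression victim model in NumPy; the model, weights, and perturbation budget eps are illustrative assumptions, not material from the paper.

```python
# Minimal FGSM sketch: x_adv = x + eps * sign(grad_x loss).
# The logistic-regression model and all numbers are assumptions for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Craft an adversarial example against a logistic-regression model."""
    p = sigmoid(w @ x + b)    # model's predicted probability of class 1
    grad_x = (p - y) * w      # gradient of the cross-entropy loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # hypothetical trained weights
b = 0.1
x = rng.normal(size=8)        # a clean test input whose true label we take to be y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))  # lower: the attack raises the loss
```

Because the perturbation follows the sign of the loss gradient, the adversarial score is guaranteed to move away from the true label even though each individual feature changes by at most eps.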
