Journal of Parallel and Distributed Computing

The security of machine learning in an adversarial setting: A survey



Abstract

Machine learning (ML) methods have demonstrated impressive performance in many application fields such as autopilot, facial recognition, and spam detection. Traditionally, ML models are trained and deployed in a benign setting, in which the training and testing data have identical statistical characteristics. However, this assumption usually does not hold when the ML model is deployed in an adversarial setting, where some statistical properties of the data can be tampered with by a capable adversary. Specifically, it has been observed that adversarial examples (also known as adversarial input perturbations) elaborately crafted during the training/testing phases can seriously undermine ML performance. The susceptibility of ML models in adversarial settings and the corresponding countermeasures have been studied by many researchers in both academia and industry. In this work, we present a comprehensive overview of investigations into the security properties of ML algorithms under adversarial settings. First, we analyze the ML security model to develop a blueprint for this interdisciplinary research area. Then, we review adversarial attack methods and discuss the defense strategies against them. Finally, building on the reviewed work, we identify promising directions for future research on designing more secure ML models. (C) 2019 Elsevier Inc. All rights reserved.
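
As an illustration of the adversarial input perturbations mentioned in the abstract, the sketch below shows a minimal fast gradient sign method (FGSM)-style attack, one well-known way to craft such examples at test time. The toy model, input shapes, and epsilon value are illustrative assumptions and are not taken from the surveyed paper.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb input x so the classifier's loss on the true label y increases,
    # while keeping each pixel's change bounded by epsilon.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()   # step along the sign of the loss gradient
    return torch.clamp(x_adv, 0.0, 1.0).detach()  # keep the result a valid image

# Illustrative usage on a hypothetical 28x28 grayscale classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)      # a "clean" input with pixel values in [0, 1]
y = torch.tensor([3])             # its true label
x_adv = fgsm_attack(model, x, y)  # visually similar to x, yet may be misclassified

Stepping along the sign of the gradient maximizes the loss increase under an L-infinity budget of epsilon; defenses such as adversarial training typically retrain the model on examples like x_adv.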
