Data Poisoning on Deep Learning Models

Abstract

Deep learning is a form of artificial intelligence (AI) that has seen rapid development and deployment in computer software as a means of implementing AI functionality with greater efficiency and ease than alternative AI solutions, with usage in systems ranging from search and recommendation engines to autonomous vehicles. With demand growing at an exponential pace for deep learning algorithms that can perform increasingly complex tasks in shorter time frames, developments in the efficiency and productivity of these algorithms have far outpaced developments in their security, drawing concern over the many unaddressed vulnerabilities that may be exploited to compromise the integrity of this software. This study investigated the ability of poisoning attacks, a form of attack targeting the vulnerability of deep learning training data, to compromise the integrity of a deep learning model's classification functionality. Experimentation involved processing training data sets with varying deep learning models and incrementally introducing poisoned data sets to observe the efficacy of a poisoning attack under multiple circumstances and to correlate that efficacy with aspects of the model's design conditions. Analysis of the results showed evidence of a decrease in classification ability correlating with an increase in the poison percentage of the training data sets, but the scale of the decrease varied with the parameters specified in the model design. Based on this, it was concluded that poisoning can inflict varying levels of damage on deep learning classification ability depending on the parameters used in the model design, and countermeasures were proposed, such as increasing the epoch count, implementing mechanisms that bolster model fit, and integrating input-level filtration systems.
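The abstract does not specify the models, datasets, or attack variant used in the experiments. As a minimal sketch of the general methodology it describes, the following Python example assumes a label-flipping poisoning attack: a growing fraction of training labels is corrupted, a small classifier is retrained at each poison percentage, and test accuracy is recorded to expose the correlation between poison rate and classification ability. The dataset (scikit-learn's digits), the model (a small MLP), and the poison rates are illustrative choices, not the study's actual setup.

```python
# Illustrative label-flipping poisoning sweep (assumed setup, not the
# paper's exact experiment): corrupt an increasing fraction of training
# labels and measure the resulting drop in test accuracy.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def poison_labels(y, rate, n_classes, rng):
    """Flip a `rate` fraction of labels to random incorrect classes."""
    y = y.copy()
    n_poison = int(rate * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    # Add a random nonzero offset modulo the class count so every
    # poisoned label is guaranteed to differ from the true label.
    y[idx] = (y[idx] + rng.integers(1, n_classes, size=n_poison)) % n_classes
    return y

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for rate in [0.0, 0.1, 0.2, 0.4]:  # incrementally increased poison percentage
    y_poisoned = poison_labels(y_tr, rate, n_classes=10, rng=rng)
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    model.fit(X_tr, y_poisoned)
    print(f"poison rate {rate:.0%}: test accuracy {model.score(X_te, y_te):.3f}")
```

In this sketch, training hyperparameters such as `max_iter` (the epoch budget) and the hidden layer size play the role of the "model design parameters" the abstract refers to: varying them changes how steeply accuracy degrades as the poison percentage rises.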