International Conference on Science of Cyber Security

A Security Concern About Deep Learning Models


Abstract

This paper studies potential security hazards in the obstacle recognition and processing system (ORPS) of self-driving cars, which is built on a deep learning architecture. We perform an attack that embeds a backdoor in the Mask R-CNN used by the ORPS by poisoning its training dataset. Under normal circumstances, the backdoored model identifies obstacles (vehicles) accurately. When the backdoor is triggered, however, the model may change the size (bounding box and mask) and the confidence of the detected obstacles, which can cause serious accidents. The experimental results show that it is possible to embed a backdoor in the ORPS: the backdoored network noticeably changes the size of the bounding box and the corresponding mask of poisoned instances. On the other hand, embedding the backdoor only slightly affects detection accuracy on inputs without the backdoor trigger, which makes the attack imperceptible to users. We hope this simple work draws attention to the security of self-driving technology and other deep learning based models, and motivates work on how to judge or detect the existence of backdoors in such systems.
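As a rough, hypothetical illustration of the kind of dataset poisoning the abstract describes, the Python sketch below stamps a trigger patch onto a training image and shrinks the ground-truth bounding box and mask, so that an instance-segmentation model such as Mask R-CNN trained on such samples could learn to mis-size obstacles whenever the trigger is present. The function name, trigger placement, and scale factor are assumptions made for illustration, not the authors' implementation.

import numpy as np

def poison_sample(image, box, mask, trigger, scale=0.5):
    """Return a poisoned copy of (image, box, mask).

    image:   H x W x 3 uint8 array
    box:     (x1, y1, x2, y2) ground-truth bounding box of the obstacle
    mask:    H x W boolean instance mask
    trigger: h x w x 3 uint8 patch stamped into the top-left corner
    scale:   factor by which the labeled box and mask are shrunk
    """
    img = image.copy()
    th, tw = trigger.shape[:2]
    img[:th, :tw] = trigger  # embed the backdoor trigger into the image

    # Shrink the labeled bounding box around its center so a model trained on
    # such samples learns to report a smaller obstacle when the trigger appears.
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * scale, (y2 - y1) * scale
    new_box = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

    # Keep only the part of the original mask inside the new box, so the
    # poisoned box and mask annotations stay consistent with each other.
    new_mask = np.zeros_like(mask)
    xs, ys = max(int(new_box[0]), 0), max(int(new_box[1]), 0)
    xe, ye = int(np.ceil(new_box[2])), int(np.ceil(new_box[3]))
    new_mask[ys:ye, xs:xe] = mask[ys:ye, xs:xe]

    return img, new_box, new_mask

if __name__ == "__main__":
    # Toy usage: a gray 128x128 image with one labeled "vehicle" and a
    # white 8x8 trigger patch.
    image = np.full((128, 128, 3), 128, dtype=np.uint8)
    mask = np.zeros((128, 128), dtype=bool)
    mask[32:96, 32:96] = True
    box = (32, 32, 96, 96)
    trigger = np.full((8, 8, 3), 255, dtype=np.uint8)

    p_img, p_box, p_mask = poison_sample(image, box, mask, trigger)
    print("poisoned box:", p_box, "mask pixels:", int(p_mask.sum()))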
