
Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection



Abstract

Deep learning is becoming a basis of decision-making systems in many application domains, such as autonomous vehicles and health systems, where the risk of misclassification can lead to serious consequences. It is necessary to know to what extent Deep Neural Networks (DNNs) are robust against various types of adversarial conditions. In this paper, we experimentally evaluate DNNs implemented in an embedded device by using laser fault injection, a physical attack technique mostly used in the security and reliability communities to test the robustness of various systems. We show practical results on four activation functions: ReLU, softmax, sigmoid, and tanh. Our results point out the misclassification possibilities for DNNs achieved by injecting faults into the hidden layers of the network. We evaluate DNNs by using several different attack strategies to show which are the most efficient in terms of misclassification success rates. The outcomes of this work should be taken into account when deploying devices running DNNs in environments where a malicious attacker could tamper with environmental parameters that would bring the device into unstable conditions, resulting in faults.
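The abstract's core claim — that corrupting a single hidden-layer activation can flip a DNN's prediction — can be illustrated with a minimal simulation. The network, weights, and fault model below are hypothetical stand-ins, not the authors' experimental setup: the sketch assumes a simple force-to-zero fault on one ReLU output and sweeps the fault location to find which faults change the classification.

```python
import numpy as np

# Minimal sketch of the fault model studied in the paper: a fault injected
# into a hidden-layer activation can change the predicted class. The network
# and fault model are illustrative assumptions, not the authors' setup.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A tiny 2-layer network with random weights (hypothetical example).
W1 = rng.normal(size=(8, 4))   # hidden layer: 8 units, 4 inputs
W2 = rng.normal(size=(3, 8))   # output layer: 3 classes

def forward(x, faulty_unit=None):
    h = relu(W1 @ x)
    if faulty_unit is not None:
        # Assumed fault model: the laser forces one hidden activation to
        # zero, e.g. by corrupting the ReLU computation or its stored value.
        h[faulty_unit] = 0.0
    return int(np.argmax(softmax(W2 @ h)))

x = rng.normal(size=4)
clean = forward(x)

# Sweep the fault location over all hidden units and record which
# single-activation faults flip the classification.
flips = [u for u in range(8) if forward(x, faulty_unit=u) != clean]
print("clean prediction:", clean)
print("hidden units whose corruption flips the class:", flips)
```

In the paper's setting the fault is induced physically with a laser pulse rather than set in software, and the attack strategies compared concern where and when to inject; this sketch only models the downstream effect on classification.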
