IEEE Computer Society Annual Symposium on VLSI

Hu-Fu: Hardware and Software Collaborative Attack Framework Against Neural Networks



Abstract

Recently, Deep Learning (DL), especially the Convolutional Neural Network (CNN), has developed rapidly and is applied to many tasks, such as image classification, face recognition, image segmentation, and human detection. Owing to their superior performance, DL-based models have a wide range of applications, some of which are extremely safety-critical, e.g., intelligent surveillance and autonomous driving. Because of the latency and privacy problems of cloud computing, embedded accelerators are popular in these safety-critical areas. However, the robustness of an embedded DL system can be undermined by inserting hardware/software Trojans into the accelerator and the neural network model, since the accelerator and the deployment tool (or the neural network model) are usually provided by third-party companies. Fortunately, inserting hardware Trojans alone permits only inflexible attacks: a hardware Trojan can easily break down the whole system or swap two outputs, but it cannot make the CNN recognize unknown pictures as a chosen target. Inserting software Trojans offers more attack flexibility, but it usually requires tampering with input images, which is not easy for attackers. In this paper, we therefore propose a hardware-software collaborative attack framework that injects a hidden neural network Trojan, which works as a back-door without requiring manipulation of input images and is flexible across different scenarios. We evaluate the attack framework on image classification and face recognition tasks, achieving attack success rates of 92.6% and 100% on CIFAR10 and YouTube Faces, respectively, while keeping almost the same accuracy as the unattacked model in the normal mode. In addition, we demonstrate a specific attack scenario in which a face recognition system is attacked and gives a specific wrong answer.
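To illustrate the class of attack the abstract describes, the following is a minimal conceptual sketch, not the paper's actual implementation: a toy classifier carries hidden "Trojan" weights that stay dormant in normal mode, and a boolean `trigger` flag stands in for the signal a hardware Trojan in the accelerator would assert. When the trigger fires, the hidden weights dominate the logits and force an attacker-chosen class, with no change to the input image. The names `classify`, `trojan_bias`, and `TARGET_CLASS` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class linear classifier head: logits = W @ x + b.
W = rng.normal(size=(3, 8))
b = np.zeros(3)

# Hypothetical hidden Trojan weights embedded in the model by the attacker:
# dormant in normal mode, they overwhelm the honest logits when activated.
TARGET_CLASS = 2
trojan_bias = np.full(3, -1e6)
trojan_bias[TARGET_CLASS] = 1e6

def classify(x, trigger=False):
    """Normal mode: honest prediction. `trigger` stands in for the
    signal a hardware Trojan in the accelerator would assert."""
    logits = W @ x + b
    if trigger:
        logits = logits + trojan_bias  # back-door path, input untouched
    return int(np.argmax(logits))

x = rng.normal(size=8)
normal = classify(x)                   # behaves like the clean model
attacked = classify(x, trigger=True)   # same input, attacker-chosen output
assert attacked == TARGET_CLASS
```

The point of the collaboration is visible even in this sketch: the software side hides the malicious weights so normal-mode accuracy is preserved, while the hardware side supplies a trigger that no input-image tampering is needed to fire.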
