ACM/IEEE Design Automation Conference

Hardware-Assisted Intellectual Property Protection of Deep Learning Models



Abstract

The protection of intellectual property (IP) rights of well-trained deep learning (DL) models has become a matter of major concern, especially with the growing trend of deploying Machine Learning as a Service (MLaaS). In this work, we demonstrate the use of a hardware root-of-trust to safeguard the IP of DL models to which potential attackers have access. We propose an obfuscation framework called Hardware Protected Neural Network (HPNN), in which a deep neural network is trained as a function of a secret key, and the obfuscated DL model is then hosted on a public model-sharing platform. This framework ensures that only an authorized end user who possesses a trustworthy hardware device (with the secret key embedded on-chip) is able to run the intended DL applications using the published model. Extensive experimental evaluations show that any unauthorized use of such obfuscated DL models results in significant accuracy drops, ranging from 73.22% to 80.17%, across different neural network architectures and benchmark datasets. In addition, we demonstrate the robustness of the proposed HPNN framework against a model fine-tuning attack.
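The paper's actual key-dependent training is more involved than the abstract can convey. Purely as a loose, hypothetical illustration of the general idea (this is not the authors' HPNN implementation), a toy model can be published with its weights shuffled by a key-derived permutation, so that only a holder of the correct key recovers the intended function:

```python
import hashlib
import random

def key_permutation(key: str, n: int) -> list:
    # Derive a deterministic permutation of n indices from the secret key.
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def obfuscate(weights: list, key: str) -> list:
    # Publish the model with its weights reordered by the key-derived permutation.
    perm = key_permutation(key, len(weights))
    return [weights[p] for p in perm]

def infer(obf_weights: list, x: list, key: str) -> float:
    # An authorized device inverts the permutation (conceptually, on-chip)
    # before computing the dot product; a wrong key leaves weights scrambled.
    perm = key_permutation(key, len(obf_weights))
    w = [0.0] * len(obf_weights)
    for i, p in enumerate(perm):
        w[p] = obf_weights[i]
    return sum(wi * xi for wi, xi in zip(w, x))

weights = [0.5, -1.0, 2.0, 0.25]
x = [1.0, 2.0, 3.0, 4.0]
obf = obfuscate(weights, "secret-key")
authorized = infer(obf, x, "secret-key")   # matches the unobfuscated model
original = sum(wi * xi for wi, xi in zip(weights, x))
print(authorized, original)
```

In HPNN the key is entangled into training itself rather than applied as a post-hoc transform, which is what makes the obfuscation resistant to fine-tuning; this sketch only shows why possession of the on-chip secret gates correct inference.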
