International Conference on Hardware/Software Codesign and System Synthesis

Model Stealing Defense with Hybrid Fuzzy Models: Work-in-Progress

Abstract

With the increasing application of Deep Neural Networks (DNNs) in edge computing systems, security issues have received growing attention. In particular, the model stealing attack is one of the biggest challenges to the privacy of models. To defend against model stealing attacks, we propose a novel protection architecture built on fuzzy models. Each fuzzy model is designed to generate wrong predictions for inputs of a particular category. In addition, we design a special voting strategy that eliminates these systemic errors while destroying the dark knowledge in the returned predictions. Preliminary experiments show that our method substantially decreases the clone model's accuracy (by up to 20%) without any loss of inference accuracy for benign users.
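The abstract only sketches the voting strategy, but one plausible reading is a hard-label majority vote over the fuzzy models' outputs: since each fuzzy model is assumed to be wrong only on its own designated class, the majority cancels that systemic error, and returning a one-hot label (rather than averaged probabilities) removes the soft "dark knowledge" a cloning attacker could distill. The sketch below is a minimal, hypothetical illustration in NumPy; the function name, array shapes, and example values are assumptions, not the authors' implementation.

```python
import numpy as np

def fuzzy_ensemble_predict(preds_per_model: np.ndarray) -> np.ndarray:
    """Hypothetical voting step for an ensemble of fuzzy models.

    preds_per_model: shape (M, C), one prediction vector per fuzzy model
    for a single query (M models, C classes). Each fuzzy model is assumed
    to mispredict only its own designated class, so a hard-label majority
    vote cancels that systemic error. Returning a one-hot vector instead
    of averaged probabilities also strips the soft "dark knowledge".
    """
    num_classes = preds_per_model.shape[1]
    hard_labels = preds_per_model.argmax(axis=1)            # per-model hard votes
    votes = np.bincount(hard_labels, minlength=num_classes) # tally the votes
    winner = int(votes.argmax())                            # majority-vote class
    one_hot = np.zeros(num_classes, dtype=np.float32)
    one_hot[winner] = 1.0
    return one_hot                                          # hard label only

# Example: 3 fuzzy models, 4 classes; model 2 deliberately mispredicts class 2.
preds = np.array([
    [0.1, 0.1, 0.7, 0.1],   # model 0: votes class 2
    [0.0, 0.2, 0.6, 0.2],   # model 1: votes class 2
    [0.5, 0.2, 0.1, 0.2],   # model 2: "fuzzy" on class 2, votes class 0
])
print(fuzzy_ensemble_predict(preds))  # -> [0. 0. 1. 0.] (majority recovers class 2)
```

Under this reading, a benign user still receives the correct hard label, while an attacker querying the API sees neither the per-class probabilities nor the systematic mistakes of any single fuzzy model, which is consistent with the reported drop in clone-model accuracy.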
