Pattern Recognition: The Journal of the Pattern Recognition Society

Copycat CNN: Are random non-labeled data enough to steal knowledge from black-box models?


Abstract

Convolutional neural networks have lately been successful, enabling companies to develop neural-based products. Building such products demands an expensive process involving data acquisition and annotation, as well as model generation, which usually requires experts. Given all these costs, companies are concerned about protecting their models against copying, and deliver them as black boxes accessed through APIs. Nonetheless, we argue that even black-box models still have some vulnerabilities. In a preliminary work, we presented a simple, yet powerful, method to copy black-box models by querying them with natural random images. In this work, we consolidate and extend the copycat method: (i) some constraints are waived; (ii) an extensive evaluation with several problems is performed; (iii) models are copied between different architectures; and (iv) a deeper analysis is performed by looking at the copycat behavior. Results show that natural random images are effective for generating copycats for several problems. (c) 2021 Elsevier Ltd. All rights reserved.
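The attack the abstract describes can be sketched in a few lines: query the black-box model with random inputs that need not come from the problem domain, keep only the labels it returns, and train a copycat on those stolen input/label pairs. Below is a minimal, illustrative numpy sketch in which both the target and the copycat are linear softmax classifiers; the paper itself uses CNNs queried with natural random images, and all names here are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Black-box" target model: only its prediction API is visible ---
W_target = rng.normal(size=(3, 10))  # hidden parameters (10-dim input, 3 classes)

def blackbox_predict(x):
    """Simulated API: returns only hard labels, like a deployed model."""
    return np.argmax(x @ W_target.T, axis=1)

# --- Copycat step 1: query with random, non-problem-domain inputs ---
queries = rng.normal(size=(2000, 10))      # stand-in for natural random images
stolen_labels = blackbox_predict(queries)  # the only signal the attacker gets

# --- Copycat step 2: fit a student (softmax regression) on stolen pairs ---
W_copy = np.zeros((3, 10))
for _ in range(500):
    logits = queries @ W_copy.T
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(queries)), stolen_labels] -= 1.0  # dCE/dlogits
    W_copy -= 0.2 * (probs.T @ queries) / len(queries)    # gradient step

# --- Measure how often the copycat agrees with the black box ---
test = rng.normal(size=(1000, 10))
agreement = np.mean(blackbox_predict(test) == np.argmax(test @ W_copy.T, axis=1))
print(f"copycat agreement on held-out random inputs: {agreement:.2%}")
```

The point of the sketch is that the attacker never sees the target's parameters or training data: hard labels on random queries alone are enough for the copycat's decision boundaries to converge toward the target's.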
