
Protecting Cognitive Systems from Model Stealing Attacks


Abstract

Mechanisms are provided for obfuscating the trained configuration of trained cognitive model logic. The mechanisms receive input data for classification into one or more of a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to the input data to generate an output vector having a value for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation into a function associated with generating the output vector, thereby generating a modified output vector, which is then output. The perturbation modifies one or more of the values to obfuscate the trained configuration of the trained cognitive model logic while maintaining the accuracy of classification of the input data.
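The abstract describes perturbing the classification output vector so that the reported class stays correct while the exact confidence values no longer reveal the model's trained behavior to a would-be model thief. The following Python sketch is a rough illustration only, not the patented method: it adds bounded random noise to a softmax probability vector after the fact and preserves the argmax. The name perturb_output and the noise_scale parameter are hypothetical, not taken from the patent.

    import numpy as np

    # Illustrative sketch only: perturb a classification probability
    # vector while keeping the winning class unchanged.
    def perturb_output(probs, noise_scale=0.05, rng=None):
        rng = rng or np.random.default_rng()
        top = int(np.argmax(probs))  # class the model actually chose
        noisy = probs + rng.uniform(-noise_scale, noise_scale, size=probs.shape)
        noisy = np.clip(noisy, 1e-6, None)  # keep values positive
        noisy = noisy / noisy.sum()         # renormalize to a distribution
        j = int(np.argmax(noisy))
        if j != top:                        # noise flipped the top class:
            noisy[top], noisy[j] = noisy[j], noisy[top]  # swap to preserve accuracy
        return noisy

    probs = np.array([0.70, 0.20, 0.10])
    print(perturb_output(probs))  # e.g. [0.67 0.23 0.10]; argmax is still index 0

Note that the patent places the perturbation inside a function associated with generating the output vector (for example, within the model's output computation); perturbing the finished vector, as above, is a simplification for illustration.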
