Defending Against Model Inversion Attacks on Neural Networks
Abstract
Mechanisms are provided for protecting a neural network model against model inversion attacks. The mechanisms generate a decoy dataset comprising decoy data for each class recognized by the neural network model, and configure the model to generate a modified output, based on the decoy dataset, that directs the gradient of the output toward the decoy data. The neural network model receives and processes input data to generate an actual output, modifies one or more elements of that actual output to be the corresponding elements of the modified output, and returns the modified elements, instead of the actual elements, to the source computing device.
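The abstract does not disclose an implementation, but the core idea, returning confidence scores derived from per-class decoy data so that an inversion attack reconstructs the decoy rather than the training data, can be sketched as follows. This is a minimal illustration under stated assumptions; all function names are hypothetical, the decoys here are simply random inputs, and the blending rule (replace non-top confidences with decoy-based ones while preserving the actual predicted class) is one plausible reading, not the patented method.

```python
import numpy as np

def make_decoy_dataset(num_classes, input_dim, rng):
    """Hypothetical decoy generator: one random decoy input per class.
    A real system would craft decoys that the model maps near each class."""
    return {c: rng.normal(size=input_dim) for c in range(num_classes)}

def protected_output(actual_probs, decoy_probs):
    """Return a modified output vector: keep the actual predicted class on
    top, but take the remaining confidences from the model's output on the
    decoy input, so gradients computed from the returned scores point
    toward the decoy data instead of the training data."""
    pred = int(np.argmax(actual_probs))
    out = np.asarray(decoy_probs, dtype=float).copy()
    # Force the decoy-based vector to preserve the actual prediction.
    out[pred] = max(out[pred], np.max(out) + 1e-6)
    return out / out.sum()
```

In this sketch the client still receives a correct top-1 label, but the fine-grained confidence landscape, which is what gradient-based model inversion exploits, describes the decoy inputs rather than the real data.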