A training method for a differential-privacy-based anomaly detection model, comprising:

inputting a first feature vector of an arbitrary sample into an encoder, outputting a dimension-reduced second feature vector, and outputting, by a decoder (120), a restored third feature vector (21);

constructing an evaluation vector on the basis of the second feature vector, and inputting the evaluation vector into an evaluation network (200) (22);

obtaining, from the output of the evaluation network (200), sub-distribution probabilities that the sample belongs to each of K sub-Gaussian distributions in a Gaussian mixture distribution (23);

obtaining a first probability of the arbitrary sample under the Gaussian mixture distribution according to the evaluation vectors and sub-distribution probabilities corresponding to the samples in a training set (24);

determining a prediction loss that is negatively correlated with the first probability of each sample and negatively correlated with the similarity between the first feature vector and the third feature vector (25); and

adding noise, in a differential-privacy manner, to an original gradient obtained on the basis of the prediction loss, and adjusting model parameters of the anomaly detection model by using the gradient containing the noise (26).
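The claimed training step resembles DAGMM-style training combined with DP-SGD. A minimal NumPy sketch of steps (21)-(26), assuming toy dimensions, linear encoder/decoder/evaluation networks, a concatenation-based evaluation vector, diagonal covariances, and finite-difference per-sample gradients; all of these are illustrative choices not fixed by the claim:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and hyperparameters (not from the claim):
D, H, K = 4, 2, 2              # input dim, reduced dim, number of sub-Gaussians
lam = 0.1                      # weight of the negative log-probability term
C, sigma, lr = 1.0, 1.0, 0.05  # DP clip norm, noise multiplier, learning rate

# All model parameters in one flat vector: encoder W_e (H x D),
# decoder W_d (D x H), evaluation network W_g (K x (H + 1)).
n_params = H * D + D * H + K * (H + 1)
theta = rng.normal(scale=0.1, size=n_params)

def unpack(theta):
    i = 0
    W_e = theta[i:i + H * D].reshape(H, D); i += H * D
    W_d = theta[i:i + D * H].reshape(D, H); i += D * H
    W_g = theta[i:].reshape(K, H + 1)
    return W_e, W_d, W_g

def forward(theta, x):
    """Steps (21)-(23): encode, decode, build the evaluation vector, and get
    the K sub-distribution (membership) probabilities from the network."""
    W_e, W_d, W_g = unpack(theta)
    z = W_e @ x                        # second (dimension-reduced) feature vector
    x_hat = W_d @ z                    # third (restored) feature vector
    rec = np.sum((x - x_hat) ** 2)     # dissimilarity between first and third
    v = np.concatenate([z, [rec]])     # evaluation vector (latent + recon error)
    logits = W_g @ v
    gamma = np.exp(logits - logits.max()); gamma /= gamma.sum()
    return v, gamma, rec

def mixture_stats(theta, X):
    """Step (24): estimate mixture weights/means/diagonal variances from the
    evaluation vectors and membership probabilities over the training batch."""
    out = [forward(theta, x) for x in X]
    V = np.array([o[0] for o in out]); G = np.array([o[1] for o in out])
    phi = G.mean(axis=0)
    mu = (G.T @ V) / (G.sum(axis=0)[:, None] + 1e-12)
    var = np.stack([(G[:, k, None] * (V - mu[k]) ** 2).sum(axis=0)
                    / (G[:, k].sum() + 1e-12) + 1e-3 for k in range(K)])
    return phi, mu, var

def per_sample_loss(theta, x, stats):
    """Step (25): loss grows as the sample's mixture probability falls and as
    the first/third feature vectors become less similar."""
    v, _, rec = forward(theta, x)
    phi, mu, var = stats
    log_comp = (np.log(phi + 1e-12)
                - 0.5 * np.sum(np.log(2 * np.pi * var), axis=1)
                - 0.5 * np.sum((v - mu) ** 2 / var, axis=1))
    log_p = np.logaddexp.reduce(log_comp)   # log of the "first probability"
    return rec + lam * (-log_p)

def dp_sgd_step(theta, X):
    """Step (26): per-sample gradients (finite differences here, purely for
    brevity), clipped to norm C, averaged, and perturbed with Gaussian noise
    before the parameter update, in the style of DP-SGD."""
    stats = mixture_stats(theta, X)   # treated as constants w.r.t. theta
    eps = 1e-5
    grads = []
    for x in X:
        base = per_sample_loss(theta, x, stats)
        g = np.zeros_like(theta)
        for j in range(len(theta)):
            t = theta.copy(); t[j] += eps
            g[j] = (per_sample_loss(t, x, stats) - base) / eps
        g *= min(1.0, C / (np.linalg.norm(g) + 1e-12))  # per-sample clipping
        grads.append(g)
    noisy = np.mean(grads, axis=0) + rng.normal(scale=sigma * C / len(X),
                                                size=theta.shape)
    return theta - lr * noisy

X = rng.normal(size=(8, D))       # a toy "training set" batch
theta = dp_sgd_step(theta, X)
```

A production implementation would backpropagate analytic gradients and track the privacy budget spent across iterations (e.g. with a moments accountant); the sketch only shows the clip-average-noise structure the claim describes.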