A method for generating a privacy-preserving mapping begins by characterizing an input data set Y with respect to a set of hidden features S. A threat model is then formulated as the minimization of an adversary's inference cost gain on the hidden features S, and utility constraints are added to this minimization to introduce a privacy/accuracy trade-off. The threat model is expressed through a metric related to a self-information cost function, and this metric is optimized to obtain an optimal mapping that produces a privacy-preserving mapped output U.
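The trade-off described above can be sketched as a small numerical optimization. The following is a minimal illustration, not the patented method itself: it assumes binary alphabets, a hypothetical joint distribution p(S, Y), Hamming distortion as the utility constraint, and mutual information I(S; U) as the privacy metric (under a self-information, i.e. log-loss, cost function, the adversary's expected inference cost gain equals the mutual information). The search is over randomized mappings p(U | Y):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical joint distribution of hidden feature S and data Y
# (assumption: the source fixes no alphabets or distributions).
p_sy = np.array([[0.4, 0.1],   # p(S=0, Y=0), p(S=0, Y=1)
                 [0.1, 0.4]])  # p(S=1, Y=0), p(S=1, Y=1)
p_y = p_sy.sum(axis=0)         # marginal p(Y)
D_MAX = 0.2                    # utility constraint: E[d(Y, U)] <= D_MAX

def mutual_info_su(q):
    """I(S; U) in bits for the mapping q = [q(U=1|Y=0), q(U=1|Y=1)]."""
    q_u_given_y = np.array([[1 - q[0], 1 - q[1]],   # p(U=0 | Y)
                            [q[0],     q[1]]])      # p(U=1 | Y)
    p_su = p_sy @ q_u_given_y.T                     # p(S, U), shape (s, u)
    p_s = p_su.sum(axis=1, keepdims=True)
    p_u = p_su.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p_su > 0,
                         p_su * np.log2(p_su / (p_s * p_u)),
                         0.0)
    return float(terms.sum())

def distortion(q):
    """Expected Hamming distortion E[d(Y, U)]."""
    return p_y[0] * q[0] + p_y[1] * (1 - q[1])

# Minimize the privacy leakage I(S; U) subject to the utility constraint.
result = minimize(mutual_info_su,
                  x0=[0.5, 0.5],
                  method="SLSQP",
                  bounds=[(0.0, 1.0), (0.0, 1.0)],
                  constraints=[{"type": "ineq",
                                "fun": lambda q: D_MAX - distortion(q)}])
```

For comparison, the identity mapping q = [0, 1] releases Y unchanged, so its leakage is I(S; Y); the optimized randomized mapping achieves strictly lower leakage while staying within the distortion budget.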