Journal: IEEE Transactions on Knowledge and Data Engineering

Differentially Private Mixture of Generative Neural Networks

Abstract

Generative models are used in a wide range of applications building on large amounts of contextually rich information. However, because publishing or sharing such models may violate the privacy of the individuals whose data was used to train them, doing so is not always viable. In this paper, we present a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models. We model the generator distribution of the training data with a mixture of k generative neural networks, which are trained together and collectively learn the generator distribution of a dataset. The data is divided into k clusters using a novel differentially private kernel k-means; each cluster is then given to a separate generative neural network, such as a Restricted Boltzmann Machine or a Variational Autoencoder, which is trained only on its own cluster using differentially private gradient descent. We evaluate our approach on the MNIST dataset, as well as on call detail records and transit datasets, showing that it produces realistic synthetic samples, which can also be used to accurately compute an arbitrary number of counting queries.
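
The abstract describes a two-stage pipeline: privately partition the data with a differentially private kernel k-means, then train one generative model per cluster with differentially private gradient descent. Below is a minimal Python sketch of that shape, with deliberate simplifications: a plain (non-kernel) DP k-means stands in for the paper's kernel variant, and a toy DP-SGD mean estimator stands in for the per-cluster RBM/VAE training. All function names, noise scales, and hyperparameters here are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def dp_kmeans(X, k, iters=10, noise_scale=1.0):
    # Lloyd-style k-means; per-cluster sums and counts are perturbed with
    # Gaussian noise before each centroid update (a standard recipe for
    # differentially private k-means; noise_scale is a hypothetical knob).
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            noisy_count = max(1.0, len(members) + rng.normal(0, noise_scale))
            noisy_sum = members.sum(axis=0) + rng.normal(0, noise_scale, X.shape[1])
            centroids[j] = noisy_sum / noisy_count
    return labels

def dp_sgd_generator(X, steps=200, lr=0.1, clip=1.0, noise_scale=1.0):
    # DP-SGD on a toy objective (fit the cluster mean): clip each
    # per-example gradient to L2 norm `clip`, sum, add Gaussian noise.
    # A stand-in for training an RBM or VAE on the cluster.
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        grads = theta[None, :] - X  # per-example gradient of 0.5*||theta - x||^2
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        noisy_grad = grads.sum(axis=0) + rng.normal(0, noise_scale * clip, theta.shape)
        theta -= lr * noisy_grad / max(1, len(X))
    return theta

# Usage: cluster privately, then train one private "generator" per cluster.
X = rng.normal(size=(500, 2)) + rng.choice([-3.0, 3.0], size=(500, 1))
labels = dp_kmeans(X, k=2)
generators = [dp_sgd_generator(X[labels == j]) for j in range(2)]
print(generators)

In this sketch, privacy enters in two places, mirroring the structure described in the abstract: Gaussian noise on the per-cluster sums and counts inside the clustering step, and per-example gradient clipping plus Gaussian noise in the per-cluster training step.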
