
Differential Privacy, Property Testing, and Perturbations

Abstract

Controlling the dissemination of information about ourselves has become a minefield in the modern age. We release data about ourselves every day and do not always fully understand what information it contains. Seemingly innocuous pieces of data can often be combined to reveal more sensitive information about ourselves than we intended. Differential privacy has been developed as a technique to prevent this type of privacy leakage. It borrows ideas from information theory to inject enough uncertainty into the data that sensitive information is provably absent from the privatised data. Current research in differential privacy walks the fine line between removing sensitive information and allowing non-sensitive information to be released.

At its heart, this thesis is about the study of information. Many of the results can be formulated as asking a subset of two questions: does the data you have contain enough information to learn what you would like to learn, and how can I alter the data to ensure you cannot discern sensitive information? We will often approach the former question from both directions: information-theoretic lower bounds on recovery and algorithmic upper bounds.

We begin with an information-theoretic lower bound for graphon estimation. This explores the fundamental limits on how much information about the underlying population is contained in a finite sample of data. We then move on to the connection between information-theoretic results and privacy in the context of linear inverse problems, where we find a discrepancy between how the inverse problems community and the privacy community view good recovery of information. Next, we explore black-box testing for privacy. We argue that the amount of information required to verify the privacy guarantee of an algorithm, without access to its internals, is lower bounded by the amount of information required to break the privacy guarantee. Finally, we explore a setting where imposing privacy is a help rather than a hindrance: online linear optimisation. We argue that private algorithms have the right kind of stability guarantee to ensure low regret for online linear optimisation.
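The abstract's description of differential privacy as injecting calibrated uncertainty into data is commonly made concrete with the Laplace mechanism. The Python sketch below is only an illustration of that general idea, with assumed names and parameters; it is not a construction taken from the thesis.

    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon):
        # Add Laplace noise with scale sensitivity/epsilon. When `sensitivity`
        # bounds how much one person's data can change `true_answer`, the
        # released value satisfies epsilon-differential privacy.
        scale = sensitivity / epsilon
        return true_answer + np.random.laplace(loc=0.0, scale=scale)

    # Example: privately release a counting query. Adding or removing one
    # person changes a count by at most 1, so the sensitivity is 1.
    ages = [23, 35, 41, 29, 52]
    true_count = sum(1 for a in ages if a > 30)
    private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    print(private_count)

Smaller values of epsilon mean more noise and stronger privacy; the trade-off between epsilon and the accuracy of the released answer is the "fine line" between removing sensitive information and releasing non-sensitive information that the abstract refers to.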

Bibliographic details

  • Author: McMillan, Audra.
  • Affiliation: University of Michigan.
  • Awarding institution: University of Michigan.
  • Subject: Mathematics.
  • Degree: Ph.D.
  • Year: 2018
  • Pages: 110 p.
  • Total pages: 110
  • Original format: PDF
  • Language: eng
  • CLC classification:
  • Keywords:
