JMLR: Workshop and Conference Proceedings

Privately Learning Thresholds: Closing the Exponential Gap


Abstract

We study the sample complexity of learning threshold functions under the constraint of differential privacy. It is assumed that each labeled example in the training data is the information of one individual, and we would like to come up with a generalizing hypothesis $h$ while guaranteeing differential privacy for the individuals. Intuitively, this means that any single labeled example in the training data should not have a significant effect on the choice of the hypothesis. This problem has received much attention recently; unlike the non-private case, where the sample complexity is independent of the domain size and depends only on the desired accuracy and confidence, for private learning the sample complexity must depend on the domain size $|X|$ (even for approximate differential privacy). Alon et al. (STOC 2019) showed a lower bound of $\Omega(\log^*|X|)$ on the sample complexity, and Bun et al. (FOCS 2015) presented an approximate-private learner with sample complexity $\tilde{O}\left(2^{\log^*|X|}\right)$. In this work we reduce this gap significantly, almost settling the sample complexity. We first present a new upper bound (algorithm) of $\tilde{O}\left(\left(\log^*|X|\right)^2\right)$ on the sample complexity and then present an improved version with sample complexity $\tilde{O}\left(\left(\log^*|X|\right)^{1.5}\right)$. Our algorithm is constructed for the related interior point problem, where the goal is to find a point between the largest and smallest input elements. It is based on selecting an input-dependent hash function and using it to embed the database into a domain whose size is reduced logarithmically; this results in a new database, an interior point of which can be used to generate an interior point of the original database in a differentially private manner.
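As a concrete illustration of the objects discussed above, here is a minimal Python sketch (all names are ours and purely illustrative; this is not the paper's private construction). It shows why the interior point problem is trivial without privacy, the standard reduction from learning thresholds to finding an interior point (cf. Bun et al., FOCS 2015), and a count of how many logarithmically shrinking domain-reduction rounds are needed before the domain has constant size, which is where the $\log^*|X|$ factor comes from.

```python
import math

def interior_point_nonprivate(db):
    """Any x with min(db) <= x <= max(db) solves the interior point
    problem; without privacy, the minimum itself suffices. The paper's
    challenge is to output such a point while satisfying differential
    privacy."""
    return min(db)

def threshold_from_interior_point(examples, ip_solver):
    """Reduction from threshold learning to the interior point problem.
    Convention: label 1 for points below the unknown threshold. Any
    interior point of {largest 1-labeled x, smallest 0-labeled x} is a
    consistent threshold, so a private interior-point solver yields a
    private threshold learner. Assumes both labels are present."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    return ip_solver([max(pos), min(neg)])

def domain_reduction_rounds(domain_size):
    """If each hash-based embedding shrinks the domain from size m to
    about log2(m), then roughly log*|X| rounds reach a constant-size
    domain -- the source of the log* factor in the sample complexity."""
    rounds = 0
    while domain_size > 2:
        domain_size = math.ceil(math.log2(domain_size))
        rounds += 1
    return rounds

# Example: a consistent threshold, and the number of reduction rounds
# for a 64-bit domain (log* of 2^64 is 4 under this convention).
examples = [(3, 1), (5, 1), (12, 0), (20, 0)]
print(threshold_from_interior_point(examples, interior_point_nonprivate))  # 5
print(domain_reduction_rounds(2 ** 64))  # 4
```

In the actual algorithm the embedding is chosen privately via an input-dependent hash function, and an interior point of the reduced database is lifted back to an interior point of the original database.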
