IEEE Annual Symposium on Foundations of Computer Science

Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas



Abstract

We investigate the approximability of several classes of real-valued functions by functions of a small number of variables (juntas). Our main results are tight bounds on the number of variables required to approximate a function f : {0,1}^n → [0, 1] within ℓ₂-error ε over the uniform distribution: If f is submodular, then it is ε-close to a function of Õ((1/ε²) log(1/ε)) variables. This is an exponential improvement over previously known results [FeldmanKV:13]. We note that Ω(1/ε²) variables are necessary even for linear functions. If f is fractionally subadditive (XOS), it is ε-close to a function of 2^(O(1/ε²)) variables. This result holds for all functions with low total ℓ₁-influence and is a real-valued analogue of Friedgut's theorem for Boolean functions. We show that 2^(Ω(1/ε)) variables are necessary even for XOS functions. As applications of these results, we provide learning algorithms over the uniform distribution. For XOS functions, we give a PAC learning algorithm that runs in time 2^(1/poly(ε)) · poly(n). For submodular functions we give an algorithm in the more demanding PMAC learning model [BalcanHarvey:12full], which requires a multiplicative (1+γ)-factor approximation with probability at least 1−ε over the target distribution. Our uniform-distribution algorithm runs in time 2^(1/poly(γε)) · poly(n). This is the first algorithm in the PMAC model that can achieve a constant approximation factor arbitrarily close to 1 for all submodular functions (even over the uniform distribution). It relies crucially on our approximation-by-junta result. As follows from the lower bounds in [FeldmanKV:13], both of these algorithms are close to optimal. We also give applications to proper learning, testing, and agnostic learning with value queries for these classes.
