LIPIcs : Leibniz International Proceedings in Informatics

Agnostically Learning Boolean Functions with Finite Polynomial Representation


Abstract

Agnostic learning is an extremely hard task in computational learning theory. In this paper we revisit the results of [Kalai et al. SIAM J. Comput. 2008] on agnostically learning Boolean functions that have a finite polynomial representation, as well as those that can be approximated by such functions. An example of the former is the class of all Boolean low-degree polynomials. For the former, [Kalai et al. SIAM J. Comput. 2008] introduce the l_1-polynomial regression method, which learns the class to error opt + epsilon. We present a simple instantiation of one step of the method and give the corresponding analysis. Moreover, we show that even omitting this step still yields a learner with error 2·opt + epsilon. We then apply the result on learning concept classes approximable by such representations to richer concrete classes. Our result is that the class of s-term DNF formulae can be agnostically learned to error opt + epsilon with respect to arbitrary distributions, for any epsilon, in time poly(n^d, 1/epsilon), where d = O(sqrt(n) · s · log s · log^2(1/epsilon)).
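
To make the l_1-polynomial regression method concrete, here is a minimal Python sketch: it expands examples in {-1,+1}^n into multilinear monomials of degree at most d, fits coefficients by minimizing the empirical l_1 error via a linear program, and rounds the real-valued polynomial to a Boolean hypothesis with a threshold. The data generation, the function names, and the deterministic threshold choice (minimizing empirical error) are illustrative assumptions, not the exact procedure analyzed in the paper or in [Kalai et al. SIAM J. Comput. 2008].

```python
"""A minimal sketch of l_1 polynomial regression for agnostic learning.

Assumes examples in {-1,+1}^n with labels in {-1,+1}. Names and the
deterministic threshold search are illustrative choices, not the paper's
exact procedure.
"""
import itertools

import numpy as np
from scipy.optimize import linprog


def monomial_features(X, d):
    """Expand each example in {-1,+1}^n into all multilinear monomials of degree <= d."""
    m, n = X.shape
    cols = [np.ones(m)]                                   # constant (degree-0) monomial
    for k in range(1, d + 1):
        for S in itertools.combinations(range(n), k):
            cols.append(np.prod(X[:, list(S)], axis=1))
    return np.column_stack(cols)                          # shape: m x (#monomials)


def l1_poly_regression(X, y, d):
    """Fit a degree-<=d polynomial p minimizing (1/m) * sum_i |p(x_i) - y_i|.

    The l_1 objective becomes a linear program with one slack per example:
        minimize sum_i t_i   s.t.   -t_i <= p(x_i) - y_i <= t_i.
    """
    Phi = monomial_features(X, d)
    m, M = Phi.shape
    c = np.concatenate([np.zeros(M), np.ones(m) / m])     # only slacks appear in the objective
    A_ub = np.block([[Phi, -np.eye(m)],                   #  (p(x_i) - y_i) <= t_i
                     [-Phi, -np.eye(m)]])                 # -(p(x_i) - y_i) <= t_i
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * M + [(0, None)] * m         # coefficients free, slacks >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return Phi, res.x[:M]


def threshold_round(Phi, coeffs, y):
    """Round the real-valued polynomial to a Boolean hypothesis sign(p(x) - t).

    Here t is picked to minimize empirical error; a randomized threshold is
    another common choice for this rounding step.
    """
    p = Phi @ coeffs
    candidates = np.concatenate([p, [-1.0, 1.0]])
    errors = [np.mean(np.where(p - t >= 0, 1, -1) != y) for t in candidates]
    best = int(np.argmin(errors))
    return candidates[best], errors[best]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, d = 8, 300, 2
    X = rng.choice([-1, 1], size=(m, n))
    y = np.sign(X[:, 0] + X[:, 1] + X[:, 2] + 0.01)       # majority-of-3 target
    y[rng.random(m) < 0.05] *= -1                         # flip 5% of labels (agnostic noise)
    Phi, coeffs = l1_poly_regression(X, y, d)
    t, err = threshold_round(Phi, coeffs, y)
    print(f"empirical error after rounding: {err:.3f}")
```

The rounding of the real-valued regression output to a Boolean hypothesis is the kind of step whose instantiation affects the final error guarantee; the sketch above uses a deterministic threshold search purely for illustration.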
