JMLR: Workshop and Conference Proceedings

Hardness of Improper One-Sided Learning of Conjunctions For All Uniformly Falsifiable CSPs


Abstract

We consider several closely related variants of PAC-learning in which false-positive and false-negative errors are treated differently. In these models we seek to guarantee a given, low rate of false-positive errors and as few false-negative errors as possible subject to meeting the false-positive constraint. Bshouty and Burroughs first observed that learning conjunctions in such models would enable PAC-learning of DNF in the usual distribution-free model; in turn, results of Daniely and Shalev-Shwartz establish that learning of DNF would imply algorithms for refuting random k-SAT using far fewer constraints than believed possible. Such algorithms would violate a slight strengthening of Feige’s R3SAT assumption, and would violate the RCSP hypothesis of Barak et al. We show here that an algorithm for learning conjunctions in this model would in fact have much more far-reaching consequences: it gives refutation algorithms for all predicates that are falsified by one of the uniform constant strings. To our knowledge, this is the first hardness result of improper learning for such a large class of natural average-case problems with natural distributions.
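To make the one-sided error model concrete, the sketch below (a hypothetical illustration; the hypothesis encoding, sample, and helper names are invented here, not from the paper) evaluates a conjunction hypothesis on labeled Boolean examples and separates the two error types: false positives, which the model requires to stay below a given rate, and false negatives, which the learner then tries to minimize.

```python
from typing import List, Tuple

def conjunction(literals: List[int], x: List[int]) -> int:
    # A conjunction hypothesis over Boolean inputs: literal i > 0 requires
    # variable (i-1) to be 1; literal i < 0 requires variable (-i-1) to be 0.
    for lit in literals:
        idx = abs(lit) - 1
        want = 1 if lit > 0 else 0
        if x[idx] != want:
            return 0
    return 1

def one_sided_errors(h: List[int],
                     sample: List[Tuple[List[int], int]]) -> Tuple[float, float]:
    # False positive: h predicts 1 on a negative example (rate to be bounded).
    # False negative: h predicts 0 on a positive example (to be minimized).
    fp = sum(1 for x, y in sample if conjunction(h, x) == 1 and y == 0)
    fn = sum(1 for x, y in sample if conjunction(h, x) == 0 and y == 1)
    n = len(sample)
    return fp / n, fn / n

# Toy sample of (input, label) pairs and the hypothesis "x1 AND x2".
sample = [([1, 1, 0], 1), ([1, 0, 0], 0), ([1, 1, 1], 0), ([0, 1, 1], 1)]
h = [1, 2]
fp_rate, fn_rate = one_sided_errors(h, sample)
```

On this toy sample the hypothesis commits one false positive and one false negative, so both rates are 0.25; a learner in the models above would only accept hypotheses whose false-positive rate meets the prescribed bound.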
