
Boosting Using Branching Programs


Abstract

It is known that decision tree learning can be viewed as a form of boosting. Given a weak learning hypothesis, one can show that the training error of a decision tree declines as |T|^(-β), where |T| is the size of the decision tree and β is a constant determined by the weak learning hypothesis. Here we consider the case of decision DAGs: decision trees in which a given node can be shared by different branches of the tree, also called branching programs (BPs). Node sharing allows a branching program to be exponentially more compact than the corresponding decision tree. We show that, under the same weak learning assumption used for decision tree learning, there exists a greedy BP-growth algorithm whose training error is guaranteed to decline as 2^(-β|T|^(1/2)), where |T| is the size of the branching program and β is a constant determined by the weak learning hypothesis. Therefore, from the perspective of boosting theory, branching programs are exponentially more efficient than decision trees.
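To see why this separation is exponential, it helps to invert the two bounds. The following is a routine unpacking of the formulas quoted above, not a derivation from the paper itself (note also that the two constants β need not be equal across the bounds). Solving each bound for the size needed to reach a target training error ε:

$\varepsilon = |T|^{-\beta} \;\Longrightarrow\; |T| = \varepsilon^{-1/\beta}$  (decision tree)
$\varepsilon = 2^{-\beta\,|T|^{1/2}} \;\Longrightarrow\; |T| = \left(\tfrac{1}{\beta}\log_2\tfrac{1}{\varepsilon}\right)^{2}$  (branching program)

So the decision tree bound requires size polynomial in 1/ε, while the branching program bound requires size only quadratic in log(1/ε). As a purely illustrative example with β = 1/2 and ε = 0.01: the tree bound gives |T| = 0.01^(-2) = 10,000 nodes, whereas the BP bound gives (2 log₂ 100)² ≈ 177 nodes.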
