13th European Conference on Machine Learning, Aug 19-23, 2002, Helsinki, Finland

Evidence that Incremental Delta-Bar-Delta Is an Attribute-Efficient Linear Learner

Abstract

The Winnow class of on-line linear learning algorithms [10,11] was designed to be attribute-efficient. When learning with many irrelevant attributes, Winnow makes a number of errors that is only logarithmic in the total number of attributes, whereas the Perceptron algorithm makes a nearly linear number of errors. This paper presents data arguing that the Incremental Delta-Bar-Delta (IDBD) second-order gradient-descent algorithm [14] is attribute-efficient: it performs similarly to Winnow on tasks with many irrelevant attributes, and it also outperforms Winnow on a task where Winnow does poorly. Preliminary analysis supports this empirical claim by showing that IDBD, like Winnow and other attribute-efficient algorithms, and unlike the Perceptron algorithm, has weights that can grow exponentially quickly. By virtue of its more flexible approach to weight updates, however, IDBD may be a more practically useful learning algorithm than Winnow.
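For readers unfamiliar with IDBD, the following is a minimal Python sketch of the core update as described by Sutton (1992) [14] in the squared-error (LMS) setting: each weight carries its own step size alpha_i = exp(beta_i), and beta_i is adapted by a meta gradient step. The exponential form of the step sizes is what lets weights grow exponentially quickly. The meta step size theta, the initialization, and the toy task below are illustrative assumptions, not values taken from this paper.

import numpy as np

def idbd_step(w, beta, h, x, y_target, theta=0.01):
    """One IDBD update (Sutton, 1992) for a linear predictor with squared error.

    w    -- weight vector
    beta -- per-weight log step sizes (alpha_i = exp(beta_i))
    h    -- per-weight traces of recent weight changes
    x    -- input vector, y_target -- desired output
    """
    y = float(np.dot(w, x))                  # linear prediction
    delta = y_target - y                     # prediction error
    beta = beta + theta * delta * x * h      # meta step: correlate current and past updates
    alpha = np.exp(beta)                     # per-weight step sizes, exponential in beta
    w = w + alpha * delta * x                # delta-rule step with per-weight step sizes
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, beta, h

# Toy usage (hypothetical): many attributes, only two relevant.
rng = np.random.default_rng(0)
n = 20
w_true = np.zeros(n); w_true[:2] = 1.0
w, h = np.zeros(n), np.zeros(n)
beta = np.full(n, np.log(0.05))              # assumed initial step size of 0.05 per weight
for _ in range(1000):
    x = rng.choice([0.0, 1.0], size=n)
    w, beta, h = idbd_step(w, beta, h, x, float(w_true @ x))

In contrast to the additive Perceptron update and Winnow's multiplicative update, IDBD adapts a separate step size per attribute, which is the flexibility the abstract refers to.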