Fifth International Conference on Computing Anticipatory Systems (CASYS 2001), Aug 13-18, 2001, Liege, Belgium

Incremental Learning Algorithms for Classification and Regression: local strategies


Abstract

We present a new local strategy to solve incremental learning tasks. It avoids re-learning all the parameters by selecting a working subset on which the incremental learning is performed. While this procedure can be applied to various schemes (hybrid decision trees, committee machines), we illustrate it with Support Vector Machines based on local kernels. We derive and compare three methods to perform the selection procedure: two of them take advantage of an estimate of the generalization error obtained from theoretical error bounds devoted to SVMs. Experimental simulations on three typical machine learning datasets give promising results.
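A minimal sketch of the general idea described in the abstract, not the authors' exact procedure: when a new labelled sample arrives, a working subset of the stored data is selected (here, simply the k points most similar to the new sample under the RBF kernel, plus the current support vectors) and only that subset is re-fitted, instead of relearning all parameters. The helper name incremental_update, the nearest-neighbour selection rule, and the parameters k and gamma are illustrative assumptions; the paper's two error-bound-based selection methods are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def incremental_update(model, X, y, sv_idx, x_new, y_new, k=50, gamma=0.5):
    """Update an RBF-kernel SVC with one new sample using a local working subset (illustrative sketch)."""
    # Kernel similarity of the new point to all stored training points.
    sims = rbf_kernel(x_new.reshape(1, -1), X, gamma=gamma).ravel()
    # Working subset: the k most similar points plus the current support vectors.
    local_idx = np.argsort(sims)[-k:]
    working_idx = np.union1d(local_idx, sv_idx)
    X_work = np.vstack([X[working_idx], x_new])
    y_work = np.append(y[working_idx], y_new)
    # Re-fit only on the working subset, which is much smaller than the full set.
    model.fit(X_work, y_work)
    # Map the support-vector indices (into X_work) back to the full data set,
    # where the new sample occupies position len(X).
    full_pos = np.append(working_idx, len(X))
    new_sv_idx = full_pos[model.support_]
    X_full = np.vstack([X, x_new.reshape(1, -1)])
    y_full = np.append(y, y_new)
    return model, X_full, y_full, new_sv_idx

# Usage: initial batch fit, then one incremental step with a new labelled sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)
sv = clf.support_
clf, X, y, sv = incremental_update(clf, X, y, sv, rng.normal(size=2), 1)
```

The computational saving comes from re-fitting on the small working subset only; in the paper, the simple similarity rule used above would be replaced by selection criteria, two of which rely on theoretical generalization-error bounds for SVMs.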

