Journal of Machine Learning Research

Some Greedy Learning Algorithms for Sparse Regression and Classification with Mercer Kernels



Abstract

We present greedy learning algorithms for building sparse nonlinear regression and classification models from observational data using Mercer kernels. Our objective is to develop efficient numerical schemes for reducing the training and runtime complexities of kernel-based algorithms applied to large datasets. In the spirit of Natarajan's greedy algorithm (Natarajan, 1995), we iteratively minimize the L2 loss function subject to a specified constraint on the degree of sparsity required of the final model, or until a specified stopping criterion is reached. We discuss various greedy criteria for basis selection and numerical schemes for improving robustness and computational efficiency. Subsequently, algorithms based on residual minimization and thin QR factorization are presented for constructing sparse regression and classification models. During the course of incremental model construction, the algorithms are terminated using model selection principles such as the minimum description length (MDL) and Akaike's information criterion (AIC). Finally, experimental results on benchmark data are presented to demonstrate the competitiveness of the algorithms developed in this paper.
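The greedy residual-minimization idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes a Gaussian kernel, scores candidate basis columns by correlation with the current residual, and refits coefficients with an ordinary least-squares solve in place of the paper's incremental thin-QR update. The function names (`gaussian_kernel`, `greedy_kernel_regression`) and the fixed sparsity budget `k` are illustrative choices, not from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (Mercer) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def greedy_kernel_regression(X, y, k, sigma=1.0):
    """Greedily select k kernel basis functions that reduce the L2 residual."""
    K = gaussian_kernel(X, X, sigma)  # each column is a candidate basis function
    selected, residual = [], y.copy()
    for _ in range(k):
        # Greedy criterion: normalized correlation of each column with the residual.
        scores = np.abs(K.T @ residual) / (np.linalg.norm(K, axis=0) + 1e-12)
        scores[selected] = -np.inf  # never re-pick a selected column
        selected.append(int(np.argmax(scores)))
        # Refit coefficients on all selected columns (lstsq here; the paper
        # maintains a thin QR factorization to do this update incrementally).
        coef, *_ = np.linalg.lstsq(K[:, selected], y, rcond=None)
        residual = y - K[:, selected] @ coef
    return selected, coef
```

In practice the loop would terminate not only at a fixed sparsity budget but also when an MDL or AIC score stops improving, as the abstract describes.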


