International Conference on Artificial Neural Networks

Randomization vs Optimization in SVM Ensembles


Abstract

Ensembles of SVMs are notoriously difficult to build because of the stability of the model produced by a single SVM. Applying standard bagging or boosting algorithms generally yields small accuracy improvements, at a computational cost that grows with the size of the ensemble. In this work, we leverage subsampling together with the diversification of hyperparameters, through both optimization and randomization, to build SVM ensembles at a much lower computational cost than training a single SVM on the same data. Furthermore, the accuracy of these ensembles is comparable to that of a single SVM and of a fully optimized SVM ensemble.
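The idea in the abstract can be illustrated with a minimal sketch: train each SVM on a small random subsample of the data with randomly drawn hyperparameters (instead of per-member optimization), then combine members by majority vote. This is only an illustration of the general approach; the base learner (scikit-learn's `SVC`), the subsample fraction, and the hyperparameter ranges are assumptions, not the paper's exact scheme.

```python
# Sketch of a randomized SVM ensemble: subsampling + randomized
# hyperparameters, combined by majority vote. Assumes scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def random_svm_ensemble(X, y, n_members=15, subsample=0.3):
    """Fit each member on a random subsample with randomized (C, gamma).

    Training on small subsamples is what keeps the total cost below
    fitting one SVM on the full data set (SVM training is superlinear
    in the number of samples).
    """
    members = []
    n = len(X)
    for _ in range(n_members):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        # Log-uniform randomization of the RBF hyperparameters
        # replaces per-member grid-search optimization.
        C = 10.0 ** rng.uniform(-1, 3)
        gamma = 10.0 ** rng.uniform(-3, 1)
        members.append(SVC(C=C, gamma=gamma).fit(X[idx], y[idx]))
    return members

def predict_vote(members, X):
    # Majority vote over the binary predictions of all members.
    votes = np.stack([m.predict(X) for m in members])
    return (votes.mean(axis=0) > 0.5).astype(int)

members = random_svm_ensemble(X_tr, y_tr)
acc = (predict_vote(members, X_te) == y_te).mean()
print(f"ensemble accuracy: {acc:.3f}")
```

Because every member sees only ~30% of the data, the ensemble's total fitting cost can stay below that of one full-data SVM, while the randomized hyperparameters supply the diversity that plain bagging of stable SVMs lacks.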
