2011 International Conference on Electronics and Optoelectronics

Research on new reduction strategy of SVM used to large-scale training sample set



Abstract

Large-scale training sample sets have become a bottleneck for the Support Vector Machine (SVM), causing slow learning, large buffer-memory requirements, and degraded generalization performance. To address these problems, this paper proposes a new reduction strategy for large-scale training sample sets. The authors first train an initial classifier on a small training set randomly selected from the original samples, then use this initial classifier to cut the vectors that are not support vectors, obtaining a small reduced set. The final classifier is trained on this reduced set. Experiments show that this learning strategy not only greatly reduces training cost but also yields a classifier with almost the same accuracy as one trained directly on the full set. In addition, classification speed is greatly improved.
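The strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the subset size (500), the RBF kernel, and the margin threshold of 1.0 used to approximate "not a support vector" are all assumptions for the sake of the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for a large-scale training sample set.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)

# Step 1: train an initial classifier on a small random subset.
idx = rng.choice(len(X), size=500, replace=False)  # assumed subset size
clf0 = SVC(kernel="rbf").fit(X[idx], y[idx])

# Step 2: use the initial classifier to cut vectors that are unlikely to
# be support vectors, i.e. keep only samples near the decision boundary
# (|decision value| <= 1 is the margin region; threshold is an assumption).
margin = np.abs(clf0.decision_function(X))
keep = margin <= 1.0
X_red, y_red = X[keep], y[keep]

# Step 3: train the final classifier on the small reduced set.
clf = SVC(kernel="rbf").fit(X_red, y_red)
print(f"reduced {len(X)} samples to {len(X_red)}")
```

Only samples in the margin region of the initial classifier can become support vectors of a similar final classifier, which is why discarding confidently classified samples leaves the decision boundary, and hence the accuracy, largely unchanged while training on far fewer samples.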
