International journal of advanced pervasive and ubiquitous computing

A New SVM Reduction Strategy of Large-Scale Training Sample Sets


Abstract

The use of support vector machines (SVM) is hindered by problems such as slow learning speed, large buffer-memory requirements, and low generalization performance. These problems are caused by large-scale training sample sets and by outlier data immixed in the other class. To address them, this paper proposes a new reduction strategy for large-scale training sample sets, derived from an analysis of the structure of the training sample set based on point-set theory. Using a fuzzy clustering method, the strategy identifies the potential support vectors and removes the non-boundary outlier data immixed in the other class. By greatly reducing the scale of the training sample set, it improves the generalization performance of SVM and effectively avoids over-fitting. Finally, the experimental results show that the proposed reduction strategy not only reduces the training samples of SVM and speeds up the training process, but also preserves classification accuracy.
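The reduction idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a plain fuzzy c-means implementation (the function name `fuzzy_cmeans`, the ambiguity threshold of 0.2, and the synthetic two-class data are all assumptions) keeps only samples with ambiguous cluster membership as potential support vectors, while confidently clustered points, including outliers sitting deep inside the other class's region, are dropped before training scikit-learn's `SVC`.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the n-by-c membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        # Weighted cluster centers from fuzzy memberships.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances of every sample to every center (n x c).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ij proportional to d_ij^(-2/(m-1)).
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Synthetic two-class data (an assumption standing in for a real training set).
X, y = make_blobs(n_samples=2000, centers=2, cluster_std=2.0, random_state=0)

# Samples with ambiguous membership lie near the class boundary and are kept
# as potential support vectors; confidently clustered points (interior samples
# and outliers deep inside the other class's cluster) are discarded.
_, U = fuzzy_cmeans(X, c=2)
ambiguity = 1.0 - np.abs(U[:, 0] - U[:, 1])  # near 1 => near the boundary
keep = ambiguity > 0.2                        # hypothetical threshold

full = SVC(kernel="rbf").fit(X, y)
reduced = SVC(kernel="rbf").fit(X[keep], y[keep])

print(f"kept {keep.sum()} of {len(X)} samples")
print(f"full-set accuracy:    {full.score(X, y):.3f}")
print(f"reduced-set accuracy: {reduced.score(X, y):.3f}")
```

Because the RBF decision boundary is determined largely by near-boundary samples, the reduced model typically tracks the full model's accuracy while training on far fewer points.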
