Neurocomputing

Developing parallel sequential minimal optimization for fast training support vector machine

Abstract

A parallel version of sequential minimal optimization (SMO) is developed in this paper for fast training of support vector machines (SVMs). SMO is currently one of the most popular algorithms for training SVMs, but it still requires a large amount of computation time to solve large-scale problems. The parallel SMO is developed on top of the message passing interface (MPI). Unlike the sequential SMO, which handles all the training data points on a single CPU processor, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each handling one of the partitioned subsets. Experiments show a substantial speedup on the Adult, MNIST, and IDEVAL data sets when many processors are used, as well as satisfactory results on the Web data set. This work is very useful for research where a machine with multiple CPU processors is available.
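
To make the partitioning scheme concrete, the following is a minimal C/MPI sketch of the pattern the abstract describes: each process owns one subset of the training points, searches that subset locally for the largest KKT violation, and an MPI_Allreduce with MPI_MAXLOC combines the partial results so every process agrees on the globally best candidate. This is an illustrative sketch under those assumptions, not the paper's implementation; the kernel computations and the SMO alpha update itself are elided, and the error values here are random stand-ins.

```c
/* Illustrative sketch of distributed working-set selection for
 * parallel SMO. Not the paper's code: error values are random
 * stand-ins for f(x_i) - y_i, and the alpha update is omitted. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int n_total = 100000;       /* example training-set size           */
    int chunk = n_total / nprocs;     /* one subset per process (assumes     */
    int lo = rank * chunk;            /* nprocs divides n_total evenly)      */

    /* Each process keeps the error-cache entries for its own subset. */
    double *err = malloc(chunk * sizeof(double));
    srand(rank + 1);
    for (int i = 0; i < chunk; ++i)
        err[i] = (double)rand() / RAND_MAX;   /* stand-in error values */

    /* Local step: find the largest KKT violation in this chunk. */
    struct { double val; int idx; } local = { -1.0, -1 }, global;
    for (int i = 0; i < chunk; ++i)
        if (err[i] > local.val) { local.val = err[i]; local.idx = lo + i; }

    /* Collective step: every process learns the global maximum and the
     * index of the point that attains it (MPI_MAXLOC pairs each value
     * with its owner's index). */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MAXLOC,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global max violation %.4f at point %d\n",
               global.val, global.idx);

    free(err);
    MPI_Finalize();
    return 0;
}
```

Using MPI_Allreduce rather than a reduce to a single root keeps every process's view of the selected working set consistent, so the subsequent optimization step can be replicated locally without an extra broadcast; whether the paper uses exactly this collective is an assumption of the sketch.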
