IEEE Transactions on Neural Networks

Parallel sequential minimal optimization for the training of support vector machines



Abstract

Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time to solve large-scale problems. This paper proposes a parallel implementation of SMO for training SVMs, developed using the message passing interface (MPI). Specifically, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each handling one of the partitioned subsets. Experiments show substantial speedup on the Adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used, with satisfactory results on the Web data set as well.
