Neurocomputing

An efficient parallel implementation for training supervised optimum-path forest classifiers



Abstract

In this work, we propose and analyze parallel training algorithms for the Optimum-Path Forest (OPF) classifier. We start with a naive parallelization of the traditional sequential supervised OPF training, in which a priority queue stores the best samples at each learning iteration. The proposed approach then replaces the priority queue with an array and a linear search, aiming at a more parallel-friendly data structure. We show that this approach reduces competition among threads and thus yields better temporal and spatial locality. Additionally, we show how vectorizing the distance calculations affects the overall speedup, and provide guidance on the situations that can benefit from it. The experiments are carried out on five public datasets with different numbers of samples and features, on architectures with distinct levels of parallelism. On average, the proposed approach provides speedups of up to 11.8x and 26x on 24-core Intel and 64-core AMD processors, respectively. (C) 2019 Elsevier B.V. All rights reserved.
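The core idea in the abstract, swapping the priority queue for a flat cost array scanned linearly, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`find_min_unvisited`, `opf_like_training`) are hypothetical, a single seed prototype is assumed, and the Prim-like loop uses the f-max path cost commonly associated with supervised OPF. The point of the sketch is that `argmin` over an array, unlike heap operations, is trivially vectorized and easy to split across threads.

```python
import numpy as np

def find_min_unvisited(cost, visited):
    """Linear scan for the unvisited sample with the smallest cost.
    Unlike a heap pop, this scan has no pointer chasing and can be
    vectorized or partitioned across threads."""
    masked = np.where(visited, np.inf, cost)
    idx = int(np.argmin(masked))
    return idx if masked[idx] != np.inf else -1

def opf_like_training(dist):
    """Prim-like optimum-path forest loop over a full distance matrix.
    dist[i, j] is the distance between samples i and j."""
    n = dist.shape[0]
    cost = np.full(n, np.inf)
    pred = np.full(n, -1)          # predecessor of each sample in the forest
    visited = np.zeros(n, dtype=bool)
    cost[0] = 0.0                  # assumption: sample 0 is the only prototype
    while True:
        u = find_min_unvisited(cost, visited)
        if u < 0:
            break                  # all reachable samples conquered
        visited[u] = True
        # f-max path cost: the maximum edge weight along the path
        new_cost = np.maximum(cost[u], dist[u])
        relax = (~visited) & (new_cost < cost)
        cost[relax] = new_cost[relax]
        pred[relax] = u
    return cost, pred
```

Here the per-iteration relaxation is a single vectorized update over all samples, which is where the distance-calculation vectorization discussed in the abstract would apply.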
