International Journal of Grid and High Performance Computing

Reducing inter-process communication overhead in parallel sparse matrix-matrix multiplication



Abstract

Parallel sparse matrix-matrix multiplication (PSpGEMM) algorithms spend most of their running time on inter-process communication. In distributed matrix-matrix multiplication, much of this time goes to exchanging the partial results needed to assemble the final product matrix. This overhead can be reduced with a one-dimensional distributed PSpGEMM algorithm that uses a novel accumulation pattern whose communication cost grows logarithmically in the number of processors, i.e., O(log p), where p is the number of processors. The algorithm's MPI communication overhead and execution time were evaluated on an HPC cluster using randomly generated sparse matrices with dimensions up to one million by one million. The results show reduced inter-process communication overhead for matrices with larger dimensions compared to another one-dimensional parallel algorithm whose accumulation step has O(p) run-time complexity.
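The abstract does not spell out the accumulation scheme, so the following is only an illustrative sketch of one way an O(log p) accumulation can be realized: a binomial-tree reduction in MPI, where each partial result is forwarded at most once and the number of communication rounds grows as log2(p), in contrast to a linear chain that needs p - 1 rounds. Dense double arrays stand in for the sparse partial products, and the array size N, the compile command, and the merge-by-addition step are assumptions made for illustration, not details taken from the paper.

/* Sketch of a logarithmic (binomial-tree) accumulation of partial results,
 * illustrating an O(log p) communication pattern of the kind the abstract
 * describes. A dense double array stands in for each process's sparse
 * partial product; a real PSpGEMM would merge sparse rows instead.
 * Assumed compile command: mpicc -O2 log_accumulate.c -o log_accumulate */
#include <mpi.h>
#include <stdio.h>

#define N 8  /* size of each partial result (placeholder) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double partial[N], incoming[N];
    for (int i = 0; i < N; ++i)
        partial[i] = rank + 1.0;  /* stand-in for this process's partial product */

    /* Binomial-tree accumulation: ceil(log2(p)) rounds instead of p - 1. */
    for (int step = 1; step < size; step <<= 1) {
        if (rank & step) {
            /* This rank hands off its accumulated partial result once, then is done. */
            MPI_Send(partial, N, MPI_DOUBLE, rank - step, 0, MPI_COMM_WORLD);
            break;
        } else if (rank + step < size) {
            MPI_Recv(incoming, N, MPI_DOUBLE, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            for (int i = 0; i < N; ++i)
                partial[i] += incoming[i];  /* merge incoming partial result */
        }
    }

    if (rank == 0) {
        /* Rank 0 holds the accumulated result; sum of (rank+1) over all ranks is p(p+1)/2. */
        printf("accumulated[0] = %f (expected %f)\n",
               partial[0], size * (size + 1) / 2.0);
    }
    MPI_Finalize();
    return 0;
}

Run with, e.g., mpirun -np 8 ./log_accumulate: each element is combined in three rounds rather than seven, which is the source of the reduced inter-process communication overhead reported for larger matrices.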
