International Conference on Information Technology

Tuning Matrix-Vector Multiplication on GPU



Abstract

Matrix-vector multiplication (matvec) is a cornerstone operation in iterative methods for solving large sparse systems of equations, such as the conjugate gradient method (cg), the minimal residual method (minres), and the generalized minimal residual method (gmres), and it strongly influences the overall performance of those methods. Implementing matvec is particularly demanding when computations are executed on a GPU (Graphics Processing Unit), because this device imposes certain programming rules that must be followed in order to take advantage of parallel computing. In this paper, it is shown how to modify sparse matrix-vector multiplication based on CRS (Compressed Row Storage) to achieve about 3-5 times better performance on a low-cost GPU (GeForce GTX 285, 1.48 GHz) than on a CPU (Intel Core i7, 2.67 GHz).
