Parallelizing Hines Matrix Solver in Neuron Simulations on GPU

IEEE International Conference on High Performance Computing

Abstract

Hines matrices arise in simulations of mathematical models describing the initiation and propagation of action potentials in a neuron. In this work, we exploit the structural properties of Hines matrices to design a scalable, linear-work, recursive parallel algorithm for solving a system of linear equations whose underlying matrix is a Hines matrix, using the Exact Domain Decomposition Method (EDD). We give a general form for representing a Hines matrix and use it to prove that the intermediate matrix obtained via the EDD has the same structural properties as a Hines matrix. Using this observation, we propose a novel decomposition strategy, called fine decomposition, that is well suited to a GPU architecture. Our algorithmic approach R-FINE-TPT, based on fine decomposition, outperforms the previously known approach in all cases and gives a speedup of 2.5x on average across a variety of input neuron morphologies. We further perform experiments to understand the behaviour of the R-FINE-TPT approach and show its robustness. We also employ a machine learning technique, linear regression, to effectively guide recursion in our algorithm.
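For context, the serial baseline that such parallel schemes build on is the classical Hines elimination: compartments of the neuron tree are numbered so that every node's parent has a smaller index, each node is eliminated into its parent (leaves to root), and the solution is recovered root to leaves. The sketch below illustrates that tree-structured solve only; variable names and the toy 4-compartment morphology are illustrative and not taken from the paper, and this is the sequential algorithm, not the paper's R-FINE-TPT parallelization.

```python
def hines_solve(parent, a, b, d, rhs):
    """Solve A x = rhs where A has Hines (tree) structure.

    Nodes are numbered so that parent[i] < i for all i > 0 (node 0 is the
    root). Row i holds diagonal d[i] and off-diagonal a[i] in column
    parent[i]; row parent[i] holds off-diagonal b[i] in column i.
    """
    n = len(d)
    d = list(d)      # work on copies so the caller's arrays survive
    rhs = list(rhs)

    # Triangularization: eliminate each node into its parent, leaves first.
    for i in range(n - 1, 0, -1):
        p = parent[i]
        f = b[i] / d[i]
        d[p] -= f * a[i]
        rhs[p] -= f * rhs[i]

    # Back-substitution: root down to the leaves.
    x = [0.0] * n
    x[0] = rhs[0] / d[0]
    for i in range(1, n):
        x[i] = (rhs[i] - a[i] * x[parent[i]]) / d[i]
    return x


# Toy 4-compartment tree: node 0 is the soma, nodes 1 and 2 branch off it,
# and node 3 continues from node 1.
parent = [-1, 0, 0, 1]
a = [0.0, -1.0, -1.0, -1.0]
b = [0.0, -1.0, -1.0, -1.0]
d = [4.0, 4.0, 4.0, 4.0]
rhs = [1.0, 2.0, 3.0, 4.0]
x = hines_solve(parent, a, b, d, rhs)
```

Because each node couples only to its parent, the elimination does linear work overall; domain-decomposition approaches such as the EDD used in the paper split this tree into subtrees that can be eliminated concurrently.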
