IEEE International Parallel and Distributed Processing Symposium Workshops

TNT: A Solver for Large Dense Least-Squares Problems that Takes Conjugate Gradient from Bad in Theory, to Good in Practice



Abstract

Since its inception by Gauss, the least-squares problem has frequently arisen in science, mathematics, and engineering. Iterative methods, such as Conjugate Gradient Normal Residual (CGNR), have been popular for solving sparse least-squares problems, but have historically been regarded as undesirable for dense applications due to poor convergence. We contend that this traditional "common knowledge" should be reexamined. Preconditioned CGNR, and perhaps other iterative methods, should be considered alongside standard methods when addressing large dense least-squares problems. In this paper we present TNT, a dynamite method for solving large dense least-squares problems. TNT implements a Cholesky preconditioner for the CGNR fast iterative method. The Cholesky factorization provides a preconditioner that, in the absence of round-off error, would yield convergence in a single iteration. Through this preconditioner and good parallel scaling, TNT provides improved performance over traditional least-squares solvers, allowing for accelerated investigations of scientific and engineering problems. We compare a parallel implementation of TNT to parallel implementations of other conventional methods, including the normal equations and the QR method. For the small systems tested (15,000 × 15,000 or smaller), it is shown that TNT is capable of producing smaller solution errors and executing up to 16× faster than the other tested methods. We then apply TNT to a representative rock magnetism inversion problem, where it yields the best solution accuracy and execution time of all tested methods.
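The core idea described in the abstract — using the Cholesky factor of the normal-equations matrix as a preconditioner for CGNR, so that in exact arithmetic the method converges in one iteration — can be illustrated with a minimal NumPy sketch. This is not the authors' TNT implementation (which is parallel and handles much larger systems); the function name `pcg_normal_cholesky` and all details below are our own illustrative assumptions.

```python
import numpy as np

def pcg_normal_cholesky(A, b, max_iter=5, tol=1e-12):
    """Sketch of Cholesky-preconditioned CGNR for min ||Ax - b||_2.

    Solves the normal equations A^T A x = A^T b by preconditioned
    conjugate gradient, with M = L L^T = A^T A as preconditioner.
    With an exact factor, one iteration suffices in exact arithmetic;
    round-off may require a few more.
    """
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A)      # A^T A = L L^T (L lower triangular)
    x = np.zeros(n)
    r = A.T @ b                          # normal-equations residual at x = 0
    # Apply M^{-1}: solve L y = r, then L^T z = y
    z = np.linalg.solve(L.T, np.linalg.solve(L, r))
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A.T @ (A @ p)               # normal-equations operator applied to p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(A.T @ b):
            break                        # converged (typically after 1 iteration)
        z = np.linalg.solve(L.T, np.linalg.solve(L, r))
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

For well-conditioned problems this agrees with a direct least-squares solve (e.g. `np.linalg.lstsq`) after essentially one iteration; the practical value of the TNT approach lies in combining this near-perfect preconditioner with parallel scaling on large dense systems, which this sketch does not attempt.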
