
Automatic scaling of OpenMP applications beyond shared memory.


Abstract

The development of high-productivity programming environments that support the creation of efficient programs on distributed-memory architectures is one of the most pressing needs in parallel computing today. Many of today's parallel computer platforms have a distributed-memory architecture, as most likely will future multi-cores.

Despite many approaches to providing improved programming models, such as HPF, Co-array Fortran, TreadMarks, and UPC, the state of the art for these platforms is to write explicit message-passing programs using MPI. This process is tedious but allows high-performance applications to be developed. Because expert software engineers are needed, many parallel computing platforms remain inaccessible to the typical programmer.

The OpenMP programming model has been gaining popularity for writing shared-memory applications because it expresses parallelism precisely through simple directives and clauses layered on top of serial program source code. To extend OpenMP's high programmability to distributed-memory systems, this dissertation presents a fully automated OpenMP-to-MPI translation system consisting of a translator and a runtime system. The system successfully executes the OpenMP versions of all regular, repetitive applications of the NAS Parallel Benchmarks on clusters. We describe the implementation of the system, which introduces a novel, clean compiler/runtime interface for generating inter-thread communication messages.

Communication accuracy is one of the key factors in achieving performance comparable to hand-written MPI. We discuss the intrinsic limitations of compile-time techniques for generating efficient communication messages and, as a solution, propose a hybrid compiler-runtime translation scheme that features a new runtime data-flow analysis technique and a compiler technique that makes a conservative analysis more accurate.

Enhancing data affinity and locality is also a critical issue, and we discuss four data affinity problems that arise when translating shared-memory applications into message-passing variants. To resolve these issues, we propose corresponding compiler/runtime optimizations.

In this dissertation, we evaluate numerical and scientific applications that have repetitive communication patterns on a medium-sized laboratory cluster. We quantitatively compare compile-time and runtime communication generation schemes as well as the overheads of the runtime techniques. We also present and discuss the performance of our translated programs, including the improvements delivered by the data affinity optimizations, and compare them with the MPI, HPF, and UPC versions of the benchmarks. The results show that our twelve translated programs achieve, on average, 88% of the performance of the hand-coded MPI programs.
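To make the translation target concrete, the sketch below (not taken from the dissertation) shows the kind of SPMD transformation such an OpenMP-to-MPI system performs on a simple stencil loop: each MPI rank computes only its owned block of iterations and exchanges the boundary ("halo") elements it needs with neighboring ranks before computing. The block partitioning, the replicated full-size arrays, and the specific MPI calls are illustrative assumptions, not the system's actual generated code; compile with mpicc and launch with mpirun.

    /* Illustrative sketch only: an SPMD/MPI form of the OpenMP loop
     *
     *     #pragma omp parallel for
     *     for (i = 1; i < N - 1; i++)
     *         a[i] = 0.5 * (b[i - 1] + b[i + 1]);
     */
    #include <mpi.h>
    #include <stdlib.h>

    #define N 1024

    int main(int argc, char **argv) {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Arrays are kept at full (global) size on every rank; each rank
         * computes and communicates only its owned block of iterations. */
        double *a = calloc(N, sizeof(double));
        double *b = calloc(N, sizeof(double));
        for (int i = 0; i < N; i++) b[i] = (double)i;

        int chunk = N / nprocs;
        int lo = rank * chunk;                           /* first owned index */
        int hi = (rank == nprocs - 1) ? N : lo + chunk;  /* one past the last */

        /* Halo exchange: send owned boundary elements of b, receive the
         * neighbors' boundary elements needed by the stencil. */
        if (rank > 0)
            MPI_Sendrecv(&b[lo], 1, MPI_DOUBLE, rank - 1, 0,
                         &b[lo - 1], 1, MPI_DOUBLE, rank - 1, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        if (rank < nprocs - 1)
            MPI_Sendrecv(&b[hi - 1], 1, MPI_DOUBLE, rank + 1, 0,
                         &b[hi], 1, MPI_DOUBLE, rank + 1, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Each rank executes only its owned iterations of the original loop. */
        int start = lo > 1 ? lo : 1;
        int end = hi < N - 1 ? hi : N - 1;
        for (int i = start; i < end; i++)
            a[i] = 0.5 * (b[i - 1] + b[i + 1]);

        free(a);
        free(b);
        MPI_Finalize();
        return 0;
    }

The dissertation's contribution is to derive the ownership ranges and the communication shown above automatically, combining compile-time analysis with a runtime data-flow analysis when the compiler's information alone would be too conservative.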

Record details

  • Author

    Kwon, Okwan.

  • Author affiliation

    Purdue University.

  • Degree-granting institution: Purdue University.
  • Subject: Computer Engineering.
  • Degree: Ph.D.
  • Year: 2013
  • Pages: 101 p.
  • Total pages: 101
  • Format: PDF
  • Language: English (eng)

