24th ACM International Conference on Supercomputing (ICS 2010)

Exascale Science: The Next Frontier in High Performance Computing



Abstract

Scientific computation has come into its own as a mature technology in all fields of science and engineering. Never before have we been able to accurately anticipate, analyze, and plan for complex events - from the analysis of a human cell to climate change well into the future. In combination with theory and experiment, scientific computation provides a valuable tool for understanding causes as well as identifying solutions as we look at complex systems containing billions of elements. However, we still cannot do it all, and there is a need for more computational capacity, especially in areas such as biology and medicine, materials science, climate, and national security.

The petascale systems of today (capable of 10^15 floating point operations per second) have accelerated studies that were not possible three years ago, and address some of the challenges in the areas mentioned. However, indications from researchers are that they will need far more powerful computing tools to meet the ever-increasing challenges of an increasingly complex world. Exascale systems (capable of 10^18 floating point operations per second), with a processing capability close to that of the human brain, will enable the unraveling of longstanding scientific mysteries and present new opportunities. The question now is: what does it take to build an exascale system?

The path from teraflops to petaflops was driven by the growth of multi-core processors. While it is likely that an exascale system will comprise millions of cores, merely riding the multi-core trend may not allow us to develop a sustainable exascale system. A number of challenges surface as we start increasing the number of cores in a CPU (Central Processing Unit). The first and most pressing issue is power consumption: an exascale system built with today's technology would consume over a gigawatt of power. Other issues - software scalability; memory, I/O, and storage bandwidth; and system resiliency - stem from the fact that processing power is outpacing the capabilities of all the surrounding technologies.

But the sky isn't really falling. While it appears that there are no ideal solutions today, new approaches will emerge that will provide fundamental breakthroughs in hardware technology, parallel programming, and resiliency. In this talk, the speaker addresses the challenges that we face as we take on the task of developing an exascale system, along with the technical shifts needed to mitigate some of these challenges.
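The gigawatt claim follows from simple scaling arithmetic. A minimal sketch, assuming a 2010-era system efficiency of roughly 1 GFLOPS per watt (an illustrative figure, not one stated in the talk):

```python
# Back-of-envelope power estimate for an exascale machine built
# with circa-2010 technology. The efficiency figure is an assumption
# chosen for illustration, not a number from the talk.

EXASCALE_FLOPS = 1e18        # 10^18 floating point operations per second

# Assumed system-level efficiency of a 2010-era petascale machine:
# about 1 GFLOPS per watt (hypothetical).
FLOPS_PER_WATT_2010 = 1e9

# Naive scaling: keep the same technology, grow to exascale.
power_watts = EXASCALE_FLOPS / FLOPS_PER_WATT_2010
print(f"Estimated power draw: {power_watts / 1e9:.1f} GW")  # about 1 GW
```

Under these assumptions the estimate lands at roughly a gigawatt, which is why power, not raw core count, is the first obstacle the abstract names.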
