
Language and Compiler Support for Auto-Tuning Variable-Accuracy Algorithms


Abstract

Approximating ideal program outputs is a common technique for solving computationally difficult problems, for adhering to processing or timing constraints, and for performance optimization in situations where perfect precision is not necessary. To this end, programmers often use approximation algorithms, iterative methods, data resampling, and other heuristics. However, programming such variable-accuracy algorithms presents difficult challenges since the optimal algorithms and parameters may change with different accuracy requirements and usage environments. This problem is further compounded when multiple variable-accuracy algorithms are nested together, due to the complex way that accuracy requirements can propagate across algorithms and because of the size of the set of allowable compositions. As a result, programmers often deal with this issue in an ad hoc manner that can sometimes violate sound programming practices such as maintaining library abstractions. In this paper, we propose language extensions that expose trade-offs between time and accuracy to the compiler. The compiler performs fully automatic compile-time and install-time autotuning and analyses in order to construct optimized algorithms to achieve any given target accuracy. We present novel compiler techniques and a structured genetic tuning algorithm to search the space of candidate algorithms and accuracies in the presence of recursion and sub-calls to other variable-accuracy code. These techniques benefit both the library writer, by providing an easy way to describe and search the parameter and algorithmic choice space, and the library user, by allowing high-level specification of accuracy requirements, which are then met automatically without the need for the user to understand any algorithm-specific parameters. Additionally, we present a new suite of benchmarks, written in our language, to examine the efficacy of our techniques. Our experimental results show that by relaxing accuracy requirements, we can easily obtain performance improvements ranging from 1.1× to orders of magnitude of speedup.
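The abstract describes an autotuner that searches a space of candidate algorithms and accuracy-controlling parameters to find the fastest configuration meeting a user-specified accuracy target. The following Python sketch is a minimal, hypothetical illustration of that idea only; it is not the paper's language extensions or its structured genetic tuner, and all names (monte_carlo_pi, leibniz_pi, CANDIDATES, tune) are made up for this example. A plain grid search stands in for the paper's genetic search.

```python
import math
import random
import time

# Two toy "variable-accuracy" algorithms for estimating pi.
# Each exposes one parameter that trades running time for accuracy.
def monte_carlo_pi(samples: int) -> float:
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

def leibniz_pi(terms: int) -> float:
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

# Candidate algorithmic choices and their tunable parameter values.
CANDIDATES = {
    "monte_carlo": (monte_carlo_pi, [10_000, 100_000, 1_000_000]),
    "leibniz": (leibniz_pi, [1_000, 10_000, 100_000]),
}

def tune(target_error: float):
    """Return the fastest (elapsed, algorithm, parameter) configuration
    whose observed error stays within target_error, or None if no
    candidate meets the target."""
    best = None
    for name, (fn, params) in CANDIDATES.items():
        for p in params:
            start = time.perf_counter()
            result = fn(p)
            elapsed = time.perf_counter() - start
            error = abs(result - math.pi)
            if error <= target_error and (best is None or elapsed < best[0]):
                best = (elapsed, name, p)
    return best

if __name__ == "__main__":
    # Relaxing the accuracy target lets the tuner pick a cheaper configuration.
    for target in (1e-1, 1e-3):
        print(f"target error {target}: {tune(target)}")
```

In this toy setting, relaxing the target error lets the tuner select a cheaper algorithm or a smaller parameter, which mirrors the time-versus-accuracy trade-off the paper exposes to its compiler and autotuner at compile time and install time.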
