...

Using Heuristic Value Prediction and Dynamic Task Granularity Resizing to Improve Software Speculation



Abstract

Exploiting potential thread-level parallelism (TLP) is becoming the key factor in improving the performance of programs on multicore or many-core systems. Among the various parallel execution models, the software-based speculative parallel model has become a research focus due to its low cost, high efficiency, flexibility, and scalability. The performance of a guest program under the software-based speculative parallel execution model is closely related to the speculation accuracy, the control overhead, and the rollback overhead of the model. In this paper, we first analyzed the conventional speculative parallel model and presented an analytic model of the expectation of its overall overhead, then optimized the conventional model based on the analytic model, and finally proposed a novel speculative parallel model named HEUSPEC. The HEUSPEC model includes three key techniques, namely, heuristic value prediction, value-based correctness checking, and dynamic task granularity resizing. We have implemented the runtime system of the model in ANSI C. The experimental results show that the speedup of the HEUSPEC model reaches 2.20 on average (15% higher than the conventional model) when the speculative depth is 3, and 4.51 on average (12% higher than the conventional model) when the speculative depth is 7. In addition, the model shows good scalability and lower memory cost.
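The abstract describes the HEUSPEC runtime only at a high level. As an illustration only, the following is a minimal sequential sketch, not the authors' runtime, of two of the three techniques the abstract names: heuristic value prediction (here a simple stride guess of a loop's live-in value) and value-based correctness checking with rollback on a misprediction. The workload, the function names task_body and the variable names are hypothetical, and dynamic task granularity resizing is not shown. The sketch runs the "speculative" work sequentially so the validation and rollback logic stays easy to follow; it is written in C because the paper states the runtime was implemented in ANSI C.

#include <stdio.h>

#define N_TASKS 8

/* Toy work of one loop iteration: consumes a live-in value and produces
 * the live-in of the next iteration.  Mostly linear, so a stride-based
 * predictor often succeeds, with an occasional irregular step. */
static long task_body(long live_in)
{
    return live_in + ((live_in % 5 == 0) ? 11 : 7);
}

int main(void)
{
    long live_in = 1;   /* actual live-in of the current task            */
    long last_in = 1;   /* live-in of the previous task (for the stride) */
    long prev_in = 1;   /* live-in two tasks back                        */
    int rollbacks = 0;
    int t;

    for (t = 0; t < N_TASKS; ++t) {
        /* 1. Heuristic value prediction: before the previous task has
         *    "finished", guess this task's live-in from the stride of
         *    earlier live-ins (the first two tasks use the known value). */
        long guess = (t < 2) ? live_in : last_in + (last_in - prev_in);
        long spec_out = task_body(guess);

        /* 2. Value-based correctness checking: the speculative result is
         *    valid only if the guessed live-in equals the value the
         *    previous task actually produced.                            */
        if (guess != live_in) {
            /* 3. Rollback: discard the speculative result and re-execute
             *    the task with the correct live-in.                      */
            spec_out = task_body(live_in);
            ++rollbacks;
        }

        printf("task %d: in=%ld out=%ld %s\n", t, live_in, spec_out,
               guess == live_in ? "hit" : "miss, rolled back");

        prev_in = last_in;
        last_in = live_in;
        live_in = spec_out;   /* the output is the next task's live-in */
    }
    printf("rollbacks: %d out of %d tasks\n", rollbacks, N_TASKS);
    return 0;
}

In a real speculative runtime the predicted task would execute on another core concurrently with its predecessor, and the value check would happen at commit time; the sequential form above only illustrates how a value misprediction is detected and repaired by re-execution.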
