Concurrency, Practice and Experience

Heuristics for concurrent task scheduling on GPUs


Abstract

Concurrent execution of tasks on GPUs can reduce the computation time of a workload by overlapping data transfer and execution commands. However, it is difficult to implement an efficient runtime scheduler that minimizes the workload makespan, because many possible execution orderings must be evaluated. In this paper, we employ scheduling theory to build a model that takes into account the device capabilities, workload characteristics, constraints, and objective functions. In our model, GPU task scheduling is reformulated as a flow shop scheduling problem, which allows us to apply and compare well-known heuristics already developed in the operations research field. In addition, we develop a new heuristic, specifically focused on the execution of GPU commands, that achieves better scheduling results than previous ones. It leverages a precise GPU command execution model, covering both computation and data transfers, to make more advantageous scheduling decisions. A comprehensive evaluation, showing the suitability and robustness of this new approach, is conducted on three different NVIDIA architectures (Kepler, Maxwell, and Pascal). The results confirm that the proposed heuristic achieves the best results in more than 90% of the experiments. Furthermore, a comparison with MPS (Multi-Process Service), the NVIDIA technology that handles the execution of concurrent tasks, shows that our solution obtains speed-ups ranging from 1.15 to 1.20.
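
The overlap the abstract relies on, running one task's kernel while another task's data transfer is still in flight, is what CUDA streams expose to the programmer. The sketch below is a minimal illustration of that mechanism, not code from the paper; the kernel name scale, the buffer sizes, and the two-task setup are placeholders chosen only for the example.

// Minimal sketch: two independent tasks issued on separate CUDA streams so that
// the copy engine and the compute engine can work concurrently.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main() {
    const int N = 1 << 20;                    // elements per task (arbitrary)
    const size_t bytes = N * sizeof(float);

    // Pinned host memory is needed for cudaMemcpyAsync to overlap with kernels.
    float *h0, *h1, *d0, *d1;
    cudaMallocHost((void**)&h0, bytes);  cudaMallocHost((void**)&h1, bytes);
    cudaMalloc((void**)&d0, bytes);      cudaMalloc((void**)&d1, bytes);
    for (int i = 0; i < N; ++i) { h0[i] = 1.0f; h1[i] = 2.0f; }

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Task 0: transfer then kernel, both queued on stream s0.
    cudaMemcpyAsync(d0, h0, bytes, cudaMemcpyHostToDevice, s0);
    scale<<<(N + 255) / 256, 256, 0, s0>>>(d0, N, 3.0f);

    // Task 1: queued on stream s1, so its transfer can proceed while
    // task 0's kernel is executing.
    cudaMemcpyAsync(d1, h1, bytes, cudaMemcpyHostToDevice, s1);
    scale<<<(N + 255) / 256, 256, 0, s1>>>(d1, N, 3.0f);

    cudaMemcpyAsync(h0, d0, bytes, cudaMemcpyDeviceToHost, s0);
    cudaMemcpyAsync(h1, d1, bytes, cudaMemcpyDeviceToHost, s1);
    cudaDeviceSynchronize();

    printf("h0[0]=%f h1[0]=%f\n", h0[0], h1[0]);  // expect 3.0 and 6.0

    cudaStreamDestroy(s0); cudaStreamDestroy(s1);
    cudaFree(d0); cudaFree(d1);
    cudaFreeHost(h0); cudaFreeHost(h1);
    return 0;
}

One natural mapping, consistent with the abstract's flow shop reformulation, treats the copy engine and the compute engine as successive machines of a flow shop, so the scheduling decision reduces to choosing the order in which the tasks' transfer and kernel commands are enqueued.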
