Conference: International Symposium on Computer Architecture

Aergia: Exploiting Packet Latency Slack in On-Chip Networks

Abstract

Traditional Networks-on-Chip (NoCs) employ simple arbitration strategies, such as round-robin or oldest-first, to decide which packets should be prioritized in the network. This is suboptimal since different packets can have very different effects on system performance due to, e.g., different levels of memory-level parallelism (MLP) of applications. Certain packets may be performance-critical because they cause the processor to stall, whereas others may be delayed for a number of cycles with no effect on application-level performance because their latencies are hidden by other outstanding packets' latencies. In this paper, we define slack as a key measure that characterizes the relative importance of a packet. Specifically, the slack of a packet is the number of cycles the packet can be delayed in the network with no effect on execution time. This paper proposes new router prioritization policies that exploit the available slack of interfering packets in order to accelerate performance-critical packets and thus improve overall system performance. When two packets interfere with each other in a router, the packet with the lower slack value is prioritized. We describe mechanisms to estimate slack, prevent starvation, and combine slack-based prioritization with other recently proposed application-aware prioritization mechanisms. We evaluate slack-based prioritization policies on a 64-core CMP with an 8x8 mesh NoC using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 21.0% over the commonly-used round-robin policy. Averaged over 56 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 10.3%, while also reducing application-level unfairness by 30.8%.
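To make the arbitration idea concrete, below is a minimal Python sketch (not taken from the paper) of slack-based prioritization at a single router output port. It assumes slack can be approximated by the number of older outstanding packets from the same core whose latencies can hide this packet's latency, and that starvation is prevented by a simple batching scheme, as the abstract mentions; the Packet fields, the predecessor-count slack estimate, and the arbitrate function are illustrative assumptions, not the paper's actual mechanisms.

# A minimal sketch (assumed, not the paper's implementation) of slack-based
# arbitration at one router output port. Slack is approximated here by the
# number of older outstanding packets ("predecessors") from the same core;
# batching provides starvation freedom. All names are hypothetical.

from dataclasses import dataclass, field
from itertools import count

_seq = count()  # global injection order, used only as a deterministic tie-breaker

@dataclass
class Packet:
    core_id: int
    predecessors: int  # older outstanding packets from the same core
    batch: int         # batch number assigned at injection time
    seq: int = field(default_factory=lambda: next(_seq))

    @property
    def slack_estimate(self) -> int:
        # More predecessors -> more latency hiding -> larger estimated slack.
        return self.predecessors

def arbitrate(contending: list[Packet]) -> Packet:
    """Pick the packet to forward when several contend for one output port.

    Priority order (highest first):
      1. oldest batch (starvation freedom),
      2. smallest estimated slack (performance-critical packets first),
      3. injection order as a tie-breaker.
    """
    return min(contending, key=lambda p: (p.batch, p.slack_estimate, p.seq))

# Usage: a stalling packet (no predecessors, slack 0) wins over one whose
# latency is hidden behind three other outstanding requests.
critical = Packet(core_id=0, predecessors=0, batch=5)
hidden   = Packet(core_id=1, predecessors=3, batch=5)
assert arbitrate([hidden, critical]) is critical

The design point the abstract describes is the comparison itself: among contending packets, the one with the smaller estimated slack (and hence the larger impact on processor stall time) wins, while batching bounds how long any packet can remain deprioritized.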
