International Symposium on Progress in VLSI Design and Test

Network-on-chip: Current issues and challenges


Abstract

Due to shrinking transistor sizes, the density of ICs roughly doubles every two years, as predicted by Moore's law. These advances in VLSI integration density towards the nanoscale era have brought a paradigm shift from computation-centric designs to communication-centric designs incorporating a very large number of simple cores. Traditional interconnect schemes such as point-to-point links, buses, and crossbars are adequate for connecting a small number of cores. Point-to-point schemes achieve fast and efficient communication, but their wire density is a barrier to adopting them in many-core architectures. Buses are simpler to design, yet they suffer from scalability and arbitration issues as well as a bandwidth bottleneck as the number of cores increases. Similarly, the area and power requirements of a crossbar limit its applicability. Hence, many-core architectures such as Chip Multiprocessors (CMPs) and Multiprocessor Systems-on-Chip (MPSoCs) need an efficient communication infrastructure, since traditional solutions fail to handle their communication challenges. Network-on-Chip (NoC), a scalable and modular design approach, has been proposed as a promising alternative to traditional bus-based architectures for inter-core communication. NoC has also been adopted in industry (Tilera's TILE-Gx72 and TILE64 [1] processors and Intel's terascale processor [2]). NoCs are an attractive alternative to traditional shared buses or dedicated wires for several reasons. First, NoCs provide a scalable on-chip communication paradigm, offering scalable bandwidth at low power and area overheads. Second, NoCs use wiring very efficiently, multiplexing many traffic flows on the same channels while providing quality of service and higher bandwidth. Finally, on-chip networks with regular topologies have short interconnects that can be optimized and reused as regular iterative blocks, which eases verification. For on-chip networks, the two-dimensional (2D) mesh is the preferred topology due to its regularity, scalability, and straightforward physical layout on an actual chip. This tutorial focuses on NoC routing algorithms, their implementations, and related issues. The main network parameters affected by the routing algorithm include fault tolerance, quality of service, communication performance (throughput and latency), and power consumption. The main objectives of this tutorial are:

• Introduction to NoC [3]: We briefly discuss the design parameters of a NoC, such as topology, switching, flow control, and routing, and compare NoC with existing mechanisms.
• Routing Taxonomy [4]: We present a classification of the various routing algorithms.
• Deadlock and Livelock Freedom in Routing: A current issue in NoC routing is the use of an acyclic channel dependency graph (ACDG) for deadlock freedom, which prohibits certain routing turns and thereby reduces the degree of adaptiveness. We discuss various turn models [5] and how they can be improved to increase adaptivity while maintaining deadlock freedom (a minimal west-first sketch follows this list).
• Routing Implementations for NoC: Denser integration makes the chip more prone to failures (deep sub-micron effects, manufacturing defects, etc.). These failures can disrupt the regularity of a 2D mesh, turning it into an irregular topology for which regular-mesh solutions may no longer work. We discuss state-of-the-art routing implementation techniques [6]-[8] for irregular 2D meshes under different failures (see the table-based routing sketch below).
• Learning Methods to Handle Congestion in Routing: Reinforcement Learning (RL) is a machine learning paradigm that has been widely applied in many areas. Q-learning has been used in NoC to learn the network traffic and make routing decisions (see the Q-routing sketch below).
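To make the turn-model discussion concrete, the sketch below (an illustrative Python assumption, not the tutorial's own code) shows the candidate output ports a west-first router would offer on a 2D mesh. Because the two turns into the west direction (N→W and S→W) are never produced, the channel dependency graph stays acyclic and the routing is deadlock-free, yet the packet can still choose adaptively among east, north, and south once its westward hops are finished.

```python
def west_first_outputs(cur, dst):
    """Candidate output ports under the west-first turn model.

    `cur` and `dst` are (x, y) router coordinates on a 2D mesh (+y is north);
    the function name and port labels are illustrative assumptions.
    """
    dx, dy = dst[0] - cur[0], dst[1] - cur[1]
    if dx == 0 and dy == 0:
        return ["LOCAL"]        # arrived: eject to the attached core
    if dx < 0:
        return ["W"]            # destination lies to the west: go west first
    ports = []
    if dx > 0:
        ports.append("E")       # eastward progress is always allowed
    if dy > 0:
        ports.append("N")
    if dy < 0:
        ports.append("S")
    return ports                # router picks one, e.g. the least congested
```

A router would typically break ties among the returned ports using local congestion information, such as free buffer slots at the neighbouring input ports.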
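For the irregular meshes that failures leave behind, one common implementation style is table-based routing: routes around the faulty links are computed offline and each router stores only a next-hop entry per destination. The sketch below is a generic illustration of that idea (a plain BFS over the surviving links), assumed here for clarity rather than taken from [6]-[8].

```python
from collections import deque

def build_routing_tables(nodes, links):
    """Build per-router routing tables for an irregular mesh.

    nodes: iterable of router ids; links: undirected (a, b) pairs that
    survived the failures.  Returns table[router][dest] = next-hop router.
    Purely an illustrative sketch of offline table-based routing.
    """
    adjacency = {n: [] for n in nodes}
    for a, b in links:
        adjacency[a].append(b)
        adjacency[b].append(a)

    table = {n: {} for n in nodes}
    for dest in nodes:
        # BFS outward from the destination, so every reachable router learns
        # the neighbour lying on a shortest surviving path towards `dest`.
        parent = {dest: None}
        frontier = deque([dest])
        while frontier:
            cur = frontier.popleft()
            for nxt in adjacency[cur]:
                if nxt not in parent:
                    parent[nxt] = cur
                    table[nxt][dest] = cur   # first hop back toward dest
                    frontier.append(nxt)
    return table
```

In a real NoC the table entries would additionally have to respect a deadlock-avoidance rule (for example a turn model or a virtual-channel assignment), which is the kind of constraint such implementation techniques must handle.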

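Finally, the congestion-handling idea can be sketched as Q-routing in the spirit of Boyan and Littman: each router keeps a Q-value per (destination, output port) estimating the remaining delivery latency, forwards along the port with the lowest estimate (with occasional exploration), and refines that estimate from the observed hop delay plus the downstream neighbour's best estimate. The class and parameter names below are illustrative assumptions.

```python
import random
from collections import defaultdict

class QRouter:
    """Minimal Q-routing sketch for congestion-aware NoC routing."""

    def __init__(self, ports, alpha=0.5, epsilon=0.05):
        self.ports = list(ports)      # usable output ports of this router
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration probability
        # q[dest][port] ~ estimated remaining latency to `dest` via `port`
        self.q = defaultdict(lambda: {p: 0.0 for p in self.ports})

    def select_port(self, dest):
        if random.random() < self.epsilon:
            return random.choice(self.ports)        # explore occasionally
        return min(self.ports, key=lambda p: self.q[dest][p])

    def update(self, dest, port, hop_delay, neighbour_best):
        """hop_delay: queueing + transmission delay observed on this hop;
        neighbour_best: min over the neighbour's ports of its own Q[dest]."""
        target = hop_delay + neighbour_best
        self.q[dest][port] += self.alpha * (target - self.q[dest][port])
```

A packet destined for d calls select_port(d) at each router; when the next router accepts the flit, it reports back its own best estimate for d, which drives update(). Under congestion the observed hop_delay grows, so traffic is gradually steered onto less loaded paths.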