Microprocessors and Microsystems

A NoC-based simulator for design and evaluation of deep neural networks


Abstract

The astonishing development in the field of artificial neural networks (ANNs) has brought significant advances in many application domains, such as pattern recognition, image classification, and computer vision. An ANN imitates the behavior of neurons and makes decisions or predictions by learning patterns and features from a given data set. To reach higher accuracy, neural networks are getting deeper, and consequently the computation and storage demands on hardware platforms are steadily increasing. In addition, the massive data communication among neurons makes the interconnection more complex and challenging. To overcome these challenges, ASIC-based DNN accelerators are being designed that usually incorporate customized processing elements, fixed interconnections, and large off-chip memory. As a result, DNN computation involves many memory accesses due to frequent loading and off-loading of data, which significantly increases energy consumption and latency. Moreover, the rigid architecture and interconnection among processing elements limit the efficiency of such platforms to specific applications. In recent years, Network-on-Chip-based (NoC-based) DNN design has become an emerging paradigm because the NoC interconnection helps reduce off-chip memory accesses while offering better scalability and flexibility. To evaluate NoC-based DNNs in the early design stage, we introduce a cycle-accurate NoC-based DNN simulator, called DNNoC-sim. To support the various operations found in modern DNN models, such as convolution and pooling, we first propose a DNN flattening technique that converts diverse DNN operations into MAC-like operations. In addition, we propose a DNN slicing method to evaluate large-scale DNN models on a resource-constrained NoC platform. The evaluation results show a significant reduction in off-chip memory accesses compared to state-of-the-art DNN models. We also analyze the performance and discuss the trade-offs between different design parameters.
(c) 2020 Elsevier B.V. All rights reserved.
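The abstract does not spell out how the proposed flattening technique works; a common way to express the underlying idea, used here purely as an illustrative sketch (the `im2col` unrolling below is a standard trick, not necessarily the paper's exact method), is to rewrite a 2-D convolution as a single matrix-vector product so that the whole layer reduces to MAC operations:

```python
import numpy as np

def im2col(x, k):
    """Unroll every k-by-k patch of a 2-D input into one row of a matrix."""
    H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((out_h * out_w, k * k))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

def conv2d_direct(x, w):
    """Reference sliding-window convolution (valid padding, stride 1)."""
    k = w.shape[0]
    out = np.empty((x.shape[0] - k + 1, x.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))   # toy input feature map
w = rng.standard_normal((3, 3))   # toy convolution kernel

# After unrolling, the convolution is one pass of multiply-accumulates:
flat = im2col(x, 3) @ w.ravel()
assert np.allclose(flat.reshape(4, 4), conv2d_direct(x, w))
```

Pooling can be flattened analogously (each output is a reduction over an unrolled patch row), which is what makes a uniform MAC-style processing element on a NoC plausible.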
