Annual IEEE/ACM International Symposium on Microarchitecture

PermDNN: Efficient Compressed DNN Architecture with Permuted Diagonal Matrices

Abstract

Deep neural network (DNN) has emerged as the most important and popular artificial intelligence (AI) technique. The growth of model size poses a key energy-efficiency challenge for the underlying computing platform. Thus, model compression becomes a crucial problem. However, current approaches are limited by various drawbacks. Specifically, the network sparsification approach suffers from irregularity, heuristic nature, and large indexing overhead. On the other hand, the recent structured matrix-based approach (i.e., CirCNN) is limited by relatively complex arithmetic computation (i.e., FFT), a less flexible compression ratio, and its inability to fully utilize input sparsity. To address these drawbacks, this paper proposes PermDNN, a novel approach to generate and execute hardware-friendly structured sparse DNN models using permuted diagonal matrices. Compared with the unstructured sparsification approach, PermDNN eliminates the drawbacks of indexing overhead, heuristic compression effects, and time-consuming retraining. Compared with the circulant structure-imposing approach, PermDNN enjoys the benefits of a higher reduction in computational complexity, a flexible compression ratio, simple arithmetic computation, and full utilization of input sparsity. We propose the PermDNN architecture, a multi-processing-element (PE), fully-connected (FC) layer-targeted computing engine. The entire architecture is highly scalable and flexible, and hence it can support the needs of different applications with different model configurations. We implement a 32-PE design in 28nm CMOS technology. Compared with EIE, PermDNN achieves 3.3x~4.8x higher throughput, 5.9x~8.5x better area efficiency, and 2.8x~4.0x better energy efficiency on different workloads. Compared with CirCNN, PermDNN achieves 11.51x higher throughput and 3.89x better energy efficiency.
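To make the compression scheme concrete, below is a minimal NumPy sketch of a matrix-vector product with a weight matrix built from p x p permuted diagonal blocks, the structure the abstract describes. This is an illustration rather than the paper's implementation: the function name permdiag_matvec, the array layout, and the example values are assumptions. It shows why the format needs only one permutation offset per block instead of one index per nonzero element, which is where the savings in indexing overhead relative to unstructured sparsity come from.

```python
import numpy as np

def permdiag_matvec(weights, offsets, x, p):
    """y = W @ x where W consists of p x p permuted diagonal blocks.

    Block (a, b) of W has nonzeros only at positions (i, (i + k) % p),
    where k = offsets[a, b]. Each block is therefore described by p
    weight values plus one integer offset, a p-fold compression with
    no per-element indices as in unstructured sparsity.

    weights: (m, n, p) float array, diagonal values of each block
    offsets: (m, n) int array, permutation offset k of each block
    x:       (n * p,) input vector
    returns: (m * p,) output vector
    """
    m, n, _ = weights.shape
    y = np.zeros(m * p)
    rows = np.arange(p)
    for a in range(m):
        for b in range(n):
            k = offsets[a, b]
            # Row i of block (a, b) multiplies its single stored weight
            # with element (i + k) mod p of input segment b.
            y[a*p:(a+1)*p] += weights[a, b] * x[b*p + (rows + k) % p]
    return y

# Tiny usage example: a 4x4 layer compressed with p = 2,
# storing 8 weights and 4 offsets instead of 16 dense values.
p = 2
weights = np.array([[[1.0, 2.0], [3.0, 4.0]],
                    [[5.0, 6.0], [7.0, 8.0]]])   # shape (m=2, n=2, p=2)
offsets = np.array([[0, 1], [1, 0]])
x = np.arange(4, dtype=float)
print(permdiag_matvec(weights, offsets, x, p))   # -> [ 9. 10. 26. 24.]
```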
