IEEE Transactions on Computers

Evaluations on Deep Neural Networks Training Using Posit Number System



Abstract

The training of Deep Neural Networks (DNNs) imposes enormous memory requirements and computational complexity, which makes it challenging to train DNN models on resource-constrained devices. Training DNNs with a reduced-precision data representation is crucial to mitigating this problem. In this article, we conduct a thorough investigation of training DNNs with low-bit posit numbers, a Type-III universal number (Unum). Through a comprehensive analysis of quantization with various data formats, we demonstrate that the posit format has great potential for use in DNN training. Moreover, we propose a DNN training framework using 8-bit posits together with a novel tensor-wise scaling scheme. Experiments show performance matching the state-of-the-art (SOTA) across multiple datasets (MNIST, CIFAR-10, ImageNet, and Penn Treebank) and model architectures (LeNet-5, AlexNet, ResNet, MobileNet-V2, and LSTM). We further design an energy-efficient hardware prototype for our framework. Compared to the standard floating-point counterpart, our design reduces area, power, and memory capacity by 68, 51, and 75 percent, respectively.
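
As background on the posit (Type-III Unum) format referenced above, the sketch below decodes an n-bit posit bit pattern (sign, regime, exponent, and fraction fields) into a float. The function name decode_posit and the default es value are illustrative assumptions, not taken from the article; the article's 8-bit posit configuration may use a different number of exponent bits.

def decode_posit(x: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float."""
    x &= (1 << n) - 1
    if x == 0:
        return 0.0
    if x == 1 << (n - 1):                  # the pattern 100...0 encodes NaR ("not a real")
        return float("nan")
    sign = -1.0 if x >> (n - 1) else 1.0
    if sign < 0:                           # negative posits are stored in two's complement
        x = (-x) & ((1 << n) - 1)
    bits = format(x, f"0{n}b")[1:]         # drop the sign bit
    r = bits[0]                            # leading regime bit
    run = len(bits) - len(bits.lstrip(r))  # length of the regime run
    k = run - 1 if r == "1" else -run      # regime contributes useed**k, useed = 2**(2**es)
    rest = bits[run + 1:]                  # skip the regime terminator bit
    exp = int(rest[:es].ljust(es, "0") or "0", 2)
    frac_bits = rest[es:]
    frac = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    return sign * 2.0 ** (k * (1 << es) + exp) * (1.0 + frac)

# Example: the pattern 01000000 decodes to 1.0 for any n and es.
# print(decode_posit(0b01000000, n=8, es=1))   # -> 1.0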
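The abstract does not spell out the proposed tensor-wise scaling scheme, so the following is only a generic illustration of per-tensor scaling before 8-bit quantization, assuming a power-of-two scale and a hypothetical element-wise quantizer quantize_to_posit8; it is not the paper's method.

import numpy as np

def scale_and_quantize(tensor: np.ndarray, quantize_to_posit8):
    """Per-tensor scaling before low-bit quantization (generic illustration).
    A power-of-two scale moves the tensor's magnitude toward 1.0, where an
    8-bit posit carries the most fraction bits; quantize_to_posit8 is a
    hypothetical element-wise quantizer standing in for the real one."""
    max_abs = float(np.max(np.abs(tensor)))
    if max_abs == 0.0:
        return tensor.copy(), 1.0
    scale = 2.0 ** np.floor(np.log2(max_abs))  # power of two, so rescaling stays exact
    return quantize_to_posit8(tensor / scale), scale

def dequantize(quantized: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original tensor from its quantized form."""
    return quantized * scale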


