Sensors (Basel, Switzerland)

Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic

Abstract

As real-time scenarios place increasingly tight constraints on the use of Deep Neural Networks (DNNs), there is a need to revisit how information is represented. A challenging path is to employ an encoding that allows fast processing and a hardware-friendly representation of information. Among the proposed alternatives to the IEEE 754 standard for the floating-point representation of real numbers, the recently introduced Posit format has been shown, theoretically, to be very promising in satisfying these requirements. However, in the absence of proper hardware support for this novel type, such an evaluation can only be conducted through software emulation. While waiting for Posit Processing Units (PPUs, the equivalent of the Floating Point Unit (FPU)) to become widely available, we can already exploit the Posit representation and the currently available Arithmetic-Logic Unit (ALU) to speed up DNNs by manipulating the low-level bit-string representations of Posits. As a first step, in this paper, we present new arithmetic properties of the Posit number system, with a focus on the configuration with 0 exponent bits. In particular, we propose a new class of Posit operators, called L1 operators, which consists of fast, approximated versions of existing arithmetic operations or functions (e.g., the hyperbolic tangent (TANH) and the extended linear unit (ELU)) that use only integer arithmetic. These operators have several appealing properties: (i) faster evaluation than their exact counterparts, with negligible accuracy degradation; (ii) efficient ALU emulation of a number of Posit operations; and (iii) the possibility of vectorizing Posit operations using existing vectorized ALU instructions (such as the Scalable Vector Extension of ARM CPUs or the Advanced Vector Extensions of Intel CPUs). As a second step, we test the proposed activation functions on Posit-based DNNs, showing how Posits from 16 down to 10 bits are a drop-in replacement for 32-bit floats, while 8-bit Posits could be an interesting alternative to 32-bit floats: their accuracy is slightly lower, but their high speed and low storage requirements are very appealing (leading to lower bandwidth demand and more cache-friendly code). Finally, we point out how small Posits (i.e., up to 14 bits long) remain very interesting until PPUs become widespread, since their operations can be tabulated very efficiently (see details in the text).
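To give a flavor of what an L1 operator looks like, the minimal sketch below implements Gustafson's well-known fast-sigmoid bit trick for a posit<8,0>, i.e., the es = 0 configuration the paper focuses on; the paper's fast TANH and ELU operators build on manipulations of this kind (e.g., via the identity tanh(x) = 2 * sigmoid(2x) - 1). The function name fast_sigmoid_p8 and the driver are ours for illustration, not taken from the paper.

#include <stdint.h>
#include <stdio.h>

/* Sketch of an L1-style operator: Gustafson's fast sigmoid for
 * posit<8,0> (8-bit Posit, 0 exponent bits). The Posit bit string is
 * treated as a plain unsigned integer, so the whole operator is one
 * XOR and one logical shift on the ALU -- no decoding into
 * sign/regime/fraction fields. It approximates 1/(1 + exp(-x)), is
 * exact at x = 0, and only holds for the es = 0 configuration. */
static inline uint8_t fast_sigmoid_p8(uint8_t p)
{
    return (uint8_t)((p ^ 0x80u) >> 2); /* flip sign bit, shift right by 2 */
}

int main(void)
{
    /* Hand-decoded posit<8,0> bit patterns: 0x00 = 0.0, 0x40 = 1.0 */
    printf("sigmoid(0.0) -> 0x%02x\n", fast_sigmoid_p8(0x00)); /* 0x20 = 0.5  (exact)         */
    printf("sigmoid(1.0) -> 0x%02x\n", fast_sigmoid_p8(0x40)); /* 0x30 = 0.75 (true: ~0.7311) */
    return 0;
}

Because the operator reduces to an XOR and a shift on an ordinary integer register, it maps directly onto existing vectorized ALU instructions, which is what makes point (iii) above possible.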