
Very low precision floating point representation for deep learning acceleration


Abstract

A specialized circuit is configured for floating point computations using numbers represented by a very low precision format (VLP format). The VLP format uses fewer than sixteen bits, apportioned into a sign bit, exponent bits (e), and mantissa bits (p). The configured specialized circuit is operated to store an approximation of a numeric value in the VLP format, where the approximation is represented as a function of a multiple of a fraction, the fraction being the inverse of the number of discrete values that can be represented using only the mantissa bits.
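
To illustrate the quantization the abstract describes, the following sketch (not from the patent) rounds a real number into a hypothetical 8-bit VLP layout with 1 sign bit, e = 5 exponent bits, and p = 2 mantissa bits. The bit split, the exponent bias, and the function name are assumptions chosen for this example, not details taken from the patent.

```python
import math

# Hypothetical 8-bit VLP layout assumed for this sketch:
# 1 sign bit, E_BITS exponent bits, P_BITS mantissa bits.
E_BITS = 5
P_BITS = 2
BIAS = (1 << (E_BITS - 1)) - 1      # IEEE-style exponent bias (15 here)
FRACTION = 1.0 / (1 << P_BITS)      # 1/2^p: inverse of the 2^p discrete mantissa values

def vlp_round(x: float) -> float:
    """Approximate x as sign * 2^exp * (1 + k * FRACTION), k in 0 .. 2^p - 1."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # Pick the exponent so the significand lands in [1, 2), then clamp
    # it to the range representable with E_BITS exponent bits.
    exp = math.floor(math.log2(mag))
    exp = max(min(exp, BIAS), 1 - BIAS)
    significand = mag / (2.0 ** exp)
    # Round the significand to the nearest multiple of FRACTION.
    # k is clamped; a full implementation would carry into the exponent
    # when the rounded significand reaches 2.0.
    k = round((significand - 1.0) / FRACTION)
    k = max(0, min(k, (1 << P_BITS) - 1))
    return sign * (2.0 ** exp) * (1.0 + k * FRACTION)

print(vlp_round(0.30))   # -> 0.3125 = 2^-2 * (1 + 1/4)
print(vlp_round(-6.7))   # -> -7.0  = -(2^2 * (1 + 3/4))
```

With p = 2 mantissa bits there are 2^p = 4 discrete significand steps, so every stored value is a power of two times a multiple of the fraction 1/4, which matches the "multiple of a fraction" phrasing in the abstract.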
