IEEE International Conference on Acoustics, Speech and Signal Processing

Positnn: Training Deep Neural Networks with Mixed Low-Precision Posit

Abstract

Low-precision formats have proven to be an efficient way to reduce not only the memory footprint but also the hardware resources and power consumption of deep learning computations. Under this premise, the posit numerical format appears to be a highly viable substitute for IEEE floating-point, but its application to neural network training still requires further research. Some preliminary results have shown that 8-bit (and even smaller) posits may be used for inference and 16-bit posits for training, while maintaining the model accuracy. The presented research aims to evaluate the feasibility of training deep convolutional neural networks using posits. For this purpose, a software framework was developed to use simulated posits and quires in end-to-end training and inference. The implementation allows using any bit size, configuration, and even mixed precision, suited to the different precision requirements of the various stages. The obtained results suggest that 8-bit posits can substitute for 32-bit floats during training with no negative impact on the resulting loss and accuracy.
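
The abstract refers to simulating posits in software during training. As a rough illustration of what such simulated-posit quantization can look like, the minimal Python sketch below decodes an (n, es) posit bit pattern, tabulates every representable value for a small bit width, and rounds tensors to the nearest entry. This is only an assumption-laden stand-in, not the paper's PositNN framework: the function names are invented for this example, rounding is plain nearest-value (the posit standard specifies round-to-nearest-even), and quire accumulation is omitted.

```python
import numpy as np

def posit_to_float(bits, n, es):
    """Decode an n-bit posit with es exponent bits, given as an unsigned integer."""
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")              # NaR (not a real)
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0:
        bits = (1 << n) - bits           # negative posits negate by two's complement
    x = bits & ((1 << (n - 1)) - 1)      # drop the sign bit
    width = n - 1
    first = (x >> (width - 1)) & 1
    i, run = width - 1, 0
    while i >= 0 and ((x >> i) & 1) == first:   # regime: run of identical bits
        run += 1
        i -= 1
    k = run - 1 if first else -run       # regime value
    rem = max(i, 0)                      # bits left after the terminating regime bit
    tail = x & ((1 << rem) - 1) if rem else 0
    if rem >= es:
        exp = (tail >> (rem - es)) if es else 0
        frac_bits = rem - es
        frac = tail & ((1 << frac_bits) - 1) if frac_bits else 0
    else:
        exp = tail << (es - rem)
        frac_bits, frac = 0, 0
    scale = 2.0 ** (k * (1 << es) + exp)
    mantissa = 1.0 + (frac / (1 << frac_bits) if frac_bits else 0.0)
    return sign * scale * mantissa

def posit_table(n, es):
    """All finite values representable by an (n, es) posit, sorted ascending."""
    vals = [posit_to_float(b, n, es) for b in range(1 << n)]
    return np.array(sorted(v for v in vals if not np.isnan(v)))

def quantize_to_posit(x, table):
    """Round each element of x to the nearest representable posit value."""
    idx = np.clip(np.searchsorted(table, x), 1, len(table) - 1)
    lo, hi = table[idx - 1], table[idx]
    return np.where(np.abs(x - lo) <= np.abs(hi - x), lo, hi)

if __name__ == "__main__":
    table8 = posit_table(n=8, es=0)                  # all 8-bit posit values
    w = np.random.randn(4).astype(np.float32)
    print(w, "->", quantize_to_posit(w, table8))
```

In a training loop, a quantization step of this kind would typically be applied to weights, activations, and gradients, possibly with different (n, es) configurations per stage; that per-stage choice is the kind of mixed precision the abstract describes.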
