Journal: Optical Memory & Neural Networks

Exponential Discretization of Weights of Neural Network Connections in Pre-Trained Neural Network. Part II: Correlation Maximization



Abstract

In this article, we develop a method of linear and exponential quantization of neural network weights. We improve it by maximizing the correlation between the initial and quantized weights, taking into account the weight density distribution in each layer. We perform the quantization after neural network training, without subsequent post-training, and compare our algorithm with plain linear and exponential quantization. The quality of the VGG-16 neural network is already satisfactory (top-5 accuracy of 76%) with 3-bit exponential quantization. The ResNet50 and Xception neural networks show top-5 accuracies of 79% and 61% at 4 bits, respectively.
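As a minimal NumPy sketch of the idea described in the abstract (not the authors' implementation), the code below quantizes a weight vector to exponentially spaced levels and grid-searches the scale factor that maximizes the correlation between the original and quantized weights. The function names, the number of candidate scales, and the scale grid are illustrative assumptions; in the paper's setting this would be applied per layer, using each layer's weight density distribution.

```python
import numpy as np

def exp_quantize(w, bits, scale):
    """Map each weight to its nearest exponentially spaced level.

    Levels are +/- scale * 2**-k for k = 0 .. 2**(bits-1) - 1,
    giving 2**bits levels in total.
    """
    k = np.arange(2 ** (bits - 1), dtype=float)
    magnitudes = scale * 2.0 ** (-k)
    levels = np.concatenate([-magnitudes, magnitudes])
    # Nearest-level assignment (brute force; fine for a sketch).
    idx = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
    return levels[idx]

def quantize_max_corr(w, bits, n_scales=50):
    """Pick the scale that maximizes corr(original, quantized).

    A simple grid search stands in for the paper's optimization;
    the grid bounds (0.1..2.0 times max|w|) are an assumption.
    """
    best_q, best_c = None, -np.inf
    for scale in np.linspace(0.1, 2.0, n_scales) * np.abs(w).max():
        q = exp_quantize(w, bits, scale)
        c = np.corrcoef(w, q)[0, 1]
        if c > best_c:
            best_c, best_q = c, q
    return best_q, best_c

# Toy demonstration on Gaussian "weights" (real use: one layer at a time).
rng = np.random.default_rng(0)
w = rng.normal(size=10_000)
q, c = quantize_max_corr(w, bits=3)
print(f"distinct levels used: {len(np.unique(q))}, correlation: {c:.3f}")
```

With 3 bits the codebook has at most 8 levels, and the correlation with the original Gaussian weights stays high, which is the quantity the method optimizes instead of, e.g., mean squared error.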
