
Tensorizing Neural Networks


Abstract

Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use these models on low-end devices and limiting further increases in model size. In this paper we convert the dense weight matrices of the fully-connected layers to the Tensor Train format, such that the number of parameters is reduced by a huge factor while the expressive power of the layer is preserved. In particular, for the Very Deep VGG networks we report a compression factor of the dense weight matrix of a fully-connected layer of up to 200,000 times, leading to a compression factor of the whole network of up to 7 times.
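To make the parameter saving concrete, below is a minimal NumPy sketch, not the authors' code: the mode sizes, TT-ranks, and function names are illustrative assumptions. It builds a weight matrix in the Tensor Train (TT) format, where a matrix W with M = m_1...m_d rows and N = n_1...n_d columns is stored as d small cores G_k of shape (r_{k-1}, m_k, n_k, r_k), and compares the TT parameter count with the dense equivalent.

    import numpy as np

    def random_tt_cores(row_modes, col_modes, ranks, seed=0):
        # TT-cores G_k of shape (r_{k-1}, m_k, n_k, r_k), with r_0 = r_d = 1.
        # Shapes and ranks here are illustrative assumptions, not the paper's setup.
        rng = np.random.default_rng(seed)
        return [rng.standard_normal((ranks[k], row_modes[k], col_modes[k], ranks[k + 1]))
                for k in range(len(row_modes))]

    def tt_to_dense(cores):
        # Contract the cores back into the full M x N weight matrix.
        res = np.ones((1, 1, 1))  # (rows so far, cols so far, current TT-rank)
        for g in cores:
            # Sum over the shared rank index, then fold the new row/column
            # modes into the accumulated dimensions.
            res = np.einsum('abr,rmns->ambns', res, g)
            a, m, b, n, s = res.shape
            res = res.reshape(a * m, b * n, s)
        return res[:, :, 0]

    # Example: a 1024 x 1024 fully-connected layer factorized as 4^5 x 4^5
    # with all intermediate TT-ranks equal to 8.
    row_modes = [4, 4, 4, 4, 4]
    col_modes = [4, 4, 4, 4, 4]
    ranks = [1, 8, 8, 8, 8, 1]

    cores = random_tt_cores(row_modes, col_modes, ranks)
    W = tt_to_dense(cores)
    print(W.shape)                             # (1024, 1024)
    print(sum(g.size for g in cores), W.size)  # 3328 vs 1048576 parameters

Note that in the approach the abstract describes, the cores themselves are the trainable parameters and the dense matrix is never materialized during training or inference; reconstructing W here only serves to show the equivalence and the size of the compression.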

