European Conference on Computer Vision

Stable Low-Rank Tensor Decomposition for Compression of Convolutional Neural Network


Abstract

Most state-of-the-art deep neural networks are overparameterized and exhibit a high computational cost. A straightforward approach to this problem is to replace convolutional kernels with their low-rank tensor approximations, where the Canonical Polyadic (CP) tensor decomposition is one of the best-suited models. However, fitting the convolutional tensors by numerical optimization algorithms often encounters diverging components, i.e., extremely large rank-one tensors that cancel each other. Such degeneracy often causes non-interpretable results and numerical instability during fine-tuning of the neural network. This paper is the first study of degeneracy in the tensor decomposition of convolutional kernels. We present a novel method that stabilizes the low-rank approximation of convolutional kernels and ensures efficient compression while preserving the high-quality performance of the neural networks. We evaluate our approach on popular CNN architectures for image classification and show that our method results in much lower accuracy degradation and provides consistent performance.
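The kernel substitution the abstract describes can be sketched concretely. The snippet below is a minimal, illustrative implementation of plain CP-based layer replacement in the spirit of earlier work on this problem, not the stabilized method this paper proposes. It assumes PyTorch and TensorLy's `parafac`, a stride-1 `Conv2d`; the helper name `cp_compress_conv` and the degeneracy threshold are illustrative choices, not from the paper.

```python
import torch.nn as nn
import tensorly as tl
from tensorly.decomposition import parafac

tl.set_backend("pytorch")

def cp_compress_conv(conv: nn.Conv2d, rank: int) -> nn.Sequential:
    """Replace a stride-1 Conv2d with a rank-`rank` CP factorization:
    1x1 conv -> depthwise kHx1 conv -> depthwise 1xkW conv -> 1x1 conv."""
    W = conv.weight.data  # shape (C_out, C_in, kH, kW)
    # CP model: W ~ sum_r out[:, r] o in[:, r] o h[:, r] o w[:, r]
    weights, (out_f, in_f, h_f, w_f) = parafac(W, rank=rank, init="random")
    out_f = out_f * weights  # absorb the scalar weights into one factor

    # Degeneracy diagnostic: the diverging components the abstract warns
    # about show up as huge per-component norms that cancel in the sum.
    comp_norms = (out_f.norm(dim=0) * in_f.norm(dim=0)
                  * h_f.norm(dim=0) * w_f.norm(dim=0))
    if comp_norms.max() > 1e3 * comp_norms.min():  # illustrative threshold
        print("warning: possibly degenerate CP solution")

    kH, kW = conv.kernel_size
    pH, pW = conv.padding

    # 1x1 conv mixing C_in input channels into `rank` intermediate channels.
    first = nn.Conv2d(conv.in_channels, rank, kernel_size=1, bias=False)
    first.weight.data = in_f.t().reshape(rank, conv.in_channels, 1, 1)

    # Separable spatial convolutions, one per intermediate channel.
    vertical = nn.Conv2d(rank, rank, kernel_size=(kH, 1), padding=(pH, 0),
                         groups=rank, bias=False)
    vertical.weight.data = h_f.t().reshape(rank, 1, kH, 1)

    horizontal = nn.Conv2d(rank, rank, kernel_size=(1, kW), padding=(0, pW),
                           groups=rank, bias=False)
    horizontal.weight.data = w_f.t().reshape(rank, 1, 1, kW)

    # 1x1 conv mapping the intermediate channels back to C_out.
    last = nn.Conv2d(rank, conv.out_channels, kernel_size=1,
                     bias=conv.bias is not None)
    last.weight.data = out_f.reshape(conv.out_channels, rank, 1, 1)
    if conv.bias is not None:
        last.bias.data = conv.bias.data

    return nn.Sequential(first, vertical, horizontal, last)
```

For a 3x3 layer with 64 input and 128 output channels, `cp_compress_conv(conv, rank=32)` replaces roughly 64·128·9 ≈ 74k weights with 32·(64 + 128 + 3 + 3) ≈ 6.3k, about a 12x reduction; the paper's contribution is keeping such a factorization numerically stable so that subsequent fine-tuning preserves accuracy.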
