European Conference on Computer Vision

Constrained Optimization Based Low-Rank Approximation of Deep Neural Networks

Abstract

We present COBLA (Constrained Optimization Based Low-rank Approximation), a systematic method for finding an optimal low-rank approximation of a trained convolutional neural network, subject to constraints on the number of multiply-accumulate (MAC) operations and the memory footprint. COBLA optimally allocates the constrained computational resources to each layer of the approximated network. The singular value decomposition of each layer's weight is computed; a binary masking variable is then introduced to indicate whether a particular singular value and its corresponding singular vectors are used in the low-rank approximation. With this formulation, the number of MAC operations and the memory footprint are expressed as linear constraints in terms of the binary masking variables. The resulting 0-1 integer programming problem is approximately solved by sequential quadratic programming. COBLA does not introduce any hyperparameters. We empirically demonstrate that COBLA outperforms prior art using the SqueezeNet and VGG-16 architectures on the ImageNet dataset.
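The following is a minimal sketch, not the authors' implementation, of why a binary mask over singular values makes the MAC and memory budgets linear in the masking variables. A single dense layer with illustrative dimensions and an assumed per-rank cost model is used; the rank cutoff chosen here is arbitrary.

```python
# Sketch: binary masking of singular values gives linear MAC/memory costs.
# Shapes, the rank cutoff, and the cost model are illustrative assumptions.
import numpy as np

# Example dense layer weight W with output dim m and input dim n.
m, n = 256, 512
W = np.random.randn(m, n)

# SVD of the layer weight: W = U diag(s) V^T.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r_full = s.size  # full rank = min(m, n)

# Binary masking variables z[i] in {0, 1}: z[i] = 1 keeps singular value s[i]
# (and its singular vectors) in the low-rank approximation.
z = np.zeros(r_full, dtype=int)
z[:64] = 1  # keep the 64 largest singular values, as an example

# Factoring the layer as (U_r diag(s_r)) @ V_r^T costs r*(m + n) MACs per
# input vector and stores r*(m + n) parameters, so both budgets are linear
# in z with coefficient (m + n) per kept singular value.
cost_per_component = m + n
mac_cost = cost_per_component * z.sum()
mem_cost = cost_per_component * z.sum()

# Reconstruction under the mask: zero out the unused singular values.
W_approx = (U * (s * z)) @ Vt
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"rank={z.sum()}, MACs/input={mac_cost}, params={mem_cost}, rel_err={rel_err:.3f}")
```

In the paper's formulation these z variables are decided jointly across all layers by an optimizer under global MAC and memory constraints, rather than by the fixed cutoff used in this sketch.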
