European Conference on Computer Vision

Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-regression

Abstract

Aiming at accelerating the test time of deep convolutional neural networks (CNNs), we propose a model compression method that combines a novel dominant kernel (DK) with a new training method called knowledge pre-regression (KP). In the combined model DK²PNet, DK performs a low-rank decomposition of the convolutional kernels, while KP transfers knowledge of intermediate hidden layers from a larger teacher network to its compressed student network using a cross-entropy loss function instead of the Euclidean distance used in previous work. Experimental results on the CIFAR-10, CIFAR-100, MNIST, and SVHN benchmarks show that DK²PNet compares favorably with recent methods, approaching state-of-the-art accuracy while requiring dramatically fewer model parameters.
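The abstract names the two components but does not spell out their construction. As a rough, hypothetical sketch only, the PyTorch code below illustrates (a) a generic low-rank spatial factorization of a k×k convolution, standing in for the dominant kernel, and (b) a cross-entropy loss between softened intermediate feature maps of a teacher and a student, standing in for knowledge pre-regression. The rank, the temperature tau, and all names are assumptions for illustration, not the paper's actual formulation.

import torch.nn as nn
import torch.nn.functional as F

class LowRankConv2d(nn.Module):
    """Generic low-rank stand-in for a dominant-kernel layer: a k x k
    convolution with c_in -> c_out channels is replaced by a k x 1 then
    1 x k pair through a rank-r bottleneck, cutting parameters from
    c_out*c_in*k*k to k*rank*(c_in + c_out)."""
    def __init__(self, c_in, c_out, k, rank):
        super().__init__()
        self.vertical = nn.Conv2d(c_in, rank, (k, 1), padding=(k // 2, 0))
        self.horizontal = nn.Conv2d(rank, c_out, (1, k), padding=(0, k // 2))

    def forward(self, x):
        return self.horizontal(self.vertical(x))

def kp_loss(student_feat, teacher_feat, tau=2.0):
    """Cross-entropy between softened spatial distributions of matching
    intermediate feature maps (replacing the Euclidean hint loss, as the
    abstract describes). Feature shapes are assumed to match; tau is a
    hypothetical softening temperature."""
    b, c = student_feat.shape[:2]
    s = F.log_softmax(student_feat.reshape(b, c, -1) / tau, dim=-1)
    t = F.softmax(teacher_feat.reshape(b, c, -1) / tau, dim=-1)
    return -(t * s).sum(dim=-1).mean()

In a compression pipeline of this kind, LowRankConv2d layers would replace the full convolutions of the student network, and kp_loss would be added to the usual classification loss while training against the teacher's intermediate activations.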