European Conference on Computer Vision
Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-regression

Abstract

Aiming at accelerating the test time of deep convolutional neural networks (CNNs), we propose a model compression method that combines a novel dominant kernel (DK) with a new training method called knowledge pre-regression (KP). In the combined model DK~2PNet, DK performs a low-rank decomposition of the convolutional kernels, while KP transfers knowledge of intermediate hidden layers from a larger teacher network to its compressed student network using a cross-entropy loss function instead of the Euclidean distance used in previous work. Experimental results on the CIFAR-10, CIFAR-100, MNIST, and SVHN benchmarks show that our DK~2PNet method achieves accuracy close to the state of the art while requiring dramatically fewer model parameters.
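The two ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the truncated-SVD factorization stands in for the dominant-kernel decomposition, and the softmax-normalized hidden activations used as soft targets stand in for knowledge pre-regression; all function names, shapes, and the rank choice are assumptions for illustration.

```python
import numpy as np

def low_rank_decompose(W, rank):
    """Truncated-SVD factorization of a flattened kernel matrix.

    Illustrative stand-in for the dominant-kernel idea: replace a dense
    kernel matrix W (filters x flattened spatial weights) with two thin
    factors A @ B, cutting the parameter count.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (filters, rank)
    B = Vt[:rank]                # (rank, flattened kernel)
    return A, B

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def euclidean_hint_loss(teacher_feat, student_feat):
    # The earlier hint-based approach: plain Euclidean (L2) distance
    # between intermediate feature activations.
    return 0.5 * np.sum((teacher_feat - student_feat) ** 2)

def kp_cross_entropy_loss(teacher_feat, student_feat):
    # KP-style transfer (sketch): treat the teacher's softmax-normalized
    # hidden activations as soft targets and penalize the student with
    # cross entropy instead of Euclidean distance.
    p = softmax(teacher_feat)
    q = softmax(student_feat)
    return -np.sum(p * np.log(q + 1e-12))

rng = np.random.default_rng(0)

# Toy kernel matrix: 64 filters with flattened 3x3 spatial weights.
W = rng.normal(size=(64, 9))
A, B = low_rank_decompose(W, rank=2)
print(W.size, A.size + B.size)   # 576 parameters vs 146

# Toy intermediate activations for one example.
t = rng.normal(size=(1, 8))   # teacher hidden layer
s = rng.normal(size=(1, 8))   # compressed student hidden layer
print(euclidean_hint_loss(t, s), kp_cross_entropy_loss(t, s))
```

The parameter saving comes from replacing the 64x9 kernel matrix (576 weights) with a 64x2 and a 2x9 factor (146 weights); the cross-entropy variant compares normalized activation distributions rather than raw feature magnitudes.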