
PRUNING- AND DISTILLATION-BASED CONVOLUTIONAL NEURAL NETWORK COMPRESSION METHOD


Abstract

A pruning- and distillation-based convolutional neural network compression method (400), comprising: pruning an original convolutional neural network model to obtain a pruned model (S401); fine-tuning the parameters of the pruned model (S403); using the original convolutional neural network model as the teacher network of a distillation algorithm, using the pruned model with the fine-tuned parameters as the student network of the distillation algorithm, and having the teacher network guide the training of the student network according to the distillation algorithm (S405); and using the student network trained according to the distillation algorithm as the compressed convolutional neural network model (S407). By using the two conventional network compression methods in combination, the method compresses a convolutional neural network model more effectively.
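The abstract describes a three-stage pipeline: prune (S401), fine-tune (S403), then distill with the original model as the teacher (S405). Below is a minimal PyTorch sketch of how such a pipeline might be wired together; the patent does not specify a framework, architecture, or hyperparameters, so the toy CNN, the 30% pruning ratio, the temperature `T`, and the loss weight `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def build_cnn():
    # Hypothetical small CNN standing in for the patent's unspecified model.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
    )

teacher = build_cnn()                       # original model; assumed already trained
teacher.eval()
student = build_cnn()                       # copy that will be pruned and distilled
student.load_state_dict(teacher.state_dict())

# S401: prune the copy (here, 30% smallest-magnitude weights per conv layer).
for module in student.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)

optimizer = torch.optim.SGD(student.parameters(), lr=1e-3)
T, alpha = 4.0, 0.7                         # distillation temperature / loss weight (assumed)

def train_step(x, y, distill):
    """One optimization step: plain fine-tuning (S403) or teacher-guided distillation (S405)."""
    optimizer.zero_grad()
    s_logits = student(x)
    loss = F.cross_entropy(s_logits, y)     # hard-label loss
    if distill:
        with torch.no_grad():
            t_logits = teacher(x)           # teacher provides soft targets
        kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                      F.softmax(t_logits / T, dim=1),
                      reduction="batchmean") * T * T
        loss = alpha * kd + (1 - alpha) * loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch to show usage; real training iterates over a dataloader.
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
train_step(x, y, distill=False)             # S403: fine-tune the pruned model
train_step(x, y, distill=True)              # S405: distill from the teacher
# S407: `student` (masks made permanent with prune.remove) is the compressed model.
```

Scaling the KL term by T² follows the usual convention from Hinton-style distillation; it keeps the soft-target gradients comparable in magnitude to the hard-label loss as the temperature grows.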
