IEEE Winter Conference on Applications of Computer Vision

Multi-Layer Pruning Framework for Compressing Single Shot MultiBox Detector



Abstract

We propose a framework for compressing the state-of-the-art Single Shot MultiBox Detector (SSD). The framework addresses compression in the following stages: Sparsity Induction, Filter Selection, and Filter Pruning. In the Sparsity Induction stage, the object detector model is sparsified via an improved global threshold. In the Filter Selection & Pruning stage, we select and remove filters using sparsity statistics of filter weights in two consecutive convolutional layers. This results in a model smaller than most existing compact architectures. We evaluate the performance of our framework on multiple datasets and compare against multiple methods. Experimental results show that our method achieves state-of-the-art compression of 6.7X and 4.9X on the PASCAL VOC dataset with models SSD300 and SSD512, respectively. We further show that the method produces a maximum compression of 26X with SSD512 on the German Traffic Sign Detection Benchmark (GTSDB). Additionally, we empirically show our method's adaptability to the classification architecture VGG16 on the CIFAR and German Traffic Sign Recognition Benchmark (GTSRB) datasets, achieving compression rates of 125X and 200X with FLOP reductions of 90.50% and 96.6%, respectively, with no loss of accuracy. In addition, our method does not require any special libraries or hardware support for the resulting compressed models.
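To make the two-stage procedure described in the abstract concrete, below is a minimal sketch in PyTorch of how sparsity induction with a single global threshold and sparsity-based filter selection over two consecutive convolutional layers could look. The threshold value, the keep_ratio parameter, and the exact density statistic are illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch of the two pruning stages; the specific threshold
# rule and the sparsity score used for filter selection are assumptions,
# not the exact method from the paper.
import torch
import torch.nn as nn


def sparsity_induction(model: nn.Module, global_threshold: float) -> None:
    """Stage 1: zero out convolution weights whose magnitude falls
    below a single global threshold shared by all layers."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                mask = (module.weight.abs() >= global_threshold).to(module.weight.dtype)
                module.weight.mul_(mask)


def select_filters_to_keep(conv_a: nn.Conv2d, conv_b: nn.Conv2d,
                           keep_ratio: float = 0.5) -> torch.Tensor:
    """Stage 2: score each output filter of conv_a by how dense it is,
    both as an output filter of conv_a and as the matching input channel
    of the next layer conv_b; the sparsest filters are dropped."""
    # Fraction of non-zero weights per output filter of conv_a.
    out_density = (conv_a.weight != 0).float().mean(dim=(1, 2, 3))
    # Fraction of non-zero weights per corresponding input channel of conv_b.
    in_density = (conv_b.weight != 0).float().mean(dim=(0, 2, 3))
    score = out_density + in_density           # denser filters score higher
    n_keep = max(1, int(keep_ratio * score.numel()))
    keep_idx = torch.argsort(score, descending=True)[:n_keep]
    return torch.sort(keep_idx).values         # indices of filters to keep
```

The indices returned by select_filters_to_keep would then be used to slice away output filters of the first layer and the matching input channels of the second, which is what physically shrinks the model; since the result is an ordinary smaller dense network, no special libraries or hardware support are needed at inference time.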
