Retraining Quantized Neural Network Models with Unlabeled Data

Abstract

Running neural network models on edge devices is attracting much attention from neural network researchers, since edge computing technology is becoming more powerful than ever. However, deploying large neural network models on edge devices is challenging due to limited computing resources and storage space. Model compression techniques have therefore been studied recently to reduce model size and fit models on resource-limited edge devices. Compressing a neural network model reduces its size but also degrades its accuracy, since compression lowers the precision of the model's weights. Consequently, a retraining method is required to recover the accuracy of a compressed model. Most existing retraining methods require the original labeled training dataset, but labeling is a time-consuming process; moreover, the original labeled dataset is not always accessible because of privacy policies and license restrictions. In this paper, we propose a method for retraining a compressed neural network model with an unlabeled dataset that is different from the original labeled dataset. We compress the neural network model using quantization to decrease its size. Subsequently, the compressed model is retrained by our proposed retraining method, without using a labeled dataset, to recover its accuracy. We compared the proposed retraining method against conventional retraining. The proposed method reduced the sizes of VGG-16 and ResNet-50 by 81.10% and 52.45%, respectively, without significant accuracy loss. In addition, our proposed retraining method is clearly faster than conventional retraining.
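
The abstract does not specify the quantization scheme, so the following is a minimal illustrative sketch of per-tensor symmetric uniform quantization, one common way to reduce weight precision as described above. It is written in PyTorch (an assumption), and the function name `quantize_weights` and parameter `num_bits` are hypothetical, not taken from the paper.

```python
import torch

def quantize_weights(model: torch.nn.Module, num_bits: int = 8) -> torch.nn.Module:
    """Simulate uniform symmetric weight quantization in place.

    Illustrative sketch only: the paper's actual quantization scheme
    is not described in the abstract.
    """
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 127 for signed 8-bit levels
    with torch.no_grad():
        for param in model.parameters():
            scale = param.abs().max() / qmax  # per-tensor scale factor
            if scale == 0:  # skip all-zero tensors to avoid dividing by zero
                continue
            # Round each weight to the nearest integer level, clamp to the
            # representable range, then map back to floats to simulate the
            # reduced precision that causes the accuracy drop noted above.
            param.copy_(torch.clamp(torch.round(param / scale), -qmax, qmax) * scale)
    return model
```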
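Likewise, the abstract states that the compressed model is retrained without labels but does not detail the loss. One standard way to retrain without labels is knowledge distillation, where the original full-precision model's softened outputs serve as targets for the quantized model; the sketch below illustrates that idea under those assumptions and may differ from the authors' actual method.

```python
import torch
import torch.nn.functional as F

def retrain_without_labels(quantized, original, unlabeled_loader,
                           epochs=1, lr=1e-4, temperature=4.0):
    """Retrain `quantized` on unlabeled inputs using `original` as a teacher.

    Hypothetical sketch: distillation from the uncompressed model is a
    common label-free recovery technique, not necessarily the paper's.
    """
    original.eval()
    quantized.train()
    optimizer = torch.optim.SGD(quantized.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for inputs in unlabeled_loader:  # each batch is inputs only, no labels
            with torch.no_grad():
                teacher_logits = original(inputs)  # pseudo-targets from teacher
            student_logits = quantized(inputs)
            # Match the student's softened distribution to the teacher's;
            # no ground-truth labels appear anywhere in this loss.
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return quantized
```

Note that plain gradient updates move weights off the quantization grid; practical quantization-aware retraining typically re-quantizes weights after each step or uses a straight-through estimator, which this sketch omits for brevity.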
