Neural Networks: The Official Journal of the International Neural Network Society

Quantization Friendly MobileNet (QF-MobileNet) Architecture for Vision Based Applications on Embedded Platforms


Abstract

Deep Neural Networks (DNNs) have become popular for a wide range of image and computer vision applications due to their well-established performance. DNN algorithms perform powerful multilevel feature extraction, which results in a large number of parameters and a substantial memory footprint. However, memory bandwidth requirements, memory footprint, and the associated power consumption must be addressed before DNN models can be deployed on embedded platforms for real-time vision-based applications. We present a DNN model optimized for both memory and accuracy for vision-based applications on embedded platforms. In this paper we propose the Quantization Friendly MobileNet (QF-MobileNet) architecture, which is optimized for inference accuracy and reduced resource utilization. The optimization is obtained by addressing the redundancy and quantization loss of the existing baseline MobileNet architectures. We verify and validate the performance of the QF-MobileNet architecture on the image classification task using the ImageNet dataset, testing the proposed model for inference accuracy and resource utilization against the baseline MobileNet architecture. The proposed QF-MobileNetV2 float model attained an inference accuracy of 73.36%, and its quantized model 69.51%; the MobileNetV3 float model attained 68.75%, and its quantized model 67.5%. The proposed QF-MobileNetV2 and QF-MobileNetV3 models save 33% in time complexity against the baseline models. QF-MobileNet also showed improved resource utilization, with 32% fewer tunable parameters, 30% fewer MAC operations per image, and approximately 5% lower inference quantization loss than the baseline models. The model is ported to an Android application using the TensorFlow API; the application performs inference natively on devices such as smartphones, tablets, and other handheld devices. Future work focuses on introducing channel-wise and layer-wise quantization schemes to the proposed model, and we intend to explore quantization-aware training of DNN algorithms to achieve optimized resource utilization and inference accuracy. (C) 2020 Elsevier Ltd. All rights reserved.
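The paper ports both the float and the quantized models to Android through the TensorFlow API. As a minimal sketch of that general workflow (not the authors' code: the QF-MobileNet architecture itself is not specified on this page, so the stock Keras MobileNetV2 stands in for it, and random arrays stand in for a real calibration set), post-training quantization with the TensorFlow Lite converter looks roughly like this:

```python
# Sketch of TFLite post-training quantization, assuming the stock Keras
# MobileNetV2 as a stand-in for the paper's QF-MobileNet models.
import numpy as np
import tensorflow as tf

# Baseline float model (224x224x3 ImageNet inputs).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Float TFLite model: the accuracy reference point before quantization.
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite = float_converter.convert()

def representative_dataset():
    # A representative dataset lets the converter calibrate activation
    # ranges; random data is only a placeholder for calibration images.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

# Quantized TFLite model via the default post-training optimization.
quant_converter = tf.lite.TFLiteConverter.from_keras_model(model)
quant_converter.optimizations = [tf.lite.Optimize.DEFAULT]
quant_converter.representative_dataset = representative_dataset
quant_tflite = quant_converter.convert()

# The resulting .tflite files can be bundled into an Android app and run
# with the TFLite Interpreter on smartphones, tablets and handheld devices.
with open("mobilenet_v2_float.tflite", "wb") as f:
    f.write(float_tflite)
with open("mobilenet_v2_quant.tflite", "wb") as f:
    f.write(quant_tflite)
```

For the quantization-aware training mentioned as future work, the tensorflow-model-optimization toolkit's tfmot.quantization.keras.quantize_model would be a natural starting point under the same assumptions: it inserts fake-quantization nodes into a Keras model so the model can be fine-tuned before conversion.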
