Concurrency and Computation: Practice and Experience

Deflate-inflate: Exploiting hashing trick for bringing inference to the edge with scalable convolutional neural networks


Abstract

With each passing year, the need to bring deep learning models to the edge grows, as does the gap in resource demand between these models and Internet of Things edge devices. This article bridges that gap with an old trick from the book: deflate and inflate. The proposed system deflates the model using the hashing trick and inflates it at runtime with either a uniform hash function or a neighborhood function. Experimental results show that the neighborhood function approximates the original parameter space better than the uniform hash function. Compared with existing techniques for distributing the VGG-16 model over a Fog-Edge platform, our deployment strategy achieves a 1.7x-7.5x speedup with only 1-4 devices, owing to reduced memory access and better resource utilization.
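The deflate/inflate idea the abstract describes follows the general hashing-trick approach to weight sharing (as in HashedNets): a layer's full weight matrix is never stored; each entry is mapped by a hash function into a small shared parameter vector and reconstructed on demand. A minimal sketch of that general scheme, using a seeded PRNG as a stand-in for the uniform hash function (function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def inflate(shape, bucket, seed=0):
    """Reconstruct a virtual weight matrix of `shape` from the small
    shared parameter vector `bucket` via a uniform hash mapping."""
    K = bucket.size
    rng = np.random.default_rng(seed)            # stand-in for h(i, j)
    idx = rng.integers(0, K, size=shape)         # h(i, j) -> bucket index
    sign = rng.choice([-1.0, 1.0], size=shape)   # sign hash reduces collision bias
    return sign * bucket[idx]

# Deflate: store/train only K real parameters instead of rows * cols.
K = 64
bucket = np.random.default_rng(1).standard_normal(K).astype(np.float32)

# Inflate at runtime; the edge device never holds the dense model on disk.
W = inflate((256, 128), bucket)
```

Because the hash is deterministic, every device reconstructs the same weights from the same compact bucket; the neighborhood function in the article refines which bucket entries each weight draws from, which is what improves the approximation of the original parameter space.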
