IEEE/ACM Symposium on Edge Computing

Poster: Scaling Up Deep Neural Network Optimization for Edge Inference

Abstract

Deep neural networks (DNNs) have been increasingly deployed on and integrated with edge devices, such as mobile phones, drones, robots, and wearables. Compared to cloud-based inference, running DNN inference directly on edge devices (a.k.a. edge inference) has major advantages, including freedom from any network connection requirement, bandwidth savings, and better protection of user privacy [1].
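As a minimal illustration of the edge-inference setting described above (not taken from the paper), inference can be executed entirely on the device with an off-the-shelf runtime such as TensorFlow Lite. The model file name and input contents below are placeholders for illustration only.

# Minimal sketch, assuming a pre-converted .tflite model is stored on the device.
# Inference runs locally; no network connection is required.
import numpy as np
import tflite_runtime.interpreter as tflite  # pip install tflite-runtime

interpreter = tflite.Interpreter(model_path="model.tflite")  # hypothetical model file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype (placeholder data).
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
print(y.shape)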
