
Toward Collaborative Inferencing of Deep Neural Networks on Internet-of-Things Devices

Abstract

Recent advancements in deep neural networks (DNNs) have enabled us to solve traditionally challenging problems. To deploy a service based on DNNs, since DNNs are compute intensive, consumers need to rely on compute resources in the cloud. This approach, in addition to creating a dependency on the high-quality network infrastructure and data centers, raises new privacy concerns because of the sharing of private data. These concerns and challenges limit the widespread use of DNN-based applications, so many researchers and companies are trying to optimize DNNs for fast in-the-edge execution. Executing DNNs is further pushed to the edge with the widespread use of embedded processors and ubiquitous wireless networks in Internet-of-Things (IoT) devices. However, inadequate power and computing resources of edge devices, along with the small number of local requests, limit the use of prevalent optimization techniques such as batch processing. In this article, we enable the utilization of the aggregated computing power of several IoT devices by creating a local collaborative network for a subset of DNNs, visual-based applications. In this approach, IoT devices cooperate to conduct single-batch inferencing in real time while exploiting several new model-parallelism methods, which will be introduced in this article. Our approach enhances the collaborative system by creating a balanced and distributed processing pipeline while adjusting the tasks in real time. For experiments, we deploy a system with up to 10 Raspberry Pis and execute state-of-the-art visual models, such as AlexNet, VGG16, Xception, and C3D.
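The following is a minimal, hypothetical sketch of the general idea of partitioning a single-batch DNN inference across several devices; it does not reproduce the model-parallelism methods introduced in the article. It assumes PyTorch, simulates the participating IoT devices in-process, and splits a small stand-in CNN into contiguous layer segments, one per device.

```python
# Minimal sketch (not the paper's method): split a small CNN into contiguous
# layer segments so each simulated "device" runs one stage of a single-batch
# inference. Assumes PyTorch; devices are plain in-process segments.
import torch
import torch.nn as nn


def build_model():
    # Stand-in for a visual model such as AlexNet or VGG16.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 56 * 56, 10),
    )


def partition(model, num_devices):
    """Split the layer list into num_devices contiguous segments of roughly
    equal length (a crude stand-in for balancing work across devices)."""
    layers = list(model)
    per_dev = -(-len(layers) // num_devices)  # ceiling division
    return [nn.Sequential(*layers[i:i + per_dev])
            for i in range(0, len(layers), per_dev)]


def pipelined_inference(segments, x):
    """Push one input batch through each segment in turn, as if each
    segment lived on a separate IoT device in the local network."""
    with torch.no_grad():
        for seg in segments:
            x = seg(x)  # in a real deployment this hop crosses the network
    return x


if __name__ == "__main__":
    model = build_model()
    segments = partition(model, num_devices=3)
    out = pipelined_inference(segments, torch.randn(1, 3, 224, 224))
    print(out.shape)  # torch.Size([1, 10])
```

In a real collaborative deployment, each segment would run on its own device (e.g., a Raspberry Pi) and intermediate activations would be sent over the local wireless network, with the partition boundaries adjusted at run time to keep the pipeline balanced.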
