Home > Foreign Journals > Internet of Things Journal, IEEE > Collaborative Computation Offloading and Resource Allocation in Multi-UAV-Assisted IoT Networks: A Deep Reinforcement Learning Approach

Collaborative Computation Offloading and Resource Allocation in Multi-UAV-Assisted IoT Networks: A Deep Reinforcement Learning Approach



Abstract

In fifth-generation (5G) wireless networks, Edge-Internet-of-Things (EIoT) devices are envisioned to generate huge amounts of data. Owing to the limited computation capacity and battery life of these devices, not all tasks can be processed locally. Mobile-edge computing (MEC) is a promising solution that enables offloading of tasks to nearby MEC servers to improve quality of service. Moreover, during emergencies in areas where the network has failed, unmanned aerial vehicles (UAVs) can be deployed to restore connectivity by acting as aerial base stations and computational nodes for the edge network. In this article, we consider a central network controller that trains on observations and broadcasts the trained data to a multi-UAV cluster network. Each UAV cluster head acts as an agent and autonomously allocates resources to EIoT devices in a decentralized fashion. We propose a model-free deep reinforcement learning (DRL)-based collaborative computation offloading and resource allocation (CCORA-DRL) scheme for an air-to-ground (A2G) network in emergency situations, which can handle a continuous action space. Each agent independently learns an efficient computation offloading policy and monitors the status of the UAVs through Jain's fairness index. The objective is to minimize task execution delay and energy consumption while acquiring an efficient solution through adaptive learning from the dynamic A2G network. Simulation results reveal that our scheme, based on the deep deterministic policy gradient (DDPG), effectively learns the optimal policy, outperforming A3C, deep Q-network, and greedy-based offloading with local computation in stochastic dynamic environments.
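The abstract states that each agent checks UAV status through Jain's fairness index, a standard measure of how evenly a resource is shared among n nodes. A minimal sketch of that metric (the function name and all-zero handling are illustrative choices, not from the paper):

```python
def jains_fairness(allocations):
    """Jain's fairness index: J(x) = (sum x_i)^2 / (n * sum x_i^2).

    Returns 1.0 when all n nodes receive equal allocations and
    approaches 1/n when a single node monopolizes the resource.
    """
    n = len(allocations)
    total = sum(allocations)
    sum_sq = sum(x * x for x in allocations)
    if sum_sq == 0:
        return 1.0  # degenerate all-zero allocation, treated as fair
    return (total * total) / (n * sum_sq)
```

For example, `jains_fairness([1, 1, 1, 1])` yields 1.0 (perfectly balanced UAV loads), while `jains_fairness([4, 0, 0, 0])` yields 0.25, flagging a severely unbalanced cluster.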
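The scheme learns via the deep deterministic policy gradient, an actor-critic method for continuous action spaces whose stability hinges on Polyak-averaged target networks. A minimal sketch of that soft-update step on plain weight lists (the tau value and function signature are assumptions for illustration; the paper's hyperparameters are not given in the abstract):

```python
TAU = 0.005  # assumed soft-update rate; a common DDPG default, not from the paper

def soft_update(target_params, online_params, tau=TAU):
    """Polyak-average online weights into the target network:
    theta_target <- tau * theta_online + (1 - tau) * theta_target.

    Keeping the target network a slow-moving copy of the online network
    stabilizes the bootstrapped critic targets during training.
    """
    return [tau * w + (1.0 - tau) * t
            for w, t in zip(online_params, target_params)]
```

With `tau=0.1`, `soft_update([0.0], [1.0], tau=0.1)` nudges the target weight from 0.0 to 0.1, i.e. 10% of the way toward the online weight per update.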


