
Deep Reinforcement Learning-based Computation Offloading in Vehicular Networks



Abstract

With the rapid development of 5G communications and the Internet of Things (IoT), vehicular networks have enriched people’s lives with abundant applications. Since most of these applications are computation-intensive and delay-sensitive, it is difficult to guarantee low latency and low energy consumption by relying on vehicles alone. In addition, low latency poses a great challenge to cloud computing. Therefore, Mobile Edge Computing (MEC) has emerged as a promising paradigm for vehicular networks: it relieves the pressure on vehicles by offloading tasks to edge servers. However, existing studies mainly consider a constant-channel scenario and ignore load balancing across the edge servers in the system. In this paper, deep reinforcement learning is adopted to build an intelligent offloading system that achieves load balancing in a time-varying channel scenario. First, we introduce a communication model and a computation model. Then the offloading strategy is formulated as a joint optimization problem. Furthermore, a distributed deep deterministic policy gradient (DDPG) algorithm based on prioritized experience replay, which takes load balancing into account, is proposed. Finally, performance evaluations illustrate the effectiveness and superiority of the proposed algorithm.
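The abstract mentions that the proposed DDPG algorithm relies on prioritized experience replay, i.e. replaying transitions with probability proportional to their priority (typically the TD error) rather than uniformly. As a rough illustration only, and not the authors' implementation, the core buffer mechanics might be sketched like this; the class name, proportional scheme, and `alpha` exponent are assumptions based on the standard prioritized-replay formulation:

```python
import random


class PrioritizedReplayBuffer:
    """Hypothetical sketch of proportional prioritized experience replay.

    Transitions are sampled with probability proportional to
    priority**alpha, so high-error experiences are replayed more often.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha        # 0 = uniform sampling, 1 = fully proportional
        self.buffer = []          # stored transitions
        self.priorities = []      # one scaled priority per transition
        self.pos = 0              # next write index (circular overwrite)

    def add(self, transition, priority=1.0):
        scaled = priority ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(scaled)
        else:
            # Overwrite the oldest transition once the buffer is full.
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = scaled
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices proportionally to the stored priorities.
        indices = random.choices(
            range(len(self.buffer)), weights=self.priorities, k=batch_size
        )
        return [self.buffer[i] for i in indices], indices

    def update_priorities(self, indices, new_priorities):
        # After a learning step, refresh priorities (e.g. with new TD errors).
        for i, p in zip(indices, new_priorities):
            self.priorities[i] = p ** self.alpha
```

In a DDPG training loop, each sampled batch would update the critic, and the resulting TD errors would be fed back through `update_priorities` so that poorly predicted transitions are revisited sooner.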

