IEEE International Conference on Communication Technology

Multi-user Computation Offloading for Mobile Edge Computing: A Deep Reinforcement Learning and Game Theory Approach



Abstract

With the development of the Internet of Things (IoT) and the Internet of Everything (IoE), mobile edge computing (MEC) has been proposed to provide universal and flexible computing services at the edge of the wireless access network. To make use of the services MEC provides, it is essential to make efficient and reasonable offloading decisions. In this paper, we study the problem of interference-aware multi-user computation offloading in MEC. We formulate a multi-user computation offloading game and analyze the existence of a Nash equilibrium. We then design a computation offloading algorithm, Nash-Q-learning, based on the Nash equilibrium and reinforcement learning, to minimize the system overhead. Furthermore, to prevent Nash-Q-learning from suffering from the curse of dimensionality, we obtain a deep reinforcement learning algorithm, Nash-DQN, by adding neural networks to Nash-Q-learning. The performance of the proposed algorithms is compared with other algorithms by simulation. The simulation results show that the proposed algorithms outperform the other multi-user computation offloading algorithms.
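The paper's Nash-Q-learning and Nash-DQN algorithms are not reproduced here, but the game structure the abstract describes can be illustrated with a minimal sketch. The cost model below (a fixed local-execution cost per user, and an offloading cost that grows with the number of simultaneous offloaders, standing in for wireless interference) and all numeric values are illustrative assumptions, not taken from the paper. Iterated best response is used to reach a pure-strategy Nash equilibrium, i.e. a joint offloading decision from which no single user can lower its own cost by unilaterally switching:

```python
def user_cost(action, n_offloaders, local_cost, base_offload_cost):
    """Cost for one user under an assumed interference model:
    local execution has a fixed cost; offloading cost scales with
    the number of users sharing the wireless channel."""
    if action == 0:  # compute locally
        return local_cost
    return base_offload_cost * n_offloaders  # offload to the MEC server

def best_response_offloading(local_costs, base_offload_cost, max_rounds=100):
    """Iterated best response for a toy multi-user offloading game.
    Returns actions (0 = local, 1 = offload); when a full round passes
    with no user changing its decision, no profitable unilateral
    deviation exists, i.e. the profile is a Nash equilibrium."""
    n = len(local_costs)
    actions = [0] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            others = sum(actions) - actions[i]  # offloaders besides user i
            c_local = user_cost(0, others + 1, local_costs[i], base_offload_cost)
            c_off = user_cost(1, others + 1, local_costs[i], base_offload_cost)
            best = 0 if c_local <= c_off else 1
            if best != actions[i]:
                actions[i] = best
                changed = True
        if not changed:
            return actions  # Nash equilibrium reached
    return actions

# Three users: one with expensive local computation, two with cheap tasks.
# Only the first finds offloading worthwhile once interference is counted.
decisions = best_response_offloading([10, 4, 1], base_offload_cost=3)
print(decisions)
```

In the paper's approach, each user instead learns such equilibrium decisions from experience (tabular Nash-Q-learning, then Nash-DQN with neural-network function approximation); the fixed cost model above merely makes the equilibrium concept concrete.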
