International Conference on Human Centered Computing

A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing



Abstract

To improve the quality of service for users and reduce the energy consumption of the cloud computing environment, Mobile Edge Computing (MEC) is a promising paradigm that provides computing resources physically close to end devices. Nevertheless, designing a computation offloading policy that simultaneously satisfies the requirements of both the service provider and the consumer within an MEC system remains challenging. In this paper, we propose an offloading decision policy with a three-level structure for the MEC system, in contrast to the traditional two-level architecture, and formulate the offloading decision as an optimization problem that minimizes the total cost of energy consumption and delay time. Because traditional optimization methods cannot solve this dynamic system problem efficiently, and Reinforcement Learning (RL) has been applied to complex control systems in recent years, we design a deep reinforcement learning (DRL) approach that minimizes the total cost by applying a deep Q-learning algorithm, addressing the issue of an excessively large system state dimension. Simulation results show that the proposed algorithm achieves near-optimal performance compared with traditional methods.
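The abstract's core idea can be illustrated with a small sketch: an epsilon-greedy Q-learning agent that chooses among the three offloading levels (local device, edge server, cloud) to minimize a weighted sum of energy consumption and delay. This is a tabular stand-in for the paper's deep Q-network, and every constant below (rates, energies, latencies, the weight `beta`, and the task-size classes) is an assumed illustrative value, not a number taken from the paper.

```python
import numpy as np

# Illustrative three-level MEC cost model (device / edge server / cloud).
# All rates, energies, and fixed latencies are assumptions for this sketch.
ACTIONS = ("local", "edge", "cloud")
DELAY_PER_BIT = {"local": 1e-5, "edge": 2e-6, "cloud": 1e-6}   # s per bit
ENERGY_PER_BIT = {"local": 1e-6, "edge": 2e-7, "cloud": 2e-7}  # J per bit
FIXED_LATENCY = {"local": 0.0, "edge": 0.05, "cloud": 0.5}     # s round trip

def cost(action, bits, beta=0.5):
    """Total cost = beta * energy + (1 - beta) * delay (weight assumed)."""
    delay = DELAY_PER_BIT[action] * bits + FIXED_LATENCY[action]
    energy = ENERGY_PER_BIT[action] * bits
    return beta * energy + (1 - beta) * delay

def train_q(episodes=5000, alpha=0.1, eps=0.2, seed=0):
    """Tabular epsilon-greedy Q-learning over coarse task-size classes;
    the paper replaces this table with a deep Q-network so that a much
    larger state space can be handled."""
    rng = np.random.default_rng(seed)
    sizes = (1e3, 1e5, 1e6)                 # small / medium / large tasks (bits)
    q = np.zeros((len(sizes), len(ACTIONS)))
    for _ in range(episodes):
        s = rng.integers(len(sizes))        # random task arrival
        if rng.random() < eps:
            a = rng.integers(len(ACTIONS))  # explore
        else:
            a = int(np.argmin(q[s]))        # exploit the lowest learned cost
        # One-step episodes: the TD target is just the observed cost.
        q[s, a] += alpha * (cost(ACTIONS[a], sizes[s]) - q[s, a])
    return q, sizes

q, sizes = train_q()
policy = [ACTIONS[int(np.argmin(row))] for row in q]
```

Under these assumed constants, the learned policy keeps small tasks on the device (transfer overhead dominates), sends medium tasks to the edge, and offloads large tasks to the cloud, mirroring the trade-off between energy and delay that the paper's objective formalizes.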
