Intelligent and Converged Networks

Deep reinforcement learning based computation offloading and resource allocation for low-latency fog radio access networks

Abstract

Fog Radio Access Networks (F-RANs) have been considered a groundbreaking technique for supporting Internet of Things services by leveraging edge caching and edge computing. However, current contributions to computation offloading and resource allocation are inefficient; moreover, they consider only the static communication mode, and the increasing demand for low-latency services and high throughput poses tremendous challenges in F-RANs. A joint problem of mode selection, resource allocation, and power allocation is formulated to minimize latency under various constraints. We propose a Deep Reinforcement Learning (DRL) based joint computation offloading and resource allocation scheme that achieves a suboptimal solution in F-RANs. The core idea of the proposal is that the DRL controller intelligently decides whether to process the generated computation task locally at the device level or offload it to a fog access point or cloud server, and then allocates an optimal amount of computation and power resources based on the serving tier. Simulation results show that the proposed approach significantly reduces latency and increases throughput in the system.
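The abstract only outlines the controller's decision loop, so the following is a minimal sketch of that idea, not the paper's algorithm: a tabular Q-learning agent (standing in for the deep RL controller) picks a serving tier (local, fog access point, or cloud) together with a transmit-power level, and learns from a reward equal to negative latency. The state discretization, action set, and latency model below are illustrative assumptions, not taken from the paper.

import numpy as np

# Hypothetical discrete action space: (serving tier, power level).
TIERS = ["local", "fog_ap", "cloud"]          # where the task is processed
POWER_LEVELS = [0.1, 0.5, 1.0]                # fraction of max transmit power
ACTIONS = [(t, p) for t in TIERS for p in POWER_LEVELS]

N_STATES = 16          # e.g., quantized (task size, channel quality) pairs; assumed
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1             # learning rate, discount, exploration

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def latency(state, action):
    """Toy latency model (not the paper's): offloading adds a transmission
    delay that shrinks with transmit power; local execution adds a larger
    computation delay that grows with task size."""
    tier, power = action
    task_size = 1.0 + state % 4                      # proxy for task size
    if tier == "local":
        return 2.0 * task_size                       # limited local CPU
    tx_delay = task_size / (1.0 + 5.0 * power)       # higher power -> faster uplink
    exec_delay = 0.3 * task_size if tier == "cloud" else 0.6 * task_size
    return tx_delay + exec_delay

def step(state, a_idx):
    """Reward is negative latency, so minimizing latency = maximizing reward."""
    delay = latency(state, ACTIONS[a_idx])
    next_state = rng.integers(N_STATES)              # i.i.d. task arrivals (toy)
    return -delay, next_state

state = rng.integers(N_STATES)
for _ in range(20000):
    # epsilon-greedy selection over (tier, power) pairs
    if rng.random() < EPS:
        a = rng.integers(len(ACTIONS))
    else:
        a = int(np.argmax(Q[state]))
    r, nxt = step(state, a)
    # standard Q-learning update toward r + gamma * max_a' Q(s', a')
    Q[state, a] += ALPHA * (r + GAMMA * Q[nxt].max() - Q[state, a])
    state = nxt

# Greedy policy after training: learned offloading mode and power per state.
for s in range(4):
    tier, power = ACTIONS[int(np.argmax(Q[s]))]
    print(f"state {s}: offload to {tier} with power level {power}")

In the paper's setting, the Q-table would be replaced by a deep network over a much richer state (channel conditions, queue and cache status, available computation resources), but the mode-selection and power-allocation decision structure is the same as in this sketch.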
