Venue: IEEE Global Communications Conference

A Reinforcement Learning Framework for QoS-Driven Radio Resource Scheduler



Abstract

In cellular communication systems, radio resources are allocated to users by the MAC scheduler, which typically runs at the base station (BS). The task of the scheduler is to meet the quality-of-service (QoS) requirements of each data flow while maximizing system throughput and achieving a desired level of fairness amongst users. Traditional schedulers use handcrafted metrics and are meticulously tuned to achieve a delicate balance between multiple, often conflicting objectives. The diverse QoS requirements of 5G networks further complicate the design of traditional schedulers. In this paper, we propose a novel reinforcement-learning-based scheduler that learns an allocation policy to simultaneously optimize multiple objectives. Our approach allows network operators to customize their requirements by assigning priority values to QoS classes. In addition, we adopt a flexible neural-network architecture that can easily adapt to a varying number of flows, drastically simplifying training and thus rendering it viable for practical implementation in constrained systems. We demonstrate via simulations that our algorithm outperforms conventional heuristics such as M-LWDF, EXP-RULE and LOG-RULE, and is robust to changes in the radio environment and traffic patterns.
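For context, the M-LWDF baseline mentioned above ranks flows by the product of a delay-sensitivity weight, the head-of-line delay, and the ratio of instantaneous to average rate. The sketch below illustrates that textbook metric; the function and flow names are ours, not from the paper, and the numbers are purely illustrative.

```python
import math

def mlwdf_priority(rate, avg_rate, hol_delay, deadline, viol_prob):
    """Textbook M-LWDF metric: a_i * (r_i / R_bar_i) * W_i,
    with a_i = -log(delta_i) / tau_i, where tau_i is the delay
    deadline, delta_i the tolerated deadline-violation probability,
    W_i the head-of-line delay, r_i the achievable instantaneous
    rate, and R_bar_i the flow's moving-average throughput."""
    a = -math.log(viol_prob) / deadline
    return a * (rate / avg_rate) * hol_delay

def schedule(flows):
    # Allocate the resource to the flow with the highest metric.
    return max(flows, key=lambda f: mlwdf_priority(**f))

# Illustrative example: two flows with a 100 ms deadline and 5%
# tolerated violation probability.
flows = [
    {"rate": 2e6, "avg_rate": 1e6, "hol_delay": 0.010,
     "deadline": 0.1, "viol_prob": 0.05},   # good channel, short queue
    {"rate": 1e6, "avg_rate": 1e6, "hol_delay": 0.050,
     "deadline": 0.1, "viol_prob": 0.05},   # average channel, long wait
]
# The second flow wins: its head-of-line delay outweighs the
# first flow's channel advantage.
```

Handcrafted metrics of this shape must be re-tuned whenever traffic or QoS mixes change, which is the gap the learned scheduler is meant to close.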

