Communications, IET

Optimised Q-learning for WiFi offloading in dense cellular networks



Abstract

WiFi traffic offloading is becoming especially appealing with the arrival of ultra-dense cellular networks. However, both the WiFi offloading decision and the WiFi access point (W-AP) selection must be studied carefully so as not to degrade the offloaded users' experience. Here, a new reinforcement-learning framework is presented. The authors propose a distributed Q-learning algorithm in which each cellular user learns its local environment and, after reaching convergence, selects the best base station (macro-BS or W-AP). They introduce a new reward parameter that accounts for the load of each detected W-AP, the duration of the vertical handover, the offered gain, and the achieved signal-to-interference-plus-noise ratio (SINR). Under this Q-learning scheme, each user decides whether or not to join the WiFi offloading, depending on the reward received from its environment and on its previous learning. In addition, since the AP's load value weighs heavily in the reward parameter, the optimal weight given to the channel load is derived under a quality-of-service constraint. Simulation results show that the proposed Q-learning-based scheme outperforms a common WiFi offloading scheme in terms of cellular residence time.
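The decision loop described above can be sketched in Python. This is an illustrative, heavily simplified toy, not the authors' implementation: the reward shape, its weights (`w_load` and the hyper-parameters), and the per-station measurements are all invented for the example, and the paper's distributed Q-learning is reduced here to a stateless update for a single user choosing among one macro-BS and two W-APs.

```python
import random

random.seed(0)  # reproducible run

# Illustrative hyper-parameters -- the paper's actual values are not given here.
ALPHA, EPSILON = 0.1, 0.2   # learning rate, exploration probability

def reward(sinr, load, handover_time, gain, w_load=0.5):
    """Made-up reward shape: grows with SINR and offered gain, and is
    penalised by the W-AP channel load (weighted by w_load, the weight the
    authors optimise under a QoS constraint) and by the vertical-handover
    duration."""
    return sinr + gain - w_load * load - handover_time

def select_station(q, stations):
    """Epsilon-greedy choice among the detected stations."""
    if random.random() < EPSILON:
        return random.choice(stations)
    return max(stations, key=lambda s: q[s])

# One user, one macro-BS, two W-APs; per-station measurements are invented:
# (SINR, channel load, vertical-handover duration, offered gain)
env = {
    "macro_bs": (8.0, 0.2, 0.0, 1.0),
    "wap_1":    (12.0, 0.7, 1.5, 3.0),
    "wap_2":    (10.0, 0.3, 1.5, 3.0),
}
stations = list(env)
q = {s: 0.0 for s in stations}

for _ in range(2000):
    s = select_station(q, stations)
    # Stateless Q-update: move the estimate toward the observed reward.
    q[s] += ALPHA * (reward(*env[s]) - q[s])

best = max(q, key=q.get)
```

With these invented measurements the user starts on the macro-BS (all Q-values tie at zero) and, through exploration, discovers that `wap_1` yields the highest reward despite its higher load and handover cost, so its Q-value eventually dominates and the user offloads to WiFi.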
