IEEE International Conference on Network Infrastructure and Digital Content

A self-organizing resource allocation strategy based on Q-learning approach in ultra-dense networks



Abstract

In ultra-dense heterogeneous cellular networks, as the density of low-power base stations (BSs) increases, inter-cell interference (ICI) can become extremely strong when all BSs reuse the same time-frequency resources. In this paper, after proving that allocating orthogonal frequency sub-bands to adjacent cells yields higher throughput than reusing the whole bandwidth, we propose a multi-agent Q-learning based resource allocation (QLRA) approach as an enhanced solution for maximizing system performance. For QLRA, we consider two learning paradigms: a distributed Q-learning (DQL) algorithm and a centralized Q-learning (CQL) algorithm. In the DQL scenario, all small cells learn independently without sharing any information, while in the CQL scenario, interaction between different agents is taken into account and resources are scheduled in a centralized way. Simulation results show that both QLRA scenarios can learn a near-ideal resource allocation strategy automatically and achieve better system throughput. Moreover, by scheduling resources centrally, the CQL scenario improves system throughput even further.
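To illustrate the distributed (DQL) paradigm the abstract describes, the following is a minimal toy sketch, not the paper's actual algorithm: each small cell is an independent stateless Q-learner that epsilon-greedily picks one of a few orthogonal sub-bands, and is rewarded when no adjacent cell reuses its band (i.e., no inter-cell interference). The linear cell topology, reward shape, and all hyperparameters (`ALPHA`, `GAMMA`, `EPS`, episode count) are illustrative assumptions, not values from the paper.

```python
import random

# Toy model (assumed): N_CELLS small cells on a line, each choosing one
# of N_BANDS orthogonal frequency sub-bands. Reward = 1 if neither
# neighbor picked the same band (no ICI), else 0.
N_CELLS, N_BANDS = 6, 3
ALPHA, GAMMA, EPS = 0.1, 0.0, 0.1  # GAMMA=0: stateless, bandit-style Q-learning
EPISODES = 5000

def train(seed=0):
    rng = random.Random(seed)
    # One Q-table per agent (DQL: no information sharing between cells).
    # Q[i][a] = cell i's estimated reward for transmitting on band a.
    Q = [[0.0] * N_BANDS for _ in range(N_CELLS)]
    for _ in range(EPISODES):
        # Each agent selects a band epsilon-greedily from its own Q-table.
        acts = [rng.randrange(N_BANDS) if rng.random() < EPS
                else max(range(N_BANDS), key=lambda a: Q[i][a])
                for i in range(N_CELLS)]
        for i in range(N_CELLS):
            neighbors = [j for j in (i - 1, i + 1) if 0 <= j < N_CELLS]
            r = 1.0 if all(acts[j] != acts[i] for j in neighbors) else 0.0
            # Stateless Q update: Q <- Q + alpha * (reward - Q).
            Q[i][acts[i]] += ALPHA * (r - Q[i][acts[i]])
    # Greedy policy learned by each cell.
    return [max(range(N_BANDS), key=lambda a: Q[i][a]) for i in range(N_CELLS)]

if __name__ == "__main__":
    bands = train()
    clashes = sum(bands[i] == bands[i + 1] for i in range(N_CELLS - 1))
    print("per-cell bands:", bands, "adjacent clashes:", clashes)
```

The CQL variant the paper compares against would instead maintain a single Q-table (or shared state) and assign all cells' bands jointly, which is what lets it resolve the residual conflicts that independent learners can leave behind.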
