IEICE Technical Report. Signal Processing (電子情報通信学会技術研究報告. 信号処理) > Learning-based Cell Selection for Open-access Femtocell Networks

Learning-based Cell Selection for Open-access Femtocell Networks



Abstract

In an open-access femtocell network, nearby cellular users (Macro Users: MUs) may join one of the neighboring femtocells through a handover procedure to enhance their capacity. To avoid undesirable effects after a handover, such as the ping-pong effect, the effectiveness of the cell selection method must be ensured. Previous work on this problem relies on instantaneous measurements of one or more metrics, e.g., capacity, received signal strength (RSS), or load. One problem with such approaches is that presently measured performance does not necessarily reflect future performance, hence the need for a novel cell selection scheme that can anticipate performance over a longer horizon. In this report, we propose a Reinforcement Learning (RL) Q-learning algorithm as a model-free solution to the cell selection problem in a non-stationary femtocell network. During a handover decision, the MU uses the RL algorithm to estimate the efficiency of neighboring femtocells through trial-and-error interaction with its environment. The simulation results show the benefits of learning, in terms of gained capacity and number of handovers, over different selection methods from the literature (least loaded (LL), random, and capacity-based).
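The abstract's approach can be illustrated with a minimal Q-learning sketch. This is not the paper's exact model: the class name, reward definition (achieved capacity after a handover), parameter values, and the stateless (single-state) simplification are all assumptions made for illustration.

```python
import random

class CellSelector:
    """Illustrative Q-learning agent for choosing among candidate femtocells."""

    def __init__(self, cells, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.cells = cells                 # candidate femtocell IDs
        self.alpha = alpha                 # learning rate
        self.gamma = gamma                 # discount factor
        self.epsilon = epsilon             # exploration probability
        self.q = {c: 0.0 for c in cells}   # one Q-value per cell (stateless)

    def select(self):
        # Epsilon-greedy: occasionally explore a random cell,
        # otherwise hand over to the best-known cell.
        if random.random() < self.epsilon:
            return random.choice(self.cells)
        return max(self.cells, key=lambda c: self.q[c])

    def update(self, cell, reward):
        # Standard Q-learning update; `reward` could be the capacity
        # observed after handing over to `cell`.
        best_next = max(self.q.values())
        self.q[cell] += self.alpha * (reward + self.gamma * best_next - self.q[cell])
```

Through repeated select/update interactions, the MU's Q-values come to reflect long-run cell performance rather than a single instantaneous measurement, which is what distinguishes this approach from the LL, random, and capacity-based baselines mentioned above.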
