IEEE Vehicular Technology Conference

Distributed Q-learning for Interference Control in OFDMA-based Femtocell Networks

Abstract

This paper proposes a self-organized power allocation technique to solve the interference problem caused by a femtocell network operating in the same channel as an orthogonal frequency division multiple access cellular network. We model the femto network as a multi-agent system where the different femto base stations are the agents in charge of managing the radio resources to be allocated to their femto users. We propose a form of real-time multi-agent reinforcement learning, known as decentralized Q-learning, to manage the interference generated to macro-users. By directly interacting with the surrounding environment in a distributed fashion, the multi-agent system is able to learn an optimal policy to solve the interference problem. Simulation results show that the introduction of the femto network increases the system capacity without decreasing the capacity of the macro network.
