IEEE Transactions on Vehicular Technology

Deep-Reinforcement-Learning-Based Optimization for Cache-Enabled Opportunistic Interference Alignment Wireless Networks


Abstract

Both caching and interference alignment (IA) are promising techniques for next-generation wireless networks. Nevertheless, most existing works on cache-enabled IA wireless networks assume an invariant channel, which is unrealistic given the time-varying nature of practical wireless environments. In this paper, we consider realistic time-varying channels. Specifically, the channel is formulated as a finite-state Markov channel (FSMC). The complexity of the system is very high under realistic FSMC models. Therefore, we propose a novel deep reinforcement learning approach, an advanced reinforcement learning algorithm that uses a deep Q network to approximate the action-value (Q) function. We implement the proposed deep reinforcement learning scheme in Google TensorFlow to obtain the optimal IA user selection policy in cache-enabled opportunistic IA wireless networks. Simulation results show that the proposed approach significantly improves the performance of cache-enabled opportunistic IA networks in terms of the network's sum rate and energy efficiency.
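The sketch below is not the authors' code; it is a minimal illustration, under assumed problem sizes, of the two ingredients the abstract names: a finite-state Markov channel whose quantized SNR level evolves via a row-stochastic transition matrix, and a tf.keras deep Q network that maps the observed channel-state vector to a Q value for each candidate IA user subset, with epsilon-greedy selection and a one-step Q-learning update. All sizes, the transition matrix P, the reward interface, and the helper names (step_channels, select_action, train_step) are hypothetical.

```python
import numpy as np
import tensorflow as tf
from itertools import combinations

# Illustrative sizes only (not taken from the paper).
N_USERS = 6    # candidate users in the cache-enabled IA network
N_SELECT = 3   # users scheduled for interference alignment per slot
N_STATES = 4   # quantized SNR levels of the FSMC

# FSMC: each user's channel hops between quantized SNR levels;
# transitions to adjacent levels dominate, as is typical for FSMC models.
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.2, 0.6, 0.2, 0.0],
              [0.0, 0.2, 0.6, 0.2],
              [0.0, 0.0, 0.3, 0.7]])

def step_channels(states, rng):
    """Advance each user's FSMC channel state by one Markov transition."""
    return np.array([rng.choice(N_STATES, p=P[s]) for s in states])

# Actions: which subset of users to select for IA (all C(6,3) = 20 subsets).
ACTIONS = list(combinations(range(N_USERS), N_SELECT))

# Deep Q network: channel-state vector in, one Q value per action out.
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(ACTIONS)),
])
optimizer = tf.keras.optimizers.Adam(1e-3)
GAMMA = 0.9  # discount factor (assumed)

def select_action(state, epsilon, rng):
    """Epsilon-greedy IA user selection from the DQN's Q values."""
    if rng.random() < epsilon:
        return int(rng.integers(len(ACTIONS)))
    q = q_net(state[None, :].astype(np.float32))
    return int(tf.argmax(q[0]))

def train_step(state, action, reward, next_state):
    """One Q-learning update: target y = r + gamma * max_a' Q(s', a')."""
    y = reward + GAMMA * tf.reduce_max(
        q_net(next_state[None, :].astype(np.float32)))
    with tf.GradientTape() as tape:
        q_sa = q_net(state[None, :].astype(np.float32))[0, action]
        loss = tf.square(y - q_sa)
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
```

In a full DQN as described in the abstract, the per-slot reward would be the network's sum rate (or energy efficiency) achieved by the selected user subset, and the updates would typically use experience replay and a separate target network; those refinements are omitted here for brevity.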
