
Strategic Learning of Cross-layer Design for Channel Access and Transmission Rate Adaptation in Energy-constrained Cognitive Radio Networks

Journal of Information and Computational Science
Abstract

In this paper, we investigate the cross-layer strategic design of joint channel access and transmission rate adaptation in Cognitive Radio (CR) networks. Our objective is to minimize a cost function that jointly accounts for the energy consumption in the physical layer and the packet loss in the data link layer. If the dynamic, time-varying nature of the CR environment is completely known, the problem can be formulated as a Markov Decision Process (MDP). For an unknown CR environment, however, CR users apply multi-agent Reinforcement Learning (RL) to learn the channel access and transmission rate strategy. The multi-agent RL is applied in a decentralized manner within the framework of Correlated Equilibrium Q-learning (CE-Q learning), and its convergence is guaranteed. Furthermore, by assigning different values to the weighting parameter in the cost function, we can adjust the tradeoff between energy efficiency and packet loss rate. Simulation results show that the performance of the multi-agent RL approaches that of the MDP solution, and that the parameter in the cost function can efficiently adjust the tradeoff between energy consumption and packet loss rate.
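The abstract describes the learning approach only at a high level. As a rough illustration of the kind of update involved, below is a minimal single-agent Q-learning sketch over joint (channel, transmission-rate) actions with a weighted energy/packet-loss cost. The function names, the `env` interface, and the weight `beta` are assumptions made for illustration; they are not taken from the paper, and this sketch omits the correlated-equilibrium coordination among multiple agents that the paper's CE-Q learning provides.

```python
import random
from collections import defaultdict

# Hypothetical per-step cost: trades off physical-layer energy use against
# data-link-layer packet loss via an assumed weight `beta` in [0, 1].
def step_cost(energy_consumed, packets_lost, beta=0.5):
    return beta * energy_consumed + (1.0 - beta) * packets_lost

# Minimal single-agent Q-learning loop over joint (channel, rate) actions,
# standing in for the decentralized CE-Q learning described in the abstract.
# `env` is an assumed interface: reset() -> state,
# step(action) -> (next_state, energy_consumed, packets_lost, done).
def q_learning(env, channels, rates, episodes=1000, alpha=0.1, gamma=0.9, eps=0.1):
    actions = [(c, r) for c in channels for r in rates]
    Q = defaultdict(float)                      # Q[(state, action)] -> estimated cost-to-go
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            if random.random() < eps:            # epsilon-greedy exploration
                action = random.choice(actions)
            else:                                 # exploit: pick the lowest-cost action
                action = min(actions, key=lambda a: Q[(state, a)])
            next_state, energy, lost, done = env.step(action)
            cost = step_cost(energy, lost)
            best_next = min(Q[(next_state, a)] for a in actions)
            # Temporal-difference update toward the Bellman target (cost minimization).
            Q[(state, action)] += alpha * (cost + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

In this sketch, smaller `beta` weights packet loss more heavily and larger `beta` weights energy consumption more heavily, mirroring the tradeoff parameter discussed in the abstract.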
