IEEE Transactions on Vehicular Technology

Meta-Reinforcement Learning Based Resource Allocation for Dynamic V2X Communications


Abstract

This paper studies the allocation of shared resources between vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) links in vehicle-to-everything (V2X) communications. In existing algorithms, dynamic vehicular environments and the quantization of continuous power are the bottlenecks that prevent an effective and timely resource allocation policy. In this paper, we develop two algorithms to address these difficulties. First, we propose a deep reinforcement learning (DRL)-based resource allocation algorithm to improve the performance of both V2I and V2V links. Specifically, the algorithm uses a deep Q-network (DQN) to solve the discrete sub-band assignment problem and deep deterministic policy gradient (DDPG) to solve the continuous power allocation problem. Second, we propose a meta-based DRL algorithm to enhance the fast adaptability of the resource allocation policy in dynamic environments. Numerical results demonstrate that the proposed DRL-based algorithm significantly outperforms a DQN-based algorithm that quantizes continuous power. In addition, the proposed meta-based DRL algorithm achieves the required fast adaptation in a new environment with limited experience.
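
The abstract's hybrid design pairs a discrete learner with a continuous one: a DQN head scores sub-band choices while a DDPG actor emits a transmit power. The PyTorch sketch below illustrates that pairing only; the state size, number of sub-bands, power cap, and network widths are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the hybrid DQN + DDPG action selection described in the
# abstract. STATE_DIM, NUM_SUBBANDS, and P_MAX are assumed placeholders.
import torch
import torch.nn as nn

STATE_DIM = 16    # assumed size of a V2V agent's local observation
NUM_SUBBANDS = 4  # assumed number of shared sub-bands (one per V2I link)
P_MAX = 23.0      # assumed maximum V2V transmit power (dBm)

class DQNHead(nn.Module):
    """Q-values over the discrete sub-band choices."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_SUBBANDS))
    def forward(self, s):
        return self.net(s)

class DDPGActor(nn.Module):
    """Deterministic policy mapping a state to a power level in [0, P_MAX]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid())
    def forward(self, s):
        return P_MAX * self.net(s)

def select_action(state, dqn, actor, eps=0.1):
    """Hybrid action: (discrete sub-band index, continuous power)."""
    with torch.no_grad():
        if torch.rand(1).item() < eps:  # epsilon-greedy sub-band exploration
            band = torch.randint(NUM_SUBBANDS, (1,)).item()
        else:
            band = dqn(state).argmax().item()
        power = actor(state).item()     # deterministic continuous power
    return band, power

state = torch.randn(STATE_DIM)
print(select_action(state, DQNHead(), DDPGActor()))
```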
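For the meta-learning component, the abstract states only the goal: fast adaptation in a new environment from limited experience. The sketch below shows one common way to realize that goal, a first-order MAML-style inner loop that fine-tunes meta-learned initial weights for a few gradient steps; the paper's actual meta-training procedure may differ. It reuses DDPGActor and STATE_DIM from the previous sketch, and the loss function is a toy stand-in, not the paper's objective.

```python
# Hedged sketch of fast adaptation from a meta-learned initialization
# (first-order MAML-style inner loop; an illustrative assumption).
import copy
import torch

def fast_adapt(meta_actor, batch, loss_fn, steps=3, inner_lr=1e-3):
    """Clone the meta-learned initialization and take a few inner-loop
    gradient steps on a small batch of experience from the new environment."""
    adapted = copy.deepcopy(meta_actor)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(adapted, batch)  # task-specific DRL loss (stand-in below)
        loss.backward()
        opt.step()
    return adapted

# Toy usage with DDPGActor from the previous sketch: the loss here is a
# placeholder chosen only to demonstrate the adaptation mechanics.
states = torch.randn(32, STATE_DIM)
toy_loss = lambda actor, s: -actor(s).mean()
adapted_actor = fast_adapt(DDPGActor(), states, toy_loss)
```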
