Journal: Communications, IET

Deep Q-learning based resource allocation in industrial wireless networks for URLLC


Abstract

Ultra-reliable low-latency communication (URLLC) is one of the promising services offered by fifth-generation technology for industrial wireless networks. Reinforcement learning, meanwhile, is gaining attention for its ability to learn from both observed and unobserved outcomes. Industrial wireless nodes (IWNs) may vary dynamically due to internal or external factors, so unnecessary redesign of the network's resource allocation should be avoided. Traditional methods are explicitly programmed, making it difficult for the network to react dynamically. To overcome this, a deep Q-learning (DQL)-based resource allocation strategy is proposed that learns the experienced trade-offs and interdependencies in the IWN. The findings indicate that the algorithm can identify the best-performing actions to improve resource allocation. Moreover, DQL provides better control, yielding an ultra-reliable, low-latency IWN. Extensive simulations show that the suggested technique distributes URLLC resources in a fair manner. In addition, the authors assess the impact of DQL's inherent learning parameters on resource allocation.
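As a rough illustration of the approach the abstract describes (not the authors' implementation), a DQL agent for channel allocation can be sketched with a small Q-network trained by semi-gradient temporal-difference updates. Everything below is hypothetical: the toy environment, the per-channel load state, the reward shaping, and the network sizes are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 4          # hypothetical number of resource blocks (actions)
STATE_DIM = N_CHANNELS  # state: current load on each channel, in [0, 1]
HIDDEN = 16             # hidden-layer width of the Q-network
GAMMA = 0.9             # discount factor
LR = 0.01               # learning rate
EPSILON = 0.1           # epsilon-greedy exploration rate

# One-hidden-layer Q-network: state -> Q-value per channel.
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_CHANNELS))

def q_values(state):
    """Forward pass; returns Q-values and the hidden activations."""
    h = np.maximum(0.0, state @ W1)  # ReLU hidden layer
    return h @ W2, h

def step(state, action):
    """Toy environment: allocating a lightly loaded channel is rewarded
    (reliable, low latency); allocating a loaded one is penalised."""
    reward = 1.0 - state[action]
    next_state = state.copy()
    next_state[action] = min(1.0, next_state[action] + 0.25)
    return next_state, reward

def train_episode(state, steps=20):
    """Run one episode of epsilon-greedy DQL with TD(0) targets."""
    global W1, W2
    for _ in range(steps):
        q, h = q_values(state)
        if rng.random() < EPSILON:
            action = int(rng.integers(N_CHANNELS))   # explore
        else:
            action = int(np.argmax(q))               # exploit
        next_state, reward = step(state, action)
        q_next, _ = q_values(next_state)
        target = reward + GAMMA * np.max(q_next)
        # Semi-gradient TD update on the chosen action's Q-value.
        td_error = target - q[action]
        grad_out = np.zeros(N_CHANNELS)
        grad_out[action] = td_error
        dW2 = np.outer(h, grad_out)
        dW1 = np.outer(state, (grad_out @ W2.T) * (h > 0))
        W2 += LR * dW2
        W1 += LR * dW1
        state = next_state
    return state

final_load = train_episode(np.zeros(N_CHANNELS))
```

Because the reward is highest for the least-loaded channel, a trained agent tends to spread allocations across channels, loosely mirroring the fairness property the abstract reports; a production system would add experience replay and a target network on top of this skeleton.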
