
Deep Reinforcement Learning-based resource allocation strategy for Energy Harvesting-Powered Cognitive Machine-to-Machine Networks


Abstract

Machine-to-Machine (M2M) communication is a promising technology that may realize the Internet of Things (IoT) in future networks. However, the massive number of devices and their concurrent access requirements cause performance degradation and enormous energy consumption. Energy Harvesting-Powered Cognitive M2M Networks (EH-CMNs) are an attractive solution: they can alleviate the escalating spectrum scarcity to guarantee Quality of Service (QoS) while reducing energy consumption to achieve Green Communication (GC), which has become an important research topic. In this paper, we investigate the resource allocation problem for EH-CMNs underlaying cellular uplinks. We aim to maximize the energy efficiency of EH-CMNs while accounting for the QoS of Human-to-Human (H2H) networks and the energy available in EH-devices. In view of the characteristics of EH-CMNs, we formulate the problem as a decentralized Discrete-time and Finite-state Markov Decision Process (DFMDP), in which each device acts as an agent and effectively learns from the environment to make allocation decisions without complete, global network information. Owing to the complexity of the problem, we propose a Deep Reinforcement Learning (DRL)-based algorithm to solve it. Numerical results validate that the proposed scheme outperforms other schemes in terms of average energy efficiency, with an acceptable convergence speed.
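The decentralized formulation described in the abstract, where each device learns an allocation policy from purely local observations, can be sketched minimally. The sketch below is hypothetical: it replaces the paper's deep Q-network with plain tabular Q-learning and a toy single-device environment (a discretized battery state, transmit-power actions, and an illustrative energy-efficiency reward), so none of the state spaces, actions, or reward values are taken from the paper.

```python
import random

# Hypothetical stand-in for the paper's DRL agent: each EH-device observes
# only its local state (here, a discretized battery level) and selects a
# transmit-power action. Tabular Q-learning is used instead of a deep
# Q-network to keep the example dependency-free. All sizes and constants
# are illustrative, not from the paper.

N_BATTERY_LEVELS = 5      # discretized harvested-energy states
N_POWER_LEVELS = 3        # candidate transmit-power actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

class EHDeviceAgent:
    """One M2M device: learns a power-allocation policy from local feedback."""
    def __init__(self):
        self.q = [[0.0] * N_POWER_LEVELS for _ in range(N_BATTERY_LEVELS)]

    def act(self, state):
        # epsilon-greedy selection over transmit-power levels
        if random.random() < EPSILON:
            return random.randrange(N_POWER_LEVELS)
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, reward, s_next):
        # standard Q-learning backup: Q(s,a) += alpha*(r + gamma*max Q(s',.) - Q(s,a))
        target = reward + GAMMA * max(self.q[s_next])
        self.q[s][a] += ALPHA * (target - self.q[s][a])

def toy_reward(battery, power):
    # Illustrative energy-efficiency proxy: higher power helps, but
    # transmitting beyond the stored energy is penalized.
    if power > battery:
        return -1.0
    return (power + 1) / (power + 2)   # diminishing returns in power

def simulate(agent, episodes=2000):
    random.seed(0)
    battery = 2
    for _ in range(episodes):
        action = agent.act(battery)
        r = toy_reward(battery, action)
        # battery drains with transmission and recharges by harvesting
        battery_next = min(N_BATTERY_LEVELS - 1,
                           max(0, battery - action) + random.randint(0, 1))
        agent.update(battery, action, r, battery_next)
        battery = battery_next
    return agent

agent = simulate(EHDeviceAgent())
```

In the paper's multi-device setting, one such agent would run on every EH-device and the reward would couple spectrum sharing with the H2H network's QoS; the single-agent loop above only illustrates the learning mechanics of the decentralized DFMDP.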

Bibliographic Information

  • Source
    Computer Communications | 2020, Issue 7 | pp. 706-717 | 12 pages
  • Author Affiliations

    Nanjing Forestry Univ, Coll Informat Sci & Technol, Nanjing 210037, Peoples R China | Univ New South Wales, Sch Elect Engn & Telecommun, Sydney NSW 2052, Australia;

    Nanjing Forestry Univ, Coll Informat Sci & Technol, Nanjing 210037, Peoples R China;

    Nanjing Forestry Univ, Coll Informat Sci & Technol, Nanjing 210037, Peoples R China;

    Univ Sheffield, Dept Elect & Elect Engn, Sheffield S10 2TN, S Yorkshire, England;

    Univ Teknol MARA, Fac Comp & Math Sci, Samarahan Campus, Kota Samarahan 94300, Malaysia;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    Energy Harvesting; M2M communication; Resource allocation; Deep Reinforcement Learning;

