Conference on Global Oceans: Singapore – U.S. Gulf Coast

Deep Reinforcement Learning Based Energy Efficient Underwater Acoustic Communications


Abstract

Due to its unique channel characteristics and the difficulty of recharging batteries, energy-efficient transmission is a critical topic in underwater acoustic communications (UAC). This paper considers the transmit frequency and power selection of a single UAC link, aiming to achieve the best tradeoff between packet delivery ratio and energy consumption, i.e., to maximize energy efficiency. Unlike traditional optimization approaches that rely on system statistics, we employ deep reinforcement learning (DRL) to learn the optimal transmission strategy on the fly, without requiring any prior environmental information. Since two different types of actions must be selected, traditional DRL algorithms are not directly applicable to our problem. Motivated by this, we put forth a new DRL algorithm that decides and evaluates the two types of actions separately, referred to as the two-action-selection-based deep Q-network (TAS-DQN). Numerical results demonstrate that TAS-DQN outperforms both Q-learning and the original DQN, achieving near-optimal energy efficiency for the network.
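The abstract states only that TAS-DQN "decides and evaluates two types of actions separately" and does not detail the architecture. As an illustration, the following PyTorch sketch shows one plausible realization of that idea: a shared state encoder with two separate Q-value heads, one scoring candidate transmit frequencies and one scoring transmit power levels. All names, dimensions, and the epsilon-greedy rule below are assumptions for illustration, not the paper's verified design.

import torch
import torch.nn as nn

class TASDQN(nn.Module):
    # Hypothetical two-head Q-network: a shared trunk encodes the state,
    # and two separate linear heads evaluate the frequency actions and the
    # power actions independently.
    def __init__(self, state_dim, n_freq, n_power, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.q_freq = nn.Linear(hidden, n_freq)    # Q-values over candidate frequencies
        self.q_power = nn.Linear(hidden, n_power)  # Q-values over candidate power levels

    def forward(self, state):
        h = self.trunk(state)
        return self.q_freq(h), self.q_power(h)

def select_actions(net, state, eps):
    # Epsilon-greedy selection, with the two action types chosen separately.
    if torch.rand(1).item() < eps:
        return (int(torch.randint(net.q_freq.out_features, (1,))),
                int(torch.randint(net.q_power.out_features, (1,))))
    with torch.no_grad():
        q_f, q_p = net(state.unsqueeze(0))
    return int(q_f.argmax(dim=1)), int(q_p.argmax(dim=1))

# Example: a 4-dimensional link state, 5 candidate frequencies, 4 power levels.
net = TASDQN(state_dim=4, n_freq=5, n_power=4)
freq_idx, power_idx = select_actions(net, torch.zeros(4), eps=0.1)

In training, each head would be updated against its own temporal-difference target, with the per-step reward reflecting the delivery/energy tradeoff the abstract describes (e.g., packets delivered per unit of energy); the actual update rule and reward definition of TAS-DQN are those given in the paper.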
