MATEC Web of Conferences

Reinforcement learning-based link adaptation in long delayed underwater acoustic channel

Abstract

In this paper, we apply reinforcement learning, a significant branch of machine learning, to formulate an optimal self-learning strategy for interacting with an unknown and dynamically varying underwater channel. The dynamic and volatile nature of the underwater channel makes it impractical to rely on prior knowledge of the environment. Reinforcement learning lets the sender select the optimal parameters for transmitting data packets and achieve better throughput without any prior environmental information. The slow speed of sound underwater means that the delay in returning a packet acknowledgement from the receiver to the sender is substantial, which degrades the convergence speed of the reinforcement learning algorithm. Since reinforcement learning requires timely acknowledgement feedback from the receiver, in this paper we combine a juggling-like ARQ (Automatic Repeat Request) mechanism with reinforcement learning to mitigate the long-delayed reward feedback problem. The simulation is carried out in OPNET.
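The abstract does not spell out the learning algorithm, reward design, or ARQ details, so the following is only a minimal illustrative sketch, not the authors' method. It assumes an epsilon-greedy bandit choosing among a few hypothetical transmission rates (RATES), a toy per-rate loss model (loss_at_rate), and a fixed acknowledgement delay (prop_delay); in the spirit of the juggling-like ARQ, the sender keeps several packets in flight and applies each reward only when its delayed ACK returns.

```python
import random
from collections import deque

# Hypothetical candidate transmission rates (bits per slot) the sender can pick from.
RATES = [200, 500, 1000, 2000]

class DelayedRewardBandit:
    """Epsilon-greedy bandit whose reward (delivered bits) arrives after a long ACK delay."""

    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.q = [0.0] * n_arms          # running estimate of expected throughput per rate
        self.n = [0] * n_arms            # number of rewarded transmissions per rate

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best current estimate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, arm, reward):
        # Incremental mean update, applied only when the (delayed) ACK/NACK arrives.
        self.n[arm] += 1
        self.q[arm] += (reward - self.q[arm]) / self.n[arm]


def simulate(packets=2000, prop_delay=5, loss_at_rate=(0.02, 0.1, 0.3, 0.6)):
    """Toy channel: higher rates carry more bits but are lost more often.

    prop_delay models the round-trip acknowledgement delay in slots; the sender
    keeps transmitting (juggling several outstanding packets) instead of
    stalling until each ACK returns.
    """
    agent = DelayedRewardBandit(len(RATES))
    in_flight = deque()                  # (due_slot, arm, reward) awaiting ACK
    total_bits = 0

    for slot in range(packets):
        # Deliver any ACKs whose round trip has completed, then learn from them.
        while in_flight and in_flight[0][0] <= slot:
            _, arm, reward = in_flight.popleft()
            agent.update(arm, reward)
            total_bits += reward

        # Transmit the next packet immediately with the currently preferred rate.
        arm = agent.select()
        delivered = random.random() > loss_at_rate[arm]
        reward = RATES[arm] if delivered else 0   # throughput contribution of this packet
        in_flight.append((slot + prop_delay, arm, reward))

    return agent.q, total_bits


if __name__ == "__main__":
    estimates, throughput = simulate()
    print("estimated throughput per rate:", [round(v, 1) for v in estimates])
    print("total delivered bits:", throughput)
```

Increasing prop_delay in this toy model delays every update relative to the decisions that produced it, which slows convergence of the estimates; keeping packets in flight at least prevents the sender from idling while it waits, which mirrors the motivation for combining the ARQ mechanism with the learner.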