Reward Function Learning for Q-learning-Based Geographic Routing Protocol

IEEE Communications Letters


Abstract

This letter proposes a new scheme, Reward Function Learning for Q-learning-based Geographic routing (RFLQGeo), to improve the performance and efficiency of unmanned robotic networks (URNs). The high mobility of robotic nodes and changing environments pose challenges for geographic routing protocols; when multiple features must be considered simultaneously, routing becomes even harder. Q-learning-based geographic routing protocols (QGeo) with a preconfigured reward function encumber the learning process and increase network communication overhead. To solve these problems, we design a routing scheme based on the concept of inverse reinforcement learning that learns the reward function in real time. We evaluate the performance of RFLQGeo in comparison with other protocols. The results indicate that RFLQGeo has a strong ability to organize multiple features, improve network performance, and reduce communication overhead.
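The abstract contrasts a preconfigured reward (QGeo) with a reward learned online (RFLQGeo). The sketch below illustrates that distinction only, not the authors' actual algorithm: the reward is a weighted sum of per-link features, the weights are nudged by a crude inverse-RL-style step toward features of an observed good next hop, and the Q-learning update then consumes the learned reward. All names, feature values, and constants (`ALPHA`, `GAMMA`, `lr`) are illustrative assumptions.

```python
ALPHA, GAMMA = 0.5, 0.9  # Q-learning step size and discount (assumed values)

def reward(features, weights):
    """Reward as a weighted sum of link features (e.g. geographic
    progress, link quality); learned online rather than preconfigured."""
    return sum(w * f for w, f in zip(weights, features))

def q_update(q, state_action, features, weights, q_next_max):
    """One standard Q-learning update using the currently learned reward."""
    old = q.get(state_action, 0.0)
    r = reward(features, weights)
    q[state_action] = old + ALPHA * (r + GAMMA * q_next_max - old)
    return q[state_action]

def weight_update(weights, good_feat, picked_feat, lr=0.1):
    """Inverse-RL-flavoured step (illustrative): shift the reward weights
    so an observed good next hop scores higher than the hop the current
    reward function preferred."""
    return [w + lr * (g - p) for w, g, p in zip(weights, good_feat, picked_feat)]

# Tiny demo with two link features per candidate next hop.
weights = [0.5, 0.5]   # initial, untrained feature weights
good = [0.9, 0.2]      # features of an observed good next hop
picked = [0.4, 0.8]    # features of the hop the current reward preferred

new_w = weight_update(weights, good, picked)
old_margin = reward(good, weights) - reward(picked, weights)
new_margin = reward(good, new_w) - reward(picked, new_w)

# After the weight step, the learned reward separates the good hop
# from the previously preferred one by a larger margin.
q = {}
q_update(q, ("n1", "n2"), good, new_w, q_next_max=1.0)
```

In this toy run the margin between the good and the previously preferred hop grows after a single weight step, which is the mechanism the letter exploits: the reward shaping itself adapts online instead of being fixed at deployment.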


