IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans

Using feedback in collaborative reinforcement learning to adaptively optimize MANET routing



Abstract

Designers face many system optimization problems when building distributed systems. Traditionally, designers have relied on optimization techniques that require either prior knowledge or centrally managed runtime knowledge of the system's environment, but such techniques are not viable in dynamic networks where topology, resource, and node availability are subject to frequent and unpredictable change. To address this problem, we propose collaborative reinforcement learning (CRL) as a technique that enables groups of reinforcement learning agents to solve system optimization problems online in dynamic, decentralized networks. We evaluate an implementation of CRL in a routing protocol for mobile ad hoc networks, called SAMPLE. Simulation results show how feedback in the selection of links by routing agents enables SAMPLE to adapt and optimize its routing behavior to varying network conditions and properties, resulting in optimization of network throughput. In the experiments, SAMPLE displays emergent properties such as traffic flows that exploit stable routes and reroute around areas of wireless interference or congestion. SAMPLE is an example of a complex adaptive distributed system.
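The abstract's central mechanism, routing agents that update per-link cost estimates from neighbor feedback and then pick next hops accordingly, can be sketched with a simple Q-routing-style agent. This is a minimal illustrative sketch of that idea, not the paper's actual CRL model or the SAMPLE protocol; the class, method, and parameter names here are assumptions for illustration.

```python
import random


class QRoutingAgent:
    """One node's routing agent. It learns per-destination link costs
    (Q-values) from delivery-cost feedback reported by its neighbors."""

    def __init__(self, neighbors, destinations, alpha=0.5, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability
        # Q[d][n]: estimated cost of delivering a packet to destination d
        # when forwarding via neighbor n (optimistically initialized).
        self.Q = {d: {n: 1.0 for n in neighbors} for d in destinations}

    def choose_next_hop(self, dest):
        """Epsilon-greedy link selection: usually the cheapest known
        neighbor, occasionally a random one so the agent keeps probing
        a topology that may have changed."""
        if random.random() < self.epsilon:
            return random.choice(list(self.Q[dest]))
        return min(self.Q[dest], key=self.Q[dest].get)

    def feedback(self, dest, neighbor, link_cost, neighbor_estimate):
        """Incorporate feedback: the neighbor reports its own best
        remaining cost to dest; fold link cost + that estimate into
        our Q-value for forwarding via this neighbor."""
        target = link_cost + neighbor_estimate
        self.Q[dest][neighbor] += self.alpha * (target - self.Q[dest][neighbor])
```

With repeated feedback, estimates for good links drop and traffic concentrates on them, while exploration lets the agent reroute when a previously cheap link degrades, the adaptive behavior the abstract attributes to SAMPLE.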
