Journal of Intelligent & Robotic Systems: Theory & Application

Distributed Learning for Planning Under Uncertainty Problems with Heterogeneous Teams: Scaling Up the Multiagent Planning with Distributed Learning and Approximate Representations



Abstract

This paper considers the problem of multiagent sequential decision making under uncertainty and incomplete knowledge of the state transition model. A distributed learning framework, where each agent learns an individual model and shares the results with the team, is proposed. The challenges associated with this approach include choosing a model representation for each agent and sharing these representations effectively under limited communication. A decentralized extension of the model learning scheme based on incremental Feature Dependency Discovery (Dec-iFDD) is presented to address the distributed learning problem. The representation selection problem is solved by leveraging iFDD's property of adjusting the model complexity based on the observed data. The model sharing problem is addressed by having each agent rank the features of its representation based on the model reduction error and broadcast the most relevant features to its teammates. The algorithm is tested on multiagent block building and persistent search and track missions. The results show that the proposed distributed learning scheme is particularly useful in heterogeneous learning settings, where each agent learns significantly different models. We show through large-scale planning-under-uncertainty simulations and flight experiments with state-dependent actuator and fuel-burn-rate uncertainty that our planning approach can outperform planners that do not account for heterogeneity between agents.
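The rank-and-broadcast step described in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not the paper's Dec-iFDD implementation: the AgentModelLearner class, the top_k communication budget, the tuple-valued feature names, and the merge rule are assumptions introduced here, and the ranking score is interpreted as how much each feature reduces an agent's local model error (one reading of the "model reduction error" ranking in the abstract).

import heapq

class AgentModelLearner:
    """Sketch of one agent's side of the feature-sharing step (illustrative only)."""

    def __init__(self, agent_id, top_k=5):
        self.agent_id = agent_id
        self.top_k = top_k                # per-round communication budget (assumed)
        self.error_reduction = {}         # feature -> accumulated model-error reduction
        self.features = set()             # features currently in this agent's representation

    def record_error_reduction(self, feature, reduction):
        """Credit `feature` with an observed drop in local model prediction error."""
        self.error_reduction[feature] = self.error_reduction.get(feature, 0.0) + reduction
        self.features.add(feature)

    def rank_and_broadcast(self):
        """Return the top-k features by accumulated error reduction, to be broadcast."""
        return heapq.nlargest(self.top_k, self.error_reduction.items(), key=lambda kv: kv[1])

    def receive(self, shared_features):
        """Merge features broadcast by a teammate into the local representation."""
        for feature, reduction in shared_features:
            if feature not in self.features:
                self.features.add(feature)
                self.error_reduction[feature] = reduction

# One communication round in a heterogeneous team (feature names are placeholders):
team = [AgentModelLearner("uav_1"), AgentModelLearner("ugv_1", top_k=3)]
team[0].record_error_reduction(("fuel_level", "actuator_health"), 0.42)
team[0].record_error_reduction(("fuel_level",), 0.10)
team[1].record_error_reduction(("target_visible",), 0.25)

for sender in team:
    message = sender.rank_and_broadcast()
    for receiver in team:
        if receiver is not sender:
            receiver.receive(message)

Because only the few highest-ranked features are broadcast, each agent keeps its own locally adapted representation while the team converges on the features that matter most under the limited-communication constraint highlighted in the abstract.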
