
Meta-level control in multi-agent systems.



Abstract

Sophisticated agents operating in open environments must make complex real-time control decisions about the scheduling and coordination of domain activities. These decisions are made in the context of limited resources and uncertainty about the outcomes of activities. Many efficient architectures and algorithms that support these computation-intensive activities have been developed and studied. However, none of these architectures explicitly reasons about the consumption of time and other resources by these activities, which may degrade an agent's performance. The problem of sequencing execution and computational activities without consuming too many resources in the process is the meta-level control problem for a resource-bounded rational agent.

The focus of this research is to provide effective allocation of computation and improved performance of individual agents in a cooperative multi-agent system. This is done by approximating the ideal solution to the meta-level decisions made by these agents using reinforcement learning methods. A meta-level agent control architecture for meta-level reasoning with bounded computational overhead is described. This architecture supports decisions on when to accept, delay, or reject a new task; when it is appropriate to negotiate with another agent; whether to renegotiate when a negotiation task fails; how much effort to put into scheduling when reasoning about a new task; and whether to reschedule when actual execution performance deviates from expected performance. The major contributions of this work are: a resource-bounded framework that supports a representation of the agent state, which is used by hand-generated heuristic strategies to make meta-level control decisions; and a reinforcement-learning-based approach that automatically learns efficient meta-level control policies.
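The kind of learned meta-level control policy the abstract describes (e.g., deciding whether to accept, delay, or reject a new task) can be sketched with tabular reinforcement learning. The state features, reward function, and single-step update below are hypothetical simplifications for illustration, not the dissertation's actual formulation:

```python
import random
from collections import defaultdict

# Hypothetical meta-level actions and coarse agent-state features;
# the dissertation's state representation is far richer.
ACTIONS = ["accept", "delay", "reject"]
STATES = [(load, deadline)
          for load in ("low", "high")
          for deadline in ("tight", "loose")]

def reward(state, action):
    """Toy reward model (an assumption): accepting pays off when the agent
    is lightly loaded or deadlines are loose; delaying helps under load."""
    load, deadline = state
    if action == "accept":
        return 1.0 if (load == "low" or deadline == "loose") else -1.0
    if action == "delay":
        return 0.2 if load == "high" else -0.2
    return 0.0  # reject: neutral

def learn_policy(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < epsilon:          # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])
        # Single-step (bandit-style) update; a full treatment would also
        # bootstrap on the value of the successor state.
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    # Greedy policy: best learned action per state.
    return {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}

policy = learn_policy()
```

Under this toy reward model, the learned policy accepts tasks when load is low and delays them when the agent is overloaded with tight deadlines, mirroring the accept/delay/reject decision the architecture supports.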

Record details

  • Author

    Raja, Anita.

  • Affiliation

    University of Massachusetts Amherst.

  • Degree-granting institution: University of Massachusetts Amherst.
  • Subjects: Computer Science; Artificial Intelligence.
  • Degree: Ph.D.
  • Year: 2003
  • Pages: 175 p.
  • Format: PDF
  • Language: eng
  • Classification (CLC): automation and computer technology; artificial intelligence theory
  • Date added: 2022-08-17 11:45:31
