Iterated Belief Change in Multi-Agent Systems

Abstract

We give a model for iterated belief change in multi-agent systems. The formal tool we use for this is a combination of modal and dynamic logic. Two core notions in our model are the expansion of the knowledge and beliefs of an agent, and the processing of new information. An expansion is defined as the change in the knowledge and beliefs of an agent when it decides to believe an incoming formula while holding on to its current propositional beliefs. To prevent our agents from forming inconsistent beliefs, they do not expand with every piece of information they receive. Instead, our agents remember their original beliefs and every piece of information they receive. After every receipt of information, they decide which (consistent) subset of the received information should be incorporated into their original beliefs. This procedure is called the processing of new information. We show that our model of belief update behaves in an intuitive way and that it is not vulnerable to the criticisms raised against comparable models.
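
The processing step described above can be illustrated with a small sketch. The paper works in a combination of modal and dynamic logic; the sketch below is only a toy propositional approximation, and the formula encoding, the function names (variables, holds, consistent, process), and the greedy in-order selection policy are assumptions made for illustration, not the paper's formalism. It keeps the original beliefs and the full sequence of received formulas intact, and after each receipt recomputes a consistent subset of the received information with which to expand the original beliefs.

```python
from itertools import product

# Toy propositional representation (an assumption for this sketch, not the
# paper's modal/dynamic-logic formalism). Formulas are nested tuples:
#   ("var", "p"), ("not", f), ("and", f, g), ("or", f, g)

def variables(f):
    """Collect the propositional variables occurring in a formula."""
    if f[0] == "var":
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def holds(f, valuation):
    """Evaluate a formula under a truth assignment (dict: variable -> bool)."""
    op = f[0]
    if op == "var":
        return valuation[f[1]]
    if op == "not":
        return not holds(f[1], valuation)
    if op == "and":
        return holds(f[1], valuation) and holds(f[2], valuation)
    if op == "or":
        return holds(f[1], valuation) or holds(f[2], valuation)
    raise ValueError(f"unknown connective: {op!r}")

def consistent(formulas):
    """Brute-force satisfiability check: try every truth assignment."""
    formulas = list(formulas)
    vars_ = sorted(set().union(set(), *(variables(f) for f in formulas)))
    for bits in product([False, True], repeat=len(vars_)):
        valuation = dict(zip(vars_, bits))
        if all(holds(f, valuation) for f in formulas):
            return True
    return False

def process(original_beliefs, received):
    """Processing of new information (greedy, in order of receipt):
    the original beliefs and the whole sequence of received formulas are
    remembered unchanged; a subset of the received formulas that is jointly
    consistent with the original beliefs is selected, and the result is the
    expansion of the original beliefs with that subset."""
    accepted = []
    for f in received:
        if consistent(list(original_beliefs) + accepted + [f]):
            accepted.append(f)
    return list(original_beliefs) + accepted

# Example: the agent originally believes p, then receives q, not-p, and p-or-q.
p, q = ("var", "p"), ("var", "q")
print(process([p], [q, ("not", p), ("or", p, q)]))
# -> [('var', 'p'), ('var', 'q'), ('or', ('var', 'p'), ('var', 'q'))]
```

The greedy, receipt-order selection used here is just one possible policy; the point of the sketch is only that the original beliefs and the received information are kept separate, so the consistent subset can be recomputed from scratch after every new receipt.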