International Conference on Computational Collective Intelligence

A Computational Trust Model with Trustworthiness against Liars in Multiagent Systems



Abstract

Trust is considered a crucial factor for agents in open distributed multiagent systems when deciding which partners to interact with. Most current trust models combine experience trust and reference trust, and use a propagation mechanism that enables agents to share their final trust with their partners. These models rest on the assumption that all agents are reliable when they share their trust with others. However, this assumption no longer holds in multiagent-system applications where some agents may be unwilling to share their information, or may share wrong data by lying to their partners. In this paper, we introduce a computational model of trust that combines experience trust and reference trust. Furthermore, our model offers a mechanism that enables agents to judge the trustworthiness of referees when gathering reference trust from partners who may be liars.
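The combination described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual formulas: the function names, the fixed experience weight `w_exp`, and the deviation-based credibility update are all assumptions chosen to show the general idea of discounting referees who report trust values that disagree with the truster's own later experience.

```python
# Illustrative sketch of an experience/reference trust combination with
# referee-credibility weighting. All names and formulas are assumptions,
# not the model from the paper.

def combined_trust(experience, reports, credibility, w_exp=0.6):
    """Combine the agent's own experience trust with referee reports,
    weighting each report by the referee's estimated credibility.
    All trust and credibility values are assumed to lie in [0, 1]."""
    if not reports:
        return experience
    total_cred = sum(credibility[r] for r in reports)
    if total_cred == 0:
        # No credible referee: fall back on own experience.
        return experience
    reference = sum(credibility[r] * v for r, v in reports.items()) / total_cred
    return w_exp * experience + (1 - w_exp) * reference

def update_credibility(credibility, referee, reported, observed, lr=0.3):
    """Lower a referee's credibility when their report deviates from the
    truster's subsequently observed experience (a possible liar), and
    raise it when the report turns out to be accurate."""
    error = abs(reported - observed)  # deviation in [0, 1]
    credibility[referee] = (1 - lr) * credibility[referee] + lr * (1 - error)
    return credibility[referee]
```

For example, a referee who reports 0.2 for a partner that the agent later experiences as 0.9 has their credibility reduced, so their future reports contribute less to the reference-trust term.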
