ACM Transactions on Autonomous and Adaptive Systems

Effective Usage of Computational Trust Models in Rational Environments



Abstract

Computational reputation-based trust models using statistical learning have been intensively studied for distributed systems where peers behave maliciously. However, practical applications of such models in environments with both malicious and rational behaviors are still poorly understood. In this article, we study the relation between the accuracy measures of such models and their ability to enforce cooperation among participants and discourage selfish behaviors. We provide theoretical results that show the conditions under which cooperation emerges when using computational trust models with a given accuracy, and how cooperation can still be sustained while reducing the cost and accuracy of those models. Specifically, we propose a peer selection protocol that uses a computational trust model as a dishonesty detector to filter out unfair ratings. We prove that such a model, with a reasonable misclassification error bound in identifying malicious ratings, can effectively build trust and cooperation in the system, given the rationality of participants. These results reveal two interesting observations. First, the key to the success of a reputation system in a rational environment is not a sophisticated trust-learning mechanism, but an effective identity-management scheme that prevents whitewashing behaviors. Second, given an appropriate identity-management mechanism, a reputation-based trust model with a moderate accuracy bound can be used to effectively enforce cooperation in systems with both rational and malicious participants. As a result, cooperation may still emerge in heterogeneous environments where peers use different algorithms to detect misbehavior of potential partners. We verify and extend these theoretical results to a variety of settings involving honest, malicious, and strategic players through extensive simulation. These results enable a much more targeted, cost-effective, and realistic design for decentralized trust management systems, such as those needed for peer-to-peer, electronic commerce, or community systems.
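
The abstract only outlines the peer selection protocol. As a rough, illustrative sketch (in Python), the snippet below shows one way a selection step could use a trust model as a binary dishonesty detector to filter ratings before ranking candidate peers; the names (Rating, select_peer, is_unfair, make_detector) and the mean-based aggregation are assumptions made here for illustration, not the protocol from the paper.

    # Illustrative sketch only, not the authors' protocol. A trust model is
    # assumed to be exposed as a binary classifier over ratings ("unfair" vs.
    # "fair") with some misclassification error; the surviving ratings are
    # averaged and the best-scoring candidate peer is selected.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Optional

    @dataclass
    class Rating:
        rater_id: str   # peer that issued the rating
        value: float    # reported quality of a past interaction, in [0, 1]

    def select_peer(
        candidates: Dict[str, List[Rating]],
        is_unfair: Callable[[Rating], bool],  # dishonesty detector (trust model)
        min_ratings: int = 3,
    ) -> Optional[str]:
        """Pick the candidate whose filtered ratings have the highest mean."""
        best_peer, best_score = None, float("-inf")
        for peer_id, ratings in candidates.items():
            kept = [r.value for r in ratings if not is_unfair(r)]
            if len(kept) < min_ratings:
                continue  # too little trustworthy evidence about this peer
            score = sum(kept) / len(kept)
            if score > best_score:
                best_peer, best_score = peer_id, score
        return best_peer

    # Hypothetical detector: flag ratings that deviate strongly from the mean.
    def make_detector(all_ratings: List[Rating], threshold: float = 0.4):
        mean = sum(r.value for r in all_ratings) / max(len(all_ratings), 1)
        return lambda r: abs(r.value - mean) > threshold

In this sketch, the detector's false-positive and false-negative rates play the role of the misclassification error bound discussed above: the looser the bound, the more unfair ratings leak into (or fair ratings drop out of) the aggregated reputation score.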


