The vulnerability of open multiagent systems to malicious agents poses a major challenge: detecting and preventing undesirable behaviours. Trust and reputation mechanisms have been proposed to address this challenge. In this paper, we explore the cognitive science background underlying the notions of trust, reputation and confidence in order to provide a computational trust mechanism applied to negotiations within artificial societies. To this end, we formalize these notions and apply a particular argumentation technology to them, allowing agents to initiate, evaluate, reason about, decide on, and propagate reputation values.