Online reputation mechanisms have recently attracted much attention in many areas. Although they have been widely adopted and generally work well, their reliability remains a major concern. Because of properties of online environments such as openness and anonymity, rating errors, noise, and unfair lies must be taken into account. These disturbances (attacks) have a particularly strong effect on multi-agent systems containing malicious agents that tell lies or engage in strategic manipulation, and current online reputation mechanisms are not sufficiently robust against them. To address this problem, we propose a stochastic approximation-based online reputation mechanism. Our mechanism assigns a single global trustworthiness value to each agent and dynamically updates its estimates of these values from the agents' mutual ratings. Experimental results show that our mechanism identifies good and bad agents effectively under the above disturbances and also adaptively traces changes in agents' true trustworthiness values.
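To make the idea concrete, the following is a minimal sketch of a stochastic-approximation trust update of the kind described above: each new rating nudges the ratee's estimated trustworthiness toward the observed value with a decreasing step size. The weighting of ratings by the rater's own estimated trustworthiness is an illustrative assumption, not the paper's exact rule.

```python
import random

def update_trust(estimates, rater, ratee, rating, step):
    """One stochastic-approximation step: move the ratee's estimated
    trustworthiness toward the observed rating, weighted by the rater's
    own estimated trustworthiness (illustrative weighting)."""
    weight = estimates[rater]
    estimates[ratee] += step * weight * (rating - estimates[ratee])
    return estimates

# Hypothetical setup: agent 0's true trustworthiness is 0.9;
# agent 1 rates agent 0 repeatedly with Gaussian rating noise.
random.seed(0)
est = [0.5, 0.5]  # initial estimates for agents 0 and 1
for k in range(1, 201):
    noisy_rating = min(1.0, max(0.0, random.gauss(0.9, 0.1)))
    update_trust(est, rater=1, ratee=0, rating=noisy_rating, step=1.0 / k)

# With a decreasing (Robbins-Monro style) step, est[0] settles near 0.9
# despite the noise in individual ratings.
print(round(est[0], 2))
```

With a decreasing step size the estimate averages out rating noise; a constant step size would instead keep the estimate responsive, which is one way such a mechanism can trace changes in an agent's true trustworthiness over time.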