Artificial societies (distributed systems of autonomous agents) are becoming increasingly important in open distributed environments, especially in e-commerce. Agents require trust and reputation concepts to identify communities of agents with which to interact reliably. We have noted that, in real environments, adversaries tend to focus on exploiting the trust and reputation model itself. These vulnerabilities reinforce the need for a new evaluation criterion for trust and reputation models, termed exploitation resistance, which reflects the ability of a trust model to remain unaffected by agents that try to manipulate it. To examine whether a given trust and reputation model is exploitation-resistant, researchers require a flexible, easy-to-use, and general framework. This framework should provide the facility to specify heterogeneous agents with different trust models and behaviors. This paper introduces the Distributed Analysis of Reputation and Trust (DART) framework. The environment of DART is decentralized and game-theoretic. Not only is the proposed environment model compatible with the characteristics of open distributed systems, but it also allows agents to have different types of interactions. Besides direct, witness, and introduction interactions, agents in our environment model can engage in a reporting interaction, which represents a decentralized reporting mechanism in distributed environments. The proposed environment model provides various metrics at both the micro and macro levels for analyzing implemented trust and reputation models. Using DART, we have empirically demonstrated the vulnerability of well-known trust models to both individual and group attacks.

Additional Metadata
Keywords multiagent systems, reputation, trust
Persistent URL dx.doi.org/10.1111/j.1467-8640.2012.00453.x
Journal Computational Intelligence
Citation
Salehi-Abari, A., & White, T. (2012). DART: A distributed analysis of reputation and trust framework. Computational Intelligence, 28(4), 642–682. doi:10.1111/j.1467-8640.2012.00453.x