Autonomous agents require trust and reputation concepts in order to identify communities of agents with which to interact reliably, in a manner analogous to human societies. Agent societies are invariably heterogeneous, with multiple decision-making policies and actions governing agent behaviour. By introducing naive agents, this paper shows empirically that while learning agents can identify malicious agents through direct interaction, naive agents compromise utility through their inability to discern malicious agents. The impact of the proportion of naive agents on society is also analyzed. The paper demonstrates that witness interaction trust is needed to detect naive agents, in addition to direct interaction trust for detecting malicious agents. Finally, by proposing a set of policies, the paper demonstrates how learning agents can isolate themselves from both naive and malicious agents.
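To make the two trust concepts concrete, the following is a minimal, hypothetical sketch (not the paper's actual model): direct interaction trust is estimated from an agent's own interaction outcomes via a beta-style expectation, witness interaction trust is an average of third-party ratings, and the two are blended with an illustrative weight `w`. All names and the weighting scheme are assumptions for illustration only.

```python
class TrustModel:
    """Hypothetical learning agent combining direct and witness trust."""

    def __init__(self):
        self.successes = {}  # partner -> count of cooperative outcomes
        self.failures = {}   # partner -> count of defections

    def record_interaction(self, partner, cooperative):
        # Direct interaction trust is built from the agent's own experience.
        if cooperative:
            self.successes[partner] = self.successes.get(partner, 0) + 1
        else:
            self.failures[partner] = self.failures.get(partner, 0) + 1

    def direct_trust(self, partner):
        # Beta-style expectation; an unknown partner defaults to 0.5.
        s = self.successes.get(partner, 0)
        f = self.failures.get(partner, 0)
        return (s + 1) / (s + f + 2)

    def witness_trust(self, reports):
        # Witness interaction trust: average of third-party ratings in [0, 1].
        return sum(reports) / len(reports) if reports else 0.5

    def overall_trust(self, partner, reports, w=0.7):
        # Weighted blend; the weight w is an illustrative assumption.
        return w * self.direct_trust(partner) + (1 - w) * self.witness_trust(reports)
```

Under this sketch, a learning agent discounts a malicious partner after repeated defections, whereas a naive agent (one that never updates its counts) would keep trusting it; witness reports give other agents a way to detect such naive behaviour indirectly.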

Additional Metadata
Series: Lecture Notes in Computer Science
Salehi-Abari, A. (Amirali), & White, A. (2010). The impact of naive agents in heterogeneous trust-aware societies. In Lecture Notes in Computer Science. doi:10.1007/978-3-642-13553-8_10