Learning Trust Over Directed Graphs in Multiagent Systems

Orhan Eren Akgün, Arif Kerem Dayı, Stephanie Gil, Angelia Nedić

Research output: Contribution to journal › Conference article › peer-review

Abstract

We address the problem of learning the legitimacy of other agents in a multiagent network when an unknown subset of the agents is malicious. We specifically derive results for the case of directed graphs where stochastic side information, or observations of trust, is available. We refer to this as “learning trust,” since agents must identify which neighbors in the network are reliable, and we derive a learning protocol to achieve this. We also provide analytical results showing that, under this protocol, i) agents learn the legitimacy of all other agents almost surely, and ii) the opinions of the agents converge in mean to the true legitimacy of all other agents in the network. Lastly, we provide numerical studies showing that our convergence results hold across various network topologies and numbers of malicious agents.
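
The record does not reproduce the protocol itself. As a rough illustration of the kind of trust accumulation the abstract alludes to, the sketch below classifies in-neighbors from stochastic trust observations by summing centered observations over time. The function name, the assumption that observations lie in [0, 1] with mean above 1/2 for legitimate neighbors and below 1/2 for malicious ones, and the threshold are illustrative assumptions, not the authors' definitions; the paper's protocol additionally propagates opinions about non-neighbor agents over the directed graph, which this sketch does not cover.

```python
import numpy as np

def classify_neighbors(alpha_obs, threshold=0.5):
    """Classify in-neighbors as legitimate or malicious from stochastic
    trust observations (illustrative sketch, not the paper's protocol).

    alpha_obs : array of shape (T, n_neighbors); alpha_obs[t, j] is the
        trust observation about neighbor j at time t, assumed here to lie
        in [0, 1] with mean > 1/2 for legitimate neighbors and < 1/2 for
        malicious ones.
    Returns a boolean array: True where neighbor j is deemed legitimate.
    """
    # Accumulate centered observations; by the strong law of large numbers
    # the running sum drifts upward for legitimate neighbors and downward
    # for malicious ones, so the sign is eventually correct almost surely.
    beta = np.sum(alpha_obs - threshold, axis=0)
    return beta > 0

# Illustrative usage with synthetic observations (not data from the paper):
rng = np.random.default_rng(0)
T = 500
legit_obs = rng.uniform(0.3, 1.0, size=(T, 3))   # mean 0.65 > 1/2
malic_obs = rng.uniform(0.0, 0.7, size=(T, 2))   # mean 0.35 < 1/2
obs = np.hstack([legit_obs, malic_obs])
print(classify_neighbors(obs))  # expected: [ True  True  True False False]
```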

Original language: English (US)
Pages (from-to): 142-154
Number of pages: 13
Journal: Proceedings of Machine Learning Research
Volume: 211
State: Published - 2023
Externally published: Yes
Event: 5th Annual Conference on Learning for Dynamics and Control, L4DC 2023 - Philadelphia, United States
Duration: Jun 15, 2023 - Jun 16, 2023

Keywords

  • Multiagent systems
  • adversarial learning
  • directed graphs
  • networked systems

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
