Any functional human-AI-robot team consists of multiple stakeholders and one or more artificial agents (e.g., AI agents and embodied robotic agents). Each stakeholder's trust in an artificial agent matters: it affects not only their performance on tasks with human teammates and artificial agents but also their trust in other stakeholders and how those stakeholders trust the artificial agents. Interpersonal trust and human-agent trust thus mutually influence each other. Traditional measures of trust in human-robot interaction have focused on a single end user's trust in a single artificial agent rather than on team-level trust, which involves all relevant stakeholders and the interactions among them. These measures have also been largely static, unable to capture distributed trust dynamics at the team level. To fill this gap, this chapter proposes a distributed dynamic team trust (D2T2) framework and potential measures for its application in human-AI-robot teaming.

Original language: English (US)
Title of host publication: Trust in Human-Robot Interaction
Number of pages: 19
ISBN (Electronic): 9780128194720
State: Published - Jan 1 2020


Keywords

  • D2T2
  • Distributed
  • Dynamic
  • Human-AI-robot teaming
  • Human-robot interaction
  • Team trust

ASJC Scopus subject areas

  • General Psychology


Research topics: 'Distributed dynamic team trust in human, artificial intelligence, and robot teaming'.
