TY - GEN
T1 - Trust-aware planning
T2 - 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023
AU - Zahedi, Zahra
AU - Verma, Mudit
AU - Sreedharan, Sarath
AU - Kambhampati, Subbarao
N1 - Funding Information:
In this paper, we presented a computational model that the robot can use to capture the evolution of human trust in iterated human-robot interaction settings, which sheds new light on longitudinal human-robot interaction. This framework allows the robot to incorporate human trust into its planning process, thereby allowing it to be a more effective teammate. Our framework would thus allow an agent to model, foster, and maintain the trust of its fellow teammates, causing the agent to engage in trust-engendering behavior earlier in the teaming life cycle and to leverage trust built over these earlier interactions to perform more efficient but potentially inexplicable behavior later on. As our experimental studies show, such an approach could result in a much more efficient system than one that always engages in explicable behavior. We see this framework as the first step in building such a longitudinal trust reasoning framework. A natural next step would be to consider POMDP versions of the framework, where the human's trust level is a hidden variable that can only be indirectly assessed. Another line of work would be to study how the means of achieving a specific explicability score could have an impact on the evolution of trust. For example, as far as the human is concerned, do they care whether the perfectly explicable plan was one they were expecting to start with, or one that became perfectly explicable after an explanation? ACKNOWLEDGMENT This research is supported in part by ONR grants N00014-16-1-2892, N00014-18-1-2442, N00014-18-1-2840, N00014-9-1-2119, AFOSR grant FA9550-18-1-0067, DARPA SAIL-ON grant W911NF-19-2-0006, and a JP Morgan AI Faculty Research grant. We would like to thank Karthik Valmeekam for his help in editing the video.
Publisher Copyright:
© 2023 Association for Computing Machinery.
PY - 2023/3/13
Y1 - 2023/3/13
N2 - Trust between team members is an essential requirement for any successful cooperation. Thus, engendering and maintaining the fellow team members' trust becomes a central responsibility for any member trying not only to successfully participate in the task but to ensure the team achieves its goals. The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus may have different expectations regarding the current course of action, thereby forcing the robot to focus on costly explicable behavior. We propose a computational model for capturing and modulating trust in such iterated human-robot interaction settings, where the human adopts a supervisory role. In our model, the robot integrates the human's trust and their expectations about the robot into its planning process to build and maintain trust over the interaction horizon. By establishing the required level of trust, the robot can focus on maximizing the team goal by eschewing explicit explanatory or explicable behavior, without worrying about the human supervisor monitoring and intervening to stop behaviors they may not necessarily understand. We model this reasoning about trust levels as a meta-reasoning process over individual planning tasks. We additionally validate our model through a human subject experiment.
AB - Trust between team members is an essential requirement for any successful cooperation. Thus, engendering and maintaining the fellow team members' trust becomes a central responsibility for any member trying not only to successfully participate in the task but to ensure the team achieves its goals. The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand and thus may have different expectations regarding the current course of action, thereby forcing the robot to focus on costly explicable behavior. We propose a computational model for capturing and modulating trust in such iterated human-robot interaction settings, where the human adopts a supervisory role. In our model, the robot integrates the human's trust and their expectations about the robot into its planning process to build and maintain trust over the interaction horizon. By establishing the required level of trust, the robot can focus on maximizing the team goal by eschewing explicit explanatory or explicable behavior, without worrying about the human supervisor monitoring and intervening to stop behaviors they may not necessarily understand. We model this reasoning about trust levels as a meta-reasoning process over individual planning tasks. We additionally validate our model through a human subject experiment.
KW - explainable AI
KW - explicable planning
KW - trust-aware decision-making
KW - trustable AI
UR - http://www.scopus.com/inward/record.url?scp=85150366465&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150366465&partnerID=8YFLogxK
U2 - 10.1145/3568162.3578628
DO - 10.1145/3568162.3578628
M3 - Conference contribution
AN - SCOPUS:85150366465
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 281
EP - 289
BT - HRI 2023 - Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
PB - IEEE Computer Society
Y2 - 13 March 2023 through 16 March 2023
ER -