TY - GEN
T1 - Towards Understanding User Preferences for Explanation Types in Model Reconciliation
AU - Zahedi, Zahra
AU - Olmo, Alberto
AU - Chakraborti, Tathagata
AU - Sreedharan, Sarath
AU - Kambhampati, Subbarao
N1 - Funding Information:
This research is supported in part by the AFOSR grant FA9550-18-1-0067, the ONR grants N00014161-2892, N00014-13-1-0176, N00014-13-1-0519, N00014-15-1-2027, and the NASA grant NNX17AD06G. * Authors marked with an asterisk contributed equally.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/3/22
Y1 - 2019/3/22
N2 - Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation - i.e. a process by which the planning agent brings the explainee's (possibly faulty) model of a planning problem closer to its own understanding of the ground truth until both agree that its plan is the best possible. The content of explanations can thus range over misunderstandings about the agent's beliefs (state), desires (goals), and capabilities (action model). Though existing literature has considered these different kinds of model differences to be equivalent, literature on explanations in the social sciences suggests that explanations with similar logical properties may be perceived differently by humans. In this brief report, we explore the extent to which humans attribute importance to different kinds of model differences that have traditionally been considered equivalent in the model reconciliation setting. Our results suggest that people prefer explanations related to the effects of actions.
AB - Recent work has formalized the explanation process in the context of automated planning as one of model reconciliation - i.e. a process by which the planning agent brings the explainee's (possibly faulty) model of a planning problem closer to its own understanding of the ground truth until both agree that its plan is the best possible. The content of explanations can thus range over misunderstandings about the agent's beliefs (state), desires (goals), and capabilities (action model). Though existing literature has considered these different kinds of model differences to be equivalent, literature on explanations in the social sciences suggests that explanations with similar logical properties may be perceived differently by humans. In this brief report, we explore the extent to which humans attribute importance to different kinds of model differences that have traditionally been considered equivalent in the model reconciliation setting. Our results suggest that people prefer explanations related to the effects of actions.
UR - http://www.scopus.com/inward/record.url?scp=85064015308&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85064015308&partnerID=8YFLogxK
U2 - 10.1109/HRI.2019.8673097
DO - 10.1109/HRI.2019.8673097
M3 - Conference contribution
AN - SCOPUS:85064015308
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 648
EP - 649
BT - HRI 2019 - 14th ACM/IEEE International Conference on Human-Robot Interaction
PB - IEEE Computer Society
T2 - 14th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2019
Y2 - 11 March 2019 through 14 March 2019
ER -