TY - GEN
T1 - Balancing explicability and explanation in human-aware planning
AU - Sreedharan, Sarath
AU - Chakraborti, Tathagata
AU - Kambhampati, Subbarao
N1 - Funding Information:
This research is supported in part by the ONR grants N00014161-2892, N00014-13-1-0176, N00014-13-1-0519, N00014-15-1-2027, and the NASA grant NNX17AD06G. Chakraborti is also supported in part by the IBM Ph.D. Fellowship 2017.
Publisher Copyright:
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2017
Y1 - 2017
N2 - Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable (Zhang et al. 2017) to a human observer, as well as the ability to provide explanations (Chakraborti et al. 2017b) when such plans cannot be generated. This has led to the notion of "multi-model planning", which aims to incorporate the effects of human expectations into the deliberative process of a planner - either in the form of explicable task planning or of explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a tradeoff during the plan generation process itself by means of a model-space search method, MEGA. This in effect provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware" by bringing existing principles of planning together under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search-and-reconnaissance task with an external supervisor.
AB - Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable (Zhang et al. 2017) to a human observer, as well as the ability to provide explanations (Chakraborti et al. 2017b) when such plans cannot be generated. This has led to the notion of "multi-model planning", which aims to incorporate the effects of human expectations into the deliberative process of a planner - either in the form of explicable task planning or of explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a tradeoff during the plan generation process itself by means of a model-space search method, MEGA. This in effect provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware" by bringing existing principles of planning together under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search-and-reconnaissance task with an external supervisor.
UR - http://www.scopus.com/inward/record.url?scp=85044453800&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85044453800&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85044453800
T3 - AAAI Fall Symposium - Technical Report
SP - 61
EP - 68
BT - FS-17-01
PB - AI Access Foundation
T2 - 2017 AAAI Fall Symposium
Y2 - 9 November 2017 through 11 November 2017
ER -