Designing agents capable of explaining complex sequential decisions remains a significant open problem in human-AI interaction. Recently, there has been considerable interest in developing approaches for generating such explanations across various decision-making paradigms. One such approach is the idea of explanation as model reconciliation. This framework hypothesizes that a common source of user confusion is a mismatch between the user's understanding of the agent's task model and the model the agent actually uses to generate its decisions. While the framework is general, most works explicitly built on this explanatory philosophy have focused on classical planning settings where a model of the user's knowledge is available in declarative form. Our goal in this paper is to adapt the model reconciliation approach to a more general planning paradigm and to discuss how such methods can be used when user models are no longer explicitly available. Specifically, we present a simple and easy-to-learn labeling model that can help an explainer decide what information could achieve model reconciliation between the user and the agent within the context of planning with MDPs.
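The abstract's core premise is that an explanation corresponds to the portion of the agent's model that the user's model gets wrong. A minimal illustrative sketch of that intuition, in the declarative setting the abstract contrasts against (the paper's own contribution is a *learned* labeling model for MDPs, which this does not implement; feature names here are hypothetical):

```python
def reconciliation_explanation(agent_model, user_model):
    """Return the model differences an explainer could communicate.

    Both models are dicts mapping hypothetical feature names
    (e.g. precondition/effect labels) to their values in each model.
    The explanation is the set of features where the user's belief
    disagrees with (or omits) the agent's model.
    """
    explanation = {}
    for feature, agent_value in agent_model.items():
        if user_model.get(feature) != agent_value:
            explanation[feature] = agent_value
    return explanation

# Toy example: the user wrongly believes the door is unlocked and
# is unaware of the move cost.
agent = {"door_locked": True, "has_key": True, "move_cost": 1}
user = {"door_locked": False, "has_key": True}
print(reconciliation_explanation(agent, user))
# {'door_locked': True, 'move_cost': 1}
```

Communicating exactly these differing features is what "reconciles" the user's model with the agent's; the paper's contribution is deciding what to communicate when `user_model` is not explicitly available.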

Original language: English (US)
Title of host publication: Proceedings of the 28th International Joint Conference on Artificial Intelligence, IJCAI 2019
Editors: Sarit Kraus
Publisher: International Joint Conferences on Artificial Intelligence
Number of pages: 8
ISBN (Electronic): 9780999241141
State: Published - 2019
Event: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019 - Macao, China
Duration: Aug 10 2019 - Aug 16 2019

Publication series

Name: IJCAI International Joint Conference on Artificial Intelligence
ISSN (Print): 1045-0823

Conference: 28th International Joint Conference on Artificial Intelligence, IJCAI 2019

ASJC Scopus subject areas

  • Artificial Intelligence


Fingerprint: research topics of 'Model-free model reconciliation'.
