Explanations as model reconciliation - A multi-agent perspective

Sarath Sreedharan, Tathagata Chakraborti, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations


In this paper, we demonstrate how a planner (or a robot as an embodiment of it) can explain its decisions to multiple agents in the loop, considering not only the model it used to arrive at those decisions but also the (often misaligned) models of the same task that the other agents may hold. To do this, we build on our previous work on multi-model explanation generation (Chakraborti et al. 2017b) and extend it to settings where the robot is uncertain about the explainee's model and/or there are multiple explainees with different models to explain to. We illustrate these concepts in a demonstration on a robot engaged in a typical search and reconnaissance scenario with a human teammate and an external human supervisor.
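The core idea of explanation as model reconciliation is that the robot explains its plan by communicating a minimal set of differences between its own model and the human's, under which its decision becomes justified. The toy sketch below illustrates that idea over simple action-cost models; it is not the authors' algorithm, and all names (`plan_cost`, `reconcile`) and the corridor/detour scenario are illustrative assumptions.

```python
from itertools import combinations

def plan_cost(plan, model):
    """Total cost of a plan under a given action-cost model."""
    return sum(model[action] for action in plan)

def reconcile(robot_model, human_model, robot_plan, human_plan):
    """Search for a smallest set of model corrections to give the human
    such that, in the updated human model, the robot's plan costs no
    more than the plan the human expected (the robot's choice is then
    justified). Returns the corrections as {action: corrected_cost}."""
    diffs = [a for a in robot_model if robot_model[a] != human_model[a]]
    for k in range(len(diffs) + 1):            # smallest subsets first
        for subset in combinations(diffs, k):
            updated = dict(human_model)
            for a in subset:
                updated[a] = robot_model[a]    # move this fact toward robot's model
            if plan_cost(robot_plan, updated) <= plan_cost(human_plan, updated):
                return {a: robot_model[a] for a in subset}
    return None

# Hypothetical scenario: the robot knows the corridor is costly
# (e.g., blocked by debris); the human teammate does not.
robot = {"corridor": 10, "detour": 3}
human = {"corridor": 1, "detour": 3}
explanation = reconcile(robot, human, robot_plan=["detour"], human_plan=["corridor"])
# → {"corridor": 10}: telling the human the corridor's true cost
# suffices to make the robot's detour look reasonable.
```

With multiple explainees holding different models, one natural extension (in the spirit of the paper's multi-agent setting) is to compute such an explanation per explainee, or a single explanation against the union of the model differences.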

Original language: English (US)
Title of host publication: FS-17-01
Subtitle of host publication: Artificial Intelligence for Human-Robot Interaction; FS-17-02: Cognitive Assistance in Government and Public Sector Applications; FS-17-03: Deep Models and Artificial Intelligence for Military Applications: Potentials, Theories, Practices, Tools and Risks; FS-17-04: Human-Agent Groups: Studies, Algorithms and Challenges; FS-17-05: A Standard Model of the Mind
Publisher: AI Access Foundation
Number of pages: 7
ISBN (Electronic): 9781577357940
State: Published - 2017
Event: 2017 AAAI Fall Symposium - Arlington, United States
Duration: Nov 9 2017 – Nov 11 2017

Publication series

Name: AAAI Fall Symposium - Technical Report
Volume: FS-17-01 - FS-17-05


Other: 2017 AAAI Fall Symposium
Country/Territory: United States

ASJC Scopus subject areas

  • Engineering (all)

