Abstract

The previous chapter sketched out some of the central ideas behind generating an explanation as model reconciliation, but it did so while making some strong assumptions. In particular, the setting assumed that the human's model of the robot is known exactly upfront. In this chapter, we will look at how we can relax this assumption and perform model reconciliation in scenarios where the robot has progressively less information about the human's mental model. We will start by investigating how the robot can perform model reconciliation with incomplete model information. Next, we will look at cases where the robot doesn't have a human mental model at all but can collect feedback from users; we will see how such feedback can be used to learn simple labeling models that suffice to generate explanations. We will also look at generating model reconciliation explanations by assuming the human has a simpler mental model, specifically one that is an abstraction of the original model, and see how this method can help reduce the inferential burden on the human. Throughout this chapter, we will focus on generating minimally complete explanations (MCEs), though most of the methods discussed here could also be extended to minimally monotonic explanations (MMEs) and to contrastive versions of these explanations.
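For readers encountering these acronyms for the first time, the following is a minimal sketch of how model reconciliation explanations are typically formalized in this line of work; the notation ($\mathcal{M}^R$ for the robot's model, $\mathcal{M}^R_h$ for the human's mental model of the robot, $\varepsilon$ for the explanation) is illustrative rather than quoted from the chapter.

% Sketch of the model reconciliation setting (notation illustrative).
% The robot computes a plan \pi that is optimal in its own model M^R,
% while the human evaluates \pi with their mental model M^R_h.
Let $\pi$ be the robot's plan, optimal in the robot's model $\mathcal{M}^R$.
An explanation is a set of model updates $\varepsilon$ such that $\pi$ is
also optimal in the updated human model
$\widehat{\mathcal{M}}^R_h = \mathcal{M}^R_h + \varepsilon$:
\[
  C\big(\pi, \widehat{\mathcal{M}}^R_h\big) \;=\; C^{*}\big(\widehat{\mathcal{M}}^R_h\big),
\]
% C(\pi, .) is the cost of \pi in a given model; C*(.) is the optimal
% plan cost in that model. An MCE is a minimum-size such \varepsilon; an
% MME remains valid under any further updates toward M^R.

On this reading, relaxing the assumption that $\mathcal{M}^R_h$ is known exactly amounts to requiring the optimality condition above to hold not for a single human model but for every model in a set of candidate human models, which is the direction the chapter pursues.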

Original language: English (US)
Title of host publication: Synthesis Lectures on Artificial Intelligence and Machine Learning
Publisher: Springer Nature
Pages: 81-94
Number of pages: 14
State: Published - 2022

Publication series

Name: Synthesis Lectures on Artificial Intelligence and Machine Learning
ISSN (Print): 1939-4608
ISSN (Electronic): 1939-4616

ASJC Scopus subject areas

  • Artificial Intelligence
