TY - CHAP
T1 - Explainable Human-AI Interaction
T2 - A Planning Perspective
AU - Sreedharan, Sarath
AU - Kulkarni, Anagha
AU - Kambhampati, Subbarao
N1 - Funding Information:
Much of the research reported here has been supported over the years by generous support from multiple federal funding agencies. We would like to particularly thank the Office of Naval Research, and the program managers Behzad Kamgar-Parsi, Tom McKenna, Jeff Morrison, Marc Steinberg, and John Tangney, for their sustained support. Thanks are also due to Benjamin Knott, formerly of AFOSR, Laura Steckman of AFOSR, and Purush Iyer of Army Research Labs for their support and encouragement.
Publisher Copyright:
Copyright © 2022 by Morgan & Claypool.
PY - 2022/1/24
Y1 - 2022/1/24
N2 - From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans, swinging between their augmentation and replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human-AI interaction is that the AI system's behavior be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world and take into account the mental model of the human in the loop. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former guides the agent to anticipate and manage the needs, desires, and attention of the humans in the loop, and the latter allows it to act in ways that are interpretable to humans (by conforming to their mental models of it) and to be ready to provide customized explanations when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models either to conform to human expectations or to change those expectations through explanatory communication. While the focus of the book is on cooperative scenarios, it also covers how the same mental models can be used for obfuscation and deception. The book also describes several real-world application systems for collaborative decision-making that are based on the framework and techniques developed here. Although primarily driven by the authors' own research in these areas, every chapter provides ample connections to relevant research from the wider literature. The technical topics covered in the book are self-contained and are accessible to readers with a basic background in AI.
Table of Contents: Preface / Acknowledgments / Introduction / Measures of Interpretability / Explicable Behavior Generation / Legible Behavior / Explanation as Model Reconciliation / Acquiring Mental Models for Explanations / Balancing Communication and Behavior / Explaining in the Presence of Vocabulary Mismatch / Obfuscatory Behavior and Deceptive Communication / Applications / Conclusion / Bibliography / Authors' Biographies / Index
AB - From its inception, artificial intelligence (AI) has had a rather ambivalent relationship with humans, swinging between their augmentation and replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human-AI interaction is that the AI system's behavior be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world and take into account the mental model of the human in the loop. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former guides the agent to anticipate and manage the needs, desires, and attention of the humans in the loop, and the latter allows it to act in ways that are interpretable to humans (by conforming to their mental models of it) and to be ready to provide customized explanations when needed. The authors draw from several years of research in their lab to discuss how an AI agent can use these mental models either to conform to human expectations or to change those expectations through explanatory communication. While the focus of the book is on cooperative scenarios, it also covers how the same mental models can be used for obfuscation and deception. The book also describes several real-world application systems for collaborative decision-making that are based on the framework and techniques developed here. Although primarily driven by the authors' own research in these areas, every chapter provides ample connections to relevant research from the wider literature. The technical topics covered in the book are self-contained and are accessible to readers with a basic background in AI.
Table of Contents: Preface / Acknowledgments / Introduction / Measures of Interpretability / Explicable Behavior Generation / Legible Behavior / Explanation as Model Reconciliation / Acquiring Mental Models for Explanations / Balancing Communication and Behavior / Explaining in the Presence of Vocabulary Mismatch / Obfuscatory Behavior and Deceptive Communication / Applications / Conclusion / Bibliography / Authors' Biographies / Index
KW - explainability
KW - human-AI interaction
KW - human-aware AI systems
KW - human-aware planning
KW - interpretability
KW - obfuscation
UR - http://www.scopus.com/inward/record.url?scp=85124097456&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124097456&partnerID=8YFLogxK
U2 - 10.2200/S01152ED1V01Y202111AIM050
DO - 10.2200/S01152ED1V01Y202111AIM050
M3 - Chapter
AN - SCOPUS:85124097456
T3 - Synthesis Lectures on Artificial Intelligence and Machine Learning
BT - Synthesis Lectures on Artificial Intelligence and Machine Learning
PB - Morgan and Claypool Publishers
ER -