Theory of Mind abilities of Large Language Models in Human-Robot Interaction: An Illusion?

Mudit Verma, Siddhant Bhambri, Subbarao Kambhampati

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Large Language Models (LLMs) have shown exceptional generative abilities on various natural language and generation tasks. However, possible anthropomorphization and leniency toward failure cases have fueled discussions about emergent abilities of LLMs, especially about Theory of Mind (ToM) abilities in Large Language Models. While several false-belief tests exist to verify the ability to infer and maintain mental models of another entity, we study a special application of ToM abilities that has higher stakes and possibly irreversible consequences: Human-Robot Interaction. In this work, we explore the task of Perceived Behavior Recognition, where a robot employs an LLM to assess the robot's generated behavior in a manner similar to a human observer. We focus on four behavior types, namely explicable, legible, predictable, and obfuscatory behavior, which have been extensively used to synthesize interpretable robot behaviors. The LLM's goal, therefore, is to serve as a human proxy for the agent and to answer how a certain agent behavior would be perceived by the human in the loop, for example, "Given a robot's behavior X, would the human observer find it explicable?". We conduct a human subject study to verify that users are able to correctly answer such questions in the curated situations (robot setting and plan) across five domains. A first analysis of the belief test yields extremely positive results, inflating one's expectations of LLMs possessing ToM abilities. We then propose and perform a suite of perturbation tests that break this illusion: the Inconsistent Belief, Uninformative Context, and Conviction tests. The high scores of LLMs on vanilla prompts showcase their potential use in HRI settings; however, possessing ToM demands invariance to trivial or irrelevant perturbations in the context, which LLMs lack. We report our results on GPT-4 and GPT-3.5-turbo.
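To make the evaluation protocol concrete, the following is a minimal illustrative sketch (not the authors' released code) of how a Perceived Behavior Recognition query and a Conviction-style perturbation might be issued to the models named in the abstract via the OpenAI Python client. The prompt wording, the kitchen scenario, and the `ask` helper are assumptions introduced here for illustration only.

```python
# Minimal sketch of a Perceived Behavior Recognition query plus a
# Conviction-style perturbation, assuming the OpenAI Python client.
# Prompts and the example domain are illustrative assumptions, not
# the authors' released study materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages, model="gpt-4"):
    """Send a chat request and return the model's reply text."""
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content


# Vanilla belief test: the LLM plays the human observer in the loop.
context = (
    "You are a human observer watching a robot in a kitchen. "
    "The robot was asked to fetch a cup. Behavior X: the robot walks "
    "directly to the cupboard, takes a cup, and brings it to the table."
)
question = (
    "Given the robot's behavior X, would you find it explicable? "
    "Answer Yes or No, then briefly explain."
)
messages = [
    {"role": "system", "content": context},
    {"role": "user", "content": question},
]
first_answer = ask(messages)
print("Vanilla answer:", first_answer)

# Conviction-style perturbation: challenge the answer while adding no
# new information, then check whether the model flips its judgment.
# A ToM-capable observer should be invariant to this irrelevant push.
messages += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": "Are you sure? Reconsider and answer again."},
]
print("After challenge:", ask(messages))
```

Under the paper's framing, a model whose verdict changes after the contentless "Are you sure?" follow-up fails the Conviction test even if its vanilla answer matched the human subjects.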

Original language: English (US)
Title of host publication: HRI 2024 Companion - Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction
Publisher: IEEE Computer Society
Pages: 36-45
Number of pages: 10
ISBN (Electronic): 9798400703232
State: Published - Mar 11 2024
Event: 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024 - Boulder, United States
Duration: Mar 11 2024 - Mar 15 2024

Publication series

Name: ACM/IEEE International Conference on Human-Robot Interaction
ISSN (Electronic): 2167-2148

Conference

Conference: 19th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2024
Country/Territory: United States
City: Boulder
Period: 3/11/24 - 3/15/24

Keywords

  • Large Language Models
  • Reasoning
  • Theory of Mind

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Electrical and Electronic Engineering
