Compositional Instruction Following with Language Models and Reinforcement Learning

Vanya Cohen, Geraud Nangue Tasse, Nakul Gopalan, Steven James, Matthew Gombolay, Raymond Mooney, Benjamin Rosman

Research output: Contribution to journal › Article › peer-review

Abstract

Combining reinforcement learning with language grounding is challenging, as the agent must explore the environment while simultaneously learning multiple language-conditioned tasks. To address this, we introduce a novel method: the compositionally-enabled reinforcement learning language agent (CERLLA). Our method reduces the sample complexity of tasks specified with language by leveraging compositional policy representations and a semantic parser trained using reinforcement learning and in-context learning. We evaluate our approach in an environment requiring function approximation and demonstrate compositional generalization to novel tasks. Our method significantly outperforms the previous best non-compositional baseline in terms of sample complexity on 162 tasks designed to test compositional generalization. Our model attains a higher success rate and learns in fewer steps than the non-compositional baseline. It reaches a success rate equal to an oracle policy’s upper-bound performance of 92%. With the same number of environment steps, the baseline reaches a success rate of only 80%.
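The abstract outlines a two-stage pipeline: a semantic parser maps the instruction to a compositional task specification, and compositional policy representations for previously learned base tasks are combined to solve it. The sketch below is only an illustration of that idea under stated assumptions; the min/max composition operators, the hard-coded parser output, and every function name are assumptions introduced for illustration, not taken from the paper.

```python
# A minimal sketch of the pipeline the abstract describes, NOT the authors'
# released CERLLA code. The Boolean composition operators (min/max over
# Q-values) and all names below are illustrative assumptions.

from typing import Callable, Dict, Tuple

# A Q-function over (state, action) pairs, assumed to have been learned with RL
# for each base task (e.g. "reach the blue box").
QFunction = Callable[[object, object], float]

def compose_and(q1: QFunction, q2: QFunction) -> QFunction:
    # Task conjunction approximated by the pointwise min of the two Q-functions.
    return lambda s, a: min(q1(s, a), q2(s, a))

def compose_or(q1: QFunction, q2: QFunction) -> QFunction:
    # Task disjunction approximated by the pointwise max of the two Q-functions.
    return lambda s, a: max(q1(s, a), q2(s, a))

def parse_instruction(instruction: str) -> Tuple[str, str, str]:
    # Stand-in for the semantic parser (trained with RL and in-context learning
    # in the paper); here it simply returns a hard-coded compositional spec.
    return ("or", "blue_box", "purple_ball")

def policy_for(instruction: str, base_q: Dict[str, QFunction]) -> QFunction:
    # Map a language instruction to a composed Q-function over learned base
    # tasks; acting greedily w.r.t. this Q-function gives the composed policy.
    op, left, right = parse_instruction(instruction)
    combine = compose_and if op == "and" else compose_or
    return combine(base_q[left], base_q[right])
```

Because the composed Q-function is built directly from the base-task value functions, a new instruction of this form needs no additional environment interaction, which is one way the sample-complexity reduction described in the abstract could arise.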

Original language: English (US)
Journal: Transactions on Machine Learning Research
Volume: 2024
State: Published - 2024
Externally published: Yes

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition

