Mining Causal Signal Temporal Logic Formulas for Efficient Reinforcement Learning with Temporally Extended Tasks

Research output: Contribution to journal › Conference article › peer-review

Abstract

Reinforcement Learning (RL) has emerged as a powerful paradigm for solving sequential decision-making problems. However, traditional RL methods often lack an understanding of the causal mechanisms that govern the dynamics of an environment. This limitation results in inefficiencies, challenges in generalization, and reduced interpretability. To address these challenges, we propose Signal Temporal Logic Causal Inference RL (STL-CIRL), a framework that mines interpretable causal specifications through Signal Temporal Logic and reinforcement learning, using counterexample-guided refinement to jointly optimize policies and causal formulas. We compare the performance of agents leveraging explicit causal knowledge with those relying solely on traditional RL approaches. Our results demonstrate the potential of causal reasoning to enhance the efficiency and robustness of RL for complex tasks.
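For readers unfamiliar with Signal Temporal Logic, the sketch below illustrates its standard quantitative (robustness) semantics for the "always" and "eventually" operators over a discrete-time trace. This is not the paper's STL-CIRL algorithm, only the textbook STL robustness definitions that any STL-based mining approach builds on; the trace and predicate are made-up illustrative values.

```python
# Standard quantitative STL semantics (illustrative sketch, not the
# paper's STL-CIRL method). Robustness is a signed margin: positive
# means the formula is satisfied, negative means it is violated.

def robustness_always(signal, margin):
    """rho(G phi) = minimum of the predicate's margin over the trace."""
    return min(margin(x) for x in signal)

def robustness_eventually(signal, margin):
    """rho(F phi) = maximum of the predicate's margin over the trace."""
    return max(margin(x) for x in signal)

# Hypothetical scalar trace; the atomic predicate "x > 1" has margin x - 1.
trace = [0.5, 1.2, 2.0, 1.8]
margin = lambda x: x - 1.0

print(robustness_always(trace, margin))      # -0.5: "always x > 1" is violated at t=0
print(robustness_eventually(trace, margin))  # 1.0: "eventually x > 1" holds, best at t=2
```

Because robustness is a real-valued margin rather than a boolean, it can serve directly as a reward or refinement signal, which is what makes STL formulas a natural fit for RL pipelines.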

Original language: English (US)
Journal: Proceedings of Machine Learning Research
Volume: 288
State: Published - 2025
Externally published: Yes
Event: 2025 International Conference on Neuro-Symbolic Systems - Philadelphia, United States
Duration: May 28, 2025 to May 30, 2025

Keywords

  • Causal Inference
  • Reinforcement Learning
  • Signal Temporal Logic

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
