Abstract
Reinforcement Learning (RL) has emerged as a powerful paradigm for solving sequential decision-making problems. However, traditional RL methods often lack an understanding of the causal mechanisms that govern the dynamics of an environment. This limitation results in inefficiencies, challenges in generalization, and reduced interpretability. To address these challenges, we propose Signal Temporal Logic Causal Inference RL (STL-CIRL), a framework that mines interpretable causal specifications through Signal Temporal Logic and reinforcement learning, using counterexample-guided refinement to jointly optimize policies and causal formulas. We compare the performance of agents leveraging explicit causal knowledge with those relying solely on traditional RL approaches. Our results demonstrate the potential of causal reasoning to enhance the efficiency and robustness of RL for complex tasks.
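The specifications mined by STL-CIRL are Signal Temporal Logic formulas evaluated over trajectories. As a rough illustration of the standard quantitative (robustness) semantics of STL that such frameworks build on, the following sketch computes robustness for "always" and "eventually" operators over a discrete-time signal; the signal values, thresholds, and function names here are illustrative and not taken from the paper.

```python
# Sketch of standard STL robustness semantics (illustrative, not the
# paper's implementation). Positive robustness means the formula holds.

def rho_always(signal, a, b, pred, c):
    """Robustness of G_[a,b](pred(x) > c): worst case over the window."""
    return min(pred(signal[t]) - c for t in range(a, b + 1))

def rho_eventually(signal, a, b, pred, c):
    """Robustness of F_[a,b](pred(x) > c): best case over the window."""
    return max(pred(signal[t]) - c for t in range(a, b + 1))

# Hypothetical trajectory of some scalar state feature over 5 steps.
signal = [2, 5, 9, 14, 11]

# "Feature always exceeds 1 on [0,4]" -> robustness 1 (satisfied).
print(rho_always(signal, 0, 4, lambda x: x, 1))       # 1
# "Feature eventually exceeds 10 on [0,4]" -> robustness 4 (satisfied).
print(rho_eventually(signal, 0, 4, lambda x: x, 10))  # 4
```

A causal specification miner can search over such formula templates (operator, window, threshold) and use robustness values of counterexample trajectories to refine candidate formulas.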
| Original language | English (US) |
|---|---|
| Journal | Proceedings of Machine Learning Research |
| Volume | 288 |
| State | Published - 2025 |
| Externally published | Yes |
| Event | 2025 International Conference on Neuro-Symbolic Systems - Philadelphia, United States. Duration: May 28 2025 → May 30 2025 |
Keywords
- Causal Inference
- Reinforcement Learning
- Signal Temporal Logic
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence