Advice-Guided Reinforcement Learning in a non-Markovian Environment

Daniel Neider, Jean Raphael Gaglione, Ivan Gavran, Ufuk Topcu, Bo Wu, Zhe Xu

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

8 Scopus citations


We study a class of reinforcement learning tasks in which the agent receives its reward sparsely, for complex, temporally extended behaviors. For such tasks, the problem is how to efficiently augment the state space so as to make the reward function Markovian. While some existing solutions assume that the reward function is explicitly provided to the learning algorithm (e.g., in the form of a reward machine), others learn the reward function from interactions with the environment, assuming no prior knowledge from the user. In this paper, we generalize both approaches and enable the user to give advice to the agent, representing the user’s best knowledge about the reward function, which may be fragmented, partial, or even incorrect. We formalize advice as a set of DFAs and present a reinforcement learning algorithm that takes advantage of such advice, with an optimal convergence guarantee. The experiments show that well-chosen advice can reduce the number of training steps needed for convergence to an optimal policy, and can decrease the computation time to learn the reward function by up to two orders of magnitude.
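To make the notion of advice-as-DFAs concrete, the sketch below encodes one piece of advice as a deterministic finite automaton over atomic-proposition labels. This is an illustrative reconstruction, not the paper's actual implementation: the class name `DFA`, the method `accepts`, and the example task ("reach the key before the door") are all assumptions introduced here for exposition.

```python
# Hypothetical sketch: one piece of advice encoded as a DFA over labels
# emitted by the environment. Names and the example task are illustrative,
# not taken from the paper.

class DFA:
    def __init__(self, states, alphabet, transitions, initial, accepting):
        self.states = states
        self.alphabet = alphabet
        self.transitions = transitions  # dict: (state, symbol) -> next state
        self.initial = initial
        self.accepting = accepting

    def accepts(self, word):
        """Run the DFA on a label sequence; True iff it ends in an accepting state."""
        state = self.initial
        for symbol in word:
            state = self.transitions[(state, symbol)]
        return state in self.accepting


# Advice "reach the key (k) before the door (d)"; "_" means neither holds.
advice = DFA(
    states={"start", "got_key", "done", "fail"},
    alphabet={"k", "d", "_"},
    transitions={
        ("start", "k"): "got_key", ("start", "d"): "fail",
        ("start", "_"): "start",
        ("got_key", "d"): "done", ("got_key", "k"): "got_key",
        ("got_key", "_"): "got_key",
        ("done", "k"): "done", ("done", "d"): "done", ("done", "_"): "done",
        ("fail", "k"): "fail", ("fail", "d"): "fail", ("fail", "_"): "fail",
    },
    initial="start",
    accepting={"done"},
)

print(advice.accepts(["_", "k", "_", "d"]))  # True: key reached before door
print(advice.accepts(["d", "k"]))            # False: door reached first
```

A set of such DFAs, one per fragment of advice, can then guide the learner: running each DFA alongside the agent's trace provides candidate state-space augmentations even when individual pieces of advice are partial or wrong.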

Original language: English (US)
Title of host publication: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Publisher: Association for the Advancement of Artificial Intelligence
Number of pages: 8
ISBN (Electronic): 9781713835974
State: Published - 2021
Event: 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online
Duration: Feb 2, 2021 to Feb 9, 2021

Publication series

Name: 35th AAAI Conference on Artificial Intelligence, AAAI 2021


Conference: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
City: Virtual, Online

ASJC Scopus subject areas

  • Artificial Intelligence


