TY - GEN
T1 - Active Task-Inference-Guided Deep Inverse Reinforcement Learning
AU - Memarian, Farzan
AU - Xu, Zhe
AU - Wu, Bo
AU - Wen, Min
AU - Topcu, Ufuk
N1 - Funding Information:
This research was supported in part by the grants ARL W911NF2020132, ONR N00014-20-1-2115, and ARL ACC-APG-RTP W911NF. Farzan Memarian, Zhe Xu, and Bo Wu are with the Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX 78712. Ufuk Topcu is with the Department of Aerospace Engineering and Engineering Mechanics and the Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, Austin, TX 78712. E-mails: farzan.memarian@utexas.edu, zhexu@utexas.edu, bwu3@utexas.edu, utopcu@utexas.edu. Min Wen is with Google LLC, minwen@google.com.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/12/14
Y1 - 2020/12/14
AB - We consider the problem of reward learning for temporally extended tasks. For reward learning, inverse reinforcement learning (IRL) is a widely used paradigm. Given a Markov decision process (MDP) and a set of demonstrations for a task, IRL learns a reward function that assigns a real-valued reward to each state of the MDP. However, for temporally extended tasks, the underlying reward function may not be expressible as a function of individual states of the MDP. Instead, the history of visited states may need to be considered to determine the reward at the current state. To address this issue, we propose an iterative algorithm to learn a reward function for temporally extended tasks. At each iteration, the algorithm alternates between two modules, a task inference module that infers the underlying task structure and a reward learning module that uses the inferred task structure to learn a reward function. The task inference module produces a series of queries, where each query is a sequence of subgoals. The demonstrator provides a binary response to each query by attempting to execute it in the environment and observing the environment's feedback. After the queries are answered, the task inference module returns an automaton encoding its current hypothesis of the task structure. The reward learning module augments the state space of the MDP with the states of the automaton. The module then proceeds to learn a reward function over the augmented state space using a novel deep maximum entropy IRL algorithm. This iterative process continues until it learns a reward function with satisfactory performance. The experiments show that the proposed algorithm significantly outperforms several IRL baselines on temporally extended tasks.
UR - http://www.scopus.com/inward/record.url?scp=85099882394&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099882394&partnerID=8YFLogxK
U2 - 10.1109/CDC42340.2020.9304190
DO - 10.1109/CDC42340.2020.9304190
M3 - Conference contribution
AN - SCOPUS:85099882394
T3 - Proceedings of the IEEE Conference on Decision and Control
SP - 1932
EP - 1938
BT - 2020 59th IEEE Conference on Decision and Control, CDC 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 59th IEEE Conference on Decision and Control, CDC 2020
Y2 - 14 December 2020 through 18 December 2020
ER -