TY - GEN
T1 - Guided search for task and motion plans using learned heuristics
AU - Chitnis, Rohan
AU - Hadfield-Menell, Dylan
AU - Gupta, Abhishek
AU - Srivastava, Siddharth
AU - Groshev, Edward
AU - Lin, Christopher
AU - Abbeel, Pieter
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/6/8
Y1 - 2016/6/8
N2 - Tasks in mobile manipulation planning often require thousands of individual motions to complete. Such tasks require reasoning about complex goals as well as the feasibility of movements in configuration space. In discrete representations, planning complexity is exponential in the length of the plan. In mobile manipulation, parameters for an action often draw from a continuous space, so we must also cope with an infinite branching factor. Task and motion planning (TAMP) methods integrate logical search over high-level actions with geometric reasoning to address this challenge. We present an algorithm that searches the space of possible task and motion plans and uses statistical machine learning to guide the search process. Our contributions are as follows: 1) we present a complete algorithm for TAMP; 2) we present a randomized local search algorithm for plan refinement that is easily formulated as a Markov decision process (MDP); 3) we apply reinforcement learning (RL) to learn a policy for this MDP; 4) we learn from expert demonstrations to efficiently search the space of high-level task plans, given options that address different (potential) infeasibilities; and 5) we run experiments to evaluate our system in a variety of simulated domains. We show significant improvements in performance over prior work.
UR - http://www.scopus.com/inward/record.url?scp=84977584727&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84977584727&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2016.7487165
DO - 10.1109/ICRA.2016.7487165
M3 - Conference contribution
AN - SCOPUS:84977584727
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 447
EP - 454
BT - 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
Y2 - 16 May 2016 through 21 May 2016
ER -