TY - GEN
T1 - Learning probabilistic hierarchical task networks to capture user preferences
AU - Li, Nan
AU - Kambhampati, Subbarao
AU - Yoon, Sungwook
PY - 2009
Y1 - 2009
N2 - While much work on learning in planning has focused on learning domain physics (i.e., action models) and search control knowledge, little attention has been paid to learning user preferences on desirable plans. Hierarchical task networks (HTN) are known to provide an effective way to encode user prescriptions about what constitutes a good plan. However, manual construction of these methods is complex and error prone. In this paper, we propose a novel approach to learning probabilistic hierarchical task networks that capture user preferences by examining user-produced plans given no prior information about the methods (in contrast, most prior work on learning within the HTN framework focused on learning "method preconditions" - i.e., domain physics - assuming that the structure of the methods is given as input). We will show that this problem has close parallels to the problem of probabilistic grammar induction, and describe how grammar induction methods can be adapted to learn task networks. We will empirically demonstrate the effectiveness of our approach by showing that the task networks we learn are able to generate plans with a distribution close to the distribution of the user-preferred plans.
AB - While much work on learning in planning has focused on learning domain physics (i.e., action models) and search control knowledge, little attention has been paid to learning user preferences on desirable plans. Hierarchical task networks (HTN) are known to provide an effective way to encode user prescriptions about what constitutes a good plan. However, manual construction of these methods is complex and error prone. In this paper, we propose a novel approach to learning probabilistic hierarchical task networks that capture user preferences by examining user-produced plans given no prior information about the methods (in contrast, most prior work on learning within the HTN framework focused on learning "method preconditions" - i.e., domain physics - assuming that the structure of the methods is given as input). We will show that this problem has close parallels to the problem of probabilistic grammar induction, and describe how grammar induction methods can be adapted to learn task networks. We will empirically demonstrate the effectiveness of our approach by showing that the task networks we learn are able to generate plans with a distribution close to the distribution of the user-preferred plans.
UR - http://www.scopus.com/inward/record.url?scp=78751698487&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=78751698487&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:78751698487
SN - 9781577354260
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 1754
EP - 1759
BT - IJCAI-09 - Proceedings of the 21st International Joint Conference on Artificial Intelligence
PB - International Joint Conferences on Artificial Intelligence
T2 - 21st International Joint Conference on Artificial Intelligence, IJCAI 2009
Y2 - 11 July 2009 through 16 July 2009
ER -