Abstract
Robots working with humans in real environments need to plan in a large state–action space given a natural language command. Such a problem poses multiple challenges: the size of the state–action space to plan over, the different modalities in which natural language can specify the goal condition, and the difficulty of learning a model of such an environment to plan with. In this thesis we look at using hierarchical methods to learn and plan in these large state–action spaces. Further, we look at using natural language to guide the construction and learning of hierarchies and reward functions.
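As a purely illustrative sketch of the idea, the following Python snippet shows one way such a pipeline could be organized: a natural language command is decomposed into subgoals, and planning happens over that small set of temporally extended options rather than over the full low-level state–action space. The domain, the `Option` structure, and the toy `parse_command` grounding are assumptions made for illustration; they are not the thesis's actual method.

```python
# Hypothetical sketch: plan over language-derived subgoals (options) instead
# of primitive actions. All names and the toy domain are illustrative only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Option:
    """A temporally extended action: a subgoal with its own low-level policy."""
    name: str
    is_done: Callable[[dict], bool]   # termination condition on the state
    policy: Callable[[dict], str]     # low-level action selection

def parse_command(command: str) -> List[str]:
    """Toy language grounding: split a command into ordered subgoal phrases."""
    # e.g. "go to the red door then pick up the key" -> two subgoals
    return [clause.strip() for clause in command.split("then")]

def plan_hierarchically(command: str, subgoal_library: dict) -> List[Option]:
    """Plan over a handful of subgoals rather than the full state-action space."""
    return [subgoal_library[name] for name in parse_command(command)
            if name in subgoal_library]

# Example usage with two hand-written options for a toy household domain.
subgoal_library = {
    "go to the red door": Option(
        "go to the red door",
        is_done=lambda s: s["at"] == "red_door",
        policy=lambda s: "step_toward_red_door"),
    "pick up the key": Option(
        "pick up the key",
        is_done=lambda s: s["holding"] == "key",
        policy=lambda s: "grasp_key"),
}

plan = plan_hierarchically("go to the red door then pick up the key", subgoal_library)
print([o.name for o in plan])  # ['go to the red door', 'pick up the key']
```

In this reading, language does double duty: it selects which options to sequence, and it could similarly shape the reward functions used to learn each option's low-level policy.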
Original language | English (US) |
---|---|
Pages | 532-536 |
Number of pages | 5 |
State | Published - 2018 |
Externally published | Yes |
Event | 2018 AAAI Spring Symposium, Palo Alto, United States (Mar 26 2018 → Mar 28 2018) |
Conference
Conference | 2018 AAAI Spring Symposium |
---|---|
Country/Territory | United States |
City | Palo Alto |
Period | 3/26/18 → 3/28/18 |
ASJC Scopus subject areas
- Artificial Intelligence