Abstract
In order to advance action generation and creation in robots beyond simple learned schemas, we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions that make up longer actions observed in video, in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a parsing module based on a probabilistic manipulation action grammar, which generates visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by "watching" unconstrained videos with high accuracy.
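The abstract describes a two-level architecture: CNN recognizers produce grasp-type and object hypotheses, and a probabilistic manipulation action grammar selects the most likely "visual sentence". The sketch below is only a rough illustration of that idea, not the authors' code: the labels, confidence scores, and the lookup-table "grammar" are all made-up stand-ins for the paper's CNN outputs and grammar-based parser.

```python
# Illustrative sketch (not the paper's implementation): combine hypothetical
# per-frame grasp and object beliefs with a toy probabilistic action grammar
# to pick the most likely (grasp, object, action) triple.

# Stand-ins for softmax scores from the two CNN recognition modules.
grasp_beliefs = {"power-cylindrical": 0.7, "precision-pinch": 0.3}
object_beliefs = {"knife": 0.6, "cucumber": 0.4}

# Toy "grammar": P(action | grasp type, object). In the paper this role is
# played by a probabilistic manipulation action grammar and a parser; here
# it is reduced to a lookup table purely for illustration.
ACTION_GRAMMAR = {
    ("power-cylindrical", "knife"): {"cut": 0.8, "stir": 0.2},
    ("precision-pinch", "cucumber"): {"hold": 0.9, "cut": 0.1},
}

def best_visual_sentence(grasps, objects):
    """Return the (grasp, object, action) triple with the highest joint
    score, assuming (naively) that the three beliefs are independent."""
    best, best_score = None, 0.0
    for g, pg in grasps.items():
        for o, po in objects.items():
            for a, pa in ACTION_GRAMMAR.get((g, o), {}).items():
                score = pg * po * pa
                if score > best_score:
                    best, best_score = (g, o, a), score
    return best, best_score

triple, score = best_visual_sentence(grasp_beliefs, object_beliefs)
print(f"visual sentence: Grasp={triple[0]} Object={triple[1]} "
      f"Action={triple[2]} (score {score:.3f})")
```

Run as-is, this prints the triple (power-cylindrical, knife, cut), i.e., the action hypothesis that best reconciles what the hand and object recognizers report with what the grammar considers plausible.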
Original language | English (US)
---|---
Title of host publication | Proceedings of the 29th AAAI Conference on Artificial Intelligence, AAAI 2015 and the 27th Innovative Applications of Artificial Intelligence Conference, IAAI 2015
Publisher | AI Access Foundation
Pages | 3686-3692
Number of pages | 7
Volume | 5
ISBN (Electronic) | 9781577357032
State | Published - Jun 1 2015
Externally published | Yes
Event | 29th AAAI Conference on Artificial Intelligence, AAAI 2015 and the 27th Innovative Applications of Artificial Intelligence Conference, IAAI 2015 - Austin, United States
Duration | Jan 25 2015 → Jan 30 2015
Other

Country/Territory | United States
---|---
City | Austin
Period | 1/25/15 → 1/30/15
ASJC Scopus subject areas
- Software
- Artificial Intelligence