TY - GEN
T1 - Automated gesture segmentation from dance sequences
AU - Kahol, Kanav
AU - Tripathi, Priyamvada
AU - Panchanathan, Sethuraman
PY - 2004/9/24
Y1 - 2004/9/24
AB - Complex human motion sequences (e.g., dance) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one choreographer to another. Dance sequences also exhibit a large vocabulary of gestures. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent human anatomy and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive, from empirical data, choreographer profiles that are used to predict how particular choreographers will segment gestures in other motion sequences. When the predictions were tested on a library of 45 3D motion capture sequences (containing 185 distinct gestures) created by 5 different choreographers, they were found to be 93.3% accurate.
UR - http://www.scopus.com/inward/record.url?scp=4544385653&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=4544385653&partnerID=8YFLogxK
DO - 10.1109/AFGR.2004.1301645
M3 - Conference contribution
AN - SCOPUS:4544385653
SN - 0769521223
SN - 9780769521220
T3 - Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition
SP - 883
EP - 888
BT - Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition, FGR 2004
T2 - Sixth IEEE International Conference on Automatic Face and Gesture Recognition, FGR 2004
Y2 - 17 May 2004 through 19 May 2004
ER -