Multi-modality gesture detection and recognition with un-supervision, randomization and discrimination

Wu, Di; Chen, Guang; Clarke, Daniel; Weikersdorfer, David; Giuliani, Manuel; Gaschler, Andre; Knoll, Alois

Authors

Di Wu

Guang Chen

Daniel Clarke

David Weikersdorfer

Manuel Giuliani Manuel.Giuliani@uwe.ac.uk
Professor in Embedded Cognitive AI for Robotics

Andre Gaschler

Alois Knoll



Contributors

Lourdes Agapito
Editor

Michael Bronstein
Editor

Carsten Rother
Editor

Abstract

© Springer International Publishing Switzerland 2015. We describe in this paper our gesture detection and recognition system for Track 3 (Gesture Recognition) of the 2014 ChaLearn Looking at People Challenge, organized in conjunction with the ECCV 2014 conference. The competition's task was to learn a vocabulary of 20 types of Italian gestures and detect them in video sequences. Our system adopts a multi-modality approach to both detecting and recognizing the gestures. The goal of our approach is to identify semantically meaningful content in a densely sampled spatio-temporal feature space for gesture recognition. To achieve this, we develop three concepts within the random forest framework: un-supervision, discrimination, and randomization. Un-supervision learns spatio-temporal features from two channels (grayscale and depth) of RGB-D video in an unsupervised way. Discrimination extracts the information in the densely sampled spatio-temporal space effectively. Randomization explores that space efficiently. On the test dataset, our approach achieves a mean Jaccard Index of 0.6489 and a mean average accuracy of 90.3%.
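The randomization and discrimination concepts can be sketched in miniature. The following is not the authors' implementation: it substitutes decision stumps for full trees and toy 2-D descriptors for the learned spatio-temporal features, and it omits the unsupervised feature-learning step entirely. Randomization appears as per-tree bootstrap resampling plus a random feature subset; discrimination as an exhaustive search for the most class-separating split within that subset.

```python
import random

def fit_stump(X, y, feats):
    """Discrimination: pick the (feature, threshold) split that best
    separates the classes within this tree's random feature subset."""
    best = None
    for f in feats:
        for t in sorted({x[f] for x in X}):
            left = [yi for x, yi in zip(X, y) if x[f] <= t]
            right = [yi for x, yi in zip(X, y) if x[f] > t]
            if not left or not right:
                continue
            ll = max(set(left), key=left.count)    # majority label, left side
            rl = max(set(right), key=right.count)  # majority label, right side
            err = sum(l != ll for l in left) + sum(r != rl for r in right)
            if best is None or err < best[0]:
                best = (err, f, t, ll, rl)
    if best is None:  # degenerate bootstrap (all points identical): constant stump
        maj = max(set(y), key=y.count)
        return (None, 0.0, maj, maj)
    return best[1:]

def train_forest(X, y, n_trees=25, n_feats=1, seed=0):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        # Randomization: each tree sees a bootstrap resample of the data
        # and a random subset of the (here tiny; in the paper, dense
        # spatio-temporal) feature space.
        idx = [rng.randrange(len(X)) for _ in X]
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        feats = rng.sample(range(len(X[0])), n_feats)
        forest.append(fit_stump(Xb, yb, feats))
    return forest

def predict(forest, x):
    # Majority vote over the ensemble of stumps.
    votes = [ll if f is None or x[f] <= t else rl for f, t, ll, rl in forest]
    return max(set(votes), key=votes.count)

# Toy descriptors for two gesture classes (stand-ins for learned features).
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = [0, 0, 1, 1]
forest = train_forest(X, y)
print([predict(forest, x) for x in X])
```

In the paper the same ingredients operate at far larger scale: the feature space is densely sampled from grayscale and depth video, and full decision trees replace the stumps used here for brevity.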

Journal Article Type Conference Paper
Publication Date Jan 1, 2015
Journal Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Print ISSN 0302-9743
Electronic ISSN 1611-3349
Publisher Springer Verlag
Peer Reviewed Peer Reviewed
Volume 8925
Pages 608-622
APA6 Citation Wu, D., Chen, G., Clarke, D., Weikersdorfer, D., Giuliani, M., Gaschler, A., & Knoll, A. (2015). Multi-modality gesture detection and recognition with un-supervision, randomization and discrimination. Lecture Notes in Computer Science, 8925, 608-622. https://doi.org/10.1007/978-3-319-16178-5_43
DOI https://doi.org/10.1007/978-3-319-16178-5_43
Keywords multi-modality gesture, unsupervised learning, random forest, discriminative training
Publisher URL http://dx.doi.org/10.1007/978-3-319-16178-5_43
Additional Information Title of Conference or Conference Proceedings: ChaLearn Looking at People Workshop, European Conference on Computer Vision (ECCV 2014)