Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition
1 May 2017
Chen Chen, Huiyan Hao, Roozbeh Jafari, Nasser Kehtarnavaz
Abstract
This paper presents an extension to our previously developed fusion framework [10], which combines a depth camera and an inertial sensor, to improve its view invariance for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is used to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. Experimental results on a multi-view human action dataset show that this weighted extension improves recognition performance by about 5% over the equally weighted fusion deployed in our previous framework.
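The pipeline the abstract describes, two collaborative representation classifiers whose class probabilities are combined with a scalar weight, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: the CRC-RLS-style ridge coding, the softmax conversion of residuals into probabilities, the weight value w_depth=0.6, and all names and synthetic data below are choices made here for demonstration.

import numpy as np


def crc_scores(X_train, y_train, x_test, lam=0.001):
    """Collaborative representation classifier (CRC-RLS-style sketch).

    Codes the test feature vector over all training samples with an
    l2-regularized least-squares fit, then scores each class by its
    class-specific reconstruction residual.
    """
    # Ridge coding: alpha = (X^T X + lam * I)^-1 X^T x
    n = X_train.shape[1]
    alpha = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n),
                            X_train.T @ x_test)

    classes = np.unique(y_train)
    residuals = np.empty(len(classes))
    for i, c in enumerate(classes):
        mask = y_train == c
        recon = X_train[:, mask] @ alpha[mask]
        # Residual normalized by coefficient energy, as in CRC-RLS
        residuals[i] = (np.linalg.norm(x_test - recon)
                        / (np.linalg.norm(alpha[mask]) + 1e-12))

    # Softmax over negative residuals: smaller residual -> higher probability
    scores = np.exp(-(residuals - residuals.min()))
    return classes, scores / scores.sum()


def weighted_fusion(p_depth, p_inertial, w_depth=0.6):
    """Fuse the two classifiers' class probabilities with a scalar weight
    (w_depth = 0.5 recovers the equally weighted fusion of the earlier framework)."""
    return w_depth * p_depth + (1.0 - w_depth) * p_inertial


# Toy usage with synthetic data: columns of X_* are training feature vectors.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 10)                # 3 actions, 10 samples each
X_depth = rng.standard_normal((50, 30)) + labels    # depth features (50-dim)
X_inert = rng.standard_normal((20, 30)) + labels    # inertial features (20-dim)
test_d, test_i = X_depth[:, 0], X_inert[:, 0]

classes, p_d = crc_scores(X_depth, labels, test_d)
_, p_i = crc_scores(X_inert, labels, test_i)
print("predicted action:", classes[np.argmax(weighted_fusion(p_d, p_i))])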
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chen Chen, Huiyan Hao, Roozbeh Jafari, and Nasser Kehtarnavaz "Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition", Proc. SPIE 10223, Real-Time Image and Video Processing 2017, 1022307 (1 May 2017); https://doi.org/10.1117/12.2261823
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Cameras
Sensors
Data fusion
Feature extraction
Image fusion
Statistical analysis
Error control coding