This paper proposes a new learning-based, model-based framework for tracking the human body. The
framework introduces a likely-model-set variable-structure multiple-model (LMS-VSMM) scheme to track articulated human
motion in monocular image sequences. Key joint points are selected as image features: they are detected
automatically, and undetected points are estimated with particle filters. Multiple motion models are learned from
the CMU motion capture database by ridge regression to guide tracking. During tracking, the motion models currently in
effect switch from one to another to match the present human motion mode. A motion model is activated
according to changes in the projection angles of the kinematic chains, and to the topological and compatibility relationships among
them; it is terminated according to its model probability. The likely-model-set scheme of VSMM is then used to estimate
the quaternion vectors of the joint rotations. Experiments on two videos demonstrate that the tracking framework is effective
with respect to both 3D pose and 2D projection.
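The ridge-regression step mentioned above can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the following assumes a simple linear motion model that predicts the next pose vector from the current one, with the pose vectors standing in for the quaternion joint rotations drawn from the CMU motion capture database; the function name and the toy data are hypothetical.

```python
import numpy as np

def learn_motion_model(X_curr, X_next, lam=1e-2):
    """Closed-form ridge regression: W = (X'X + lam*I)^{-1} X'Y.

    X_curr : (n_frames, d) current pose vectors
    X_next : (n_frames, d) next-frame pose vectors
    lam    : ridge regularization strength
    """
    d = X_curr.shape[1]
    A = X_curr.T @ X_curr + lam * np.eye(d)
    return np.linalg.solve(A, X_curr.T @ X_next)

# Toy data standing in for mocap pose pairs: 200 frames of an
# 8-dimensional pose following a noisy linear dynamic.
rng = np.random.default_rng(0)
W_true = rng.normal(scale=0.3, size=(8, 8))
X = rng.normal(size=(200, 8))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 8))

W_hat = learn_motion_model(X, Y)
print(np.allclose(W_hat, W_true, atol=0.05))
```

In the actual framework, one such model would be learned per motion mode (e.g. walking, running), and the VSMM machinery would switch among them at tracking time.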