KEYWORDS: Cameras, Video, Video surveillance, 3D modeling, Network security, Detection and tracking algorithms, Head, Gesture recognition, Computer security, Video processing
This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis serves as an online alarm system that detects abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments, in which the cameras detect the presence of multiple persons, their gestures, and their interactions in real time.
KEYWORDS: Cameras, Video, Video processing, Skin, Imaging systems, Sensors, Signal processing, Detection and tracking algorithms, Image processing, Very large scale integration
Smart rooms provide advanced interfaces for networked information systems. A smart room includes a variety of sensors that analyze the behavior of the persons in the room, allowing people to issue commands without direct contact with equipment. Video is an important input modality for smart rooms: video analysis can determine the presence of people in the room and support gesture analysis, facial analysis, and related tasks. This paper outlines the architecture of a real-time video analysis system for smart rooms. The system uses multiple cameras, each with its own video signal processor, and relies on algorithms that can run in real time to capture basic information about the persons in the room, as sketched below.
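The following is a minimal sketch of such a multi-camera arrangement, assuming one worker per camera and a simple fusion loop; the analyze() placeholder, the queue-based wiring, and all thresholds are illustrative assumptions, not the processing described in the paper.

import queue
import threading

# Minimal sketch: one worker per camera runs a lightweight, real-time
# analysis step and posts its result to a shared queue; a fusion loop
# combines the per-camera observations into a room-level state.

results = queue.Queue()

def analyze(frame):
    # Placeholder per-camera processing (e.g., a cheap motion/presence cue
    # on a NumPy image array); not the paper's actual algorithm.
    return {"person_present": frame.mean() > 10}

def camera_worker(cam_id, frame_source):
    # frame_source is assumed to yield frames from one camera in real time.
    for frame in frame_source:
        results.put((cam_id, analyze(frame)))

def fusion_loop(num_cameras):
    # Keep only the latest observation per camera and fuse them.
    latest = {}
    while True:
        cam_id, obs = results.get()
        latest[cam_id] = obs
        if len(latest) == num_cameras:
            present = any(o["person_present"] for o in latest.values())
            print("person in room:", present)

# Each camera_worker would run in its own thread, e.g.:
# threading.Thread(target=camera_worker, args=(0, camera_0_frames), daemon=True).start()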
This paper describes relational graph matching with model-based segmentation for human detection. The matching result is used both to decide whether a human is present in the image and to recognize the person's posture. We extend our previous work on rigid-object detection in still images and video frames by modeling parts with superellipses and by using multi-dimensional Bayes classification to identify the non-rigid body parts, under the assumption that the unary and binary (relational) features belonging to the corresponding parts are Gaussian distributed. The major contribution of the proposed method is to automatically create semantic segments by combining low-level edge- or region-based segments through model-based segmentation. The generality of the reference model's part attributes allows detection of humans with different postures, while the conditional rule generation decreases the rate of false alarms.
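As a rough illustration of the Gaussian Bayes classification step mentioned above, the sketch below assigns a segment's unary feature vector to the most probable body-part class. The part labels, feature layout, priors, and all numeric parameters are hypothetical placeholders, not values from the paper.

import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical Gaussian models over unary part features
# (e.g., superellipse aspect ratio, relative size, orientation).
# Means, covariances, and priors are illustrative only.
PART_MODELS = {
    "head":  {"mean": np.array([0.9, 0.05, 0.0]),
              "cov":  np.diag([0.02, 0.001, 0.1]),
              "prior": 0.2},
    "torso": {"mean": np.array([0.5, 0.30, 0.0]),
              "cov":  np.diag([0.05, 0.010, 0.2]),
              "prior": 0.3},
    "limb":  {"mean": np.array([0.2, 0.10, 1.0]),
              "cov":  np.diag([0.03, 0.005, 0.5]),
              "prior": 0.5},
}

def classify_segment(feature_vec):
    """Assign a segment to the most probable body-part class under the
    Gaussian assumption on its unary features (maximum log posterior)."""
    best_label, best_post = None, -np.inf
    for label, m in PART_MODELS.items():
        # log p(x | part) + log p(part): an unnormalized log posterior
        log_post = (multivariate_normal.logpdf(feature_vec, m["mean"], m["cov"])
                    + np.log(m["prior"]))
        if log_post > best_post:
            best_label, best_post = label, log_post
    return best_label, best_post

# Example: a segment whose features resemble the hypothetical "torso" model
print(classify_segment(np.array([0.48, 0.28, 0.1])))

Binary (relational) features between pairs of parts could be scored the same way, with the joint log posterior driving the graph-matching decision.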