Abnormal behavior detection in surveillance video is a pivotal component of the intelligent city. Most existing methods consider only how to detect anomalies and pay little attention to explaining why they occur. In this work, we investigate an orthogonal perspective based on the causes of these abnormal behaviors. We propose a multivariate fusion method that analyzes each target through three branches: object, action, and motion. The object branch focuses on appearance information, the motion branch on the distribution of motion features, and the action branch on the action category of the target. Because the branches focus on different information, they complement each other and jointly detect abnormal behavior. The final abnormal score is obtained by combining the abnormal scores of the three branches. In the action branch, we also propose an action recognition module that uses inter-frame information to handle multi-target, multi-action recognition in surveillance video, which has not previously been exploited in the anomaly detection field. The proposed method outperforms state-of-the-art methods and can also explain why a target is detected as an anomaly.
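A minimal sketch of the score-fusion step described above; the weighted-sum rule, the weights, and the threshold are illustrative assumptions, since the abstract only states that the three branch scores are combined:

```python
import numpy as np

def fuse_anomaly_scores(object_score, action_score, motion_score,
                        weights=(1.0, 1.0, 1.0), threshold=0.5):
    """Combine per-branch anomaly scores into one final score.

    The weighted average and the decision threshold are assumptions used
    for illustration, not the authors' exact formulation.
    """
    scores = np.array([object_score, action_score, motion_score])
    w = np.array(weights)
    final_score = float(np.dot(w, scores) / w.sum())
    return final_score, final_score > threshold

# Example: a target whose action branch flags an unusual action.
score, is_abnormal = fuse_anomaly_scores(0.2, 0.9, 0.4)
```

Keeping the branch scores separate until this final step is also what allows the method to report which branch (appearance, motion, or action) triggered the anomaly.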
Depth estimation has long been a central topic in computer vision, and it has gained new vitality with the rise of light field cameras. Nevertheless, occlusion remains a difficult problem that degrades the precision of the acquired depth map. Although previous works have proposed effective methods for this problem, they are still deficient. In this paper, we extend the previous single-occluder model to complex occlusion conditions, adopt an optical flow algorithm to obtain candidate occlusion points, combine multiple features to separate the angular patch, and employ a more reasonable data cost to compute the depth map. Because the proposed algorithm is better suited to light field data, experimental results show that it outperforms state-of-the-art algorithms on synthetic datasets and real-world images captured by a light field camera, especially for complex occlusion scenes.
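As a hedged sketch of the kind of occlusion-aware data cost discussed above: a common choice is to evaluate a candidate depth only on the angular samples judged to be un-occluded (here via a simple variance measure); the separation features and the exact cost in the paper may differ.

```python
import numpy as np

def occlusion_aware_data_cost(angular_patch, unoccluded_mask):
    """Variance-based data cost over the un-occluded angular samples.

    `angular_patch` holds the intensities a scene point projects to across
    the sub-aperture views at a candidate depth; `unoccluded_mask` marks the
    samples assigned to the same (un-occluded) surface. Both the masking
    strategy and the variance cost are assumptions for illustration.
    """
    samples = np.asarray(angular_patch)[np.asarray(unoccluded_mask, dtype=bool)]
    if samples.size == 0:
        return np.inf
    return float(np.var(samples))

# Low cost at the correct depth: the un-occluded samples are photo-consistent.
cost = occlusion_aware_data_cost([0.41, 0.40, 0.42, 0.95], [1, 1, 1, 0])
```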
Multi-object tracking is particularly challenging in scenarios with similar appearances and frequent occlusions among targets. In this paper, we present an online detection-based multi-object tracking method. In each frame, a kernelized correlation filter (KCF) is adopted to track isolated, un-occluded targets. To overcome the fixed scale of KCF, trackers are associated with detection responses. If a target is associated with a detection, the target size is updated to the average of the detection size and the previously estimated size. When occlusions are detected, the interactions among targets are formulated as an optimization problem, and a two-layer hierarchical Particle Swarm Optimization algorithm is explored for the optimal solution. The first layer is designed for the visible, superficial targets. The second layer is designed for the occluded targets underneath, which are guided by the first (visible) layer, and we propose incorporating an attractive force into the particle evolution process. Experimental results on public datasets demonstrate that the proposed method alleviates the drifting problem and effectively reduces ID switches and lost trajectories.
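A minimal sketch of the size-update rule stated above (the function name and the (width, height) representation are assumptions; the averaging rule itself is from the abstract):

```python
def update_target_size(prev_size, det_size):
    """Average the associated detection size with the previous estimate
    to compensate for KCF's fixed tracking scale.

    Sizes are (width, height) tuples.
    """
    return tuple(0.5 * (p + d) for p, d in zip(prev_size, det_size))

# Example: a tracker with size (64, 128) associated with a (72, 140) detection.
new_size = update_target_size((64, 128), (72, 140))  # -> (68.0, 134.0)
```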
Multi-object tracking in semi-crowded environments is a difficult task. Traditional methods that rely only on visual cues, such as appearance and simple motion prediction, fail because of frequent occlusions. In this paper, a new method based on formation stability is proposed to learn the interaction information among pedestrians. Whatever the relationship of a pedestrian with the others, it tends to remain unchanged for a while; that is, when a pedestrian is occluded, the relationship with the others stays the same after the pedestrian reappears. Trajectory clips broken by occlusions can therefore be associated together. The effectiveness of this method is validated by experiments.
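A hedged sketch of how such formation consistency could be scored when linking trajectory clips: compare the pedestrian's relative offsets to the same neighbors before occlusion and at a candidate reappearance. The mean offset change used here is an assumption, not the paper's exact formation-stability measure.

```python
import numpy as np

def formation_consistency(pos_before, neighbors_before, pos_after, neighbors_after):
    """Score how well a reappeared pedestrian preserves its formation.

    `pos_*` are the pedestrian's (x, y) positions before occlusion and at a
    candidate reappearance; `neighbors_*` are the corresponding positions of
    the same nearby pedestrians. Lower is more consistent.
    """
    offsets_before = np.asarray(neighbors_before) - np.asarray(pos_before)
    offsets_after = np.asarray(neighbors_after) - np.asarray(pos_after)
    return float(np.mean(np.linalg.norm(offsets_before - offsets_after, axis=1)))

# A low score means the candidate keeps the same relative layout, so the two
# trajectory clips can be associated as one pedestrian.
score = formation_consistency((0, 0), [(2, 0), (0, 3)], (5, 1), [(7, 1), (5, 4)])
```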