Ghent University Academic Bibliography


Silhouette coverage analysis for multi-modal video surveillance

Steven Verstockt, Chris Poppe, Pieterjan De Potter, Charles Hollemeersch, Sofie Van Hoecke, Peter Lambert and Rik Van de Walle (all UGent) (2011) Progress in Electromagnetics Research Symposium. p. 1279-1283
abstract
In order to improve the accuracy of video-based object detection, the proposed multi-modal video surveillance system takes advantage of the different kinds of information represented by visual, thermal and/or depth imaging sensors.

The multi-modal object detector of the system can be split into two consecutive parts: the registration and the coverage analysis. The multi-modal image registration is performed using a three-step silhouette-mapping algorithm which detects the rotation, scale and translation between moving objects in the visual, (thermal) infrared and/or depth images. First, moving object silhouettes are extracted to separate the calibration objects, i.e., the foreground, from the static background. Key components are dynamic background subtraction, foreground enhancement and automatic thresholding. Then, 1D contour vectors are generated from the resulting multi-modal silhouettes using silhouette boundary extraction, a Cartesian-to-polar transform and radial vector analysis. Next, to retrieve the rotation angle and the scale factor between the multi-sensor images, these contours are mapped onto each other using circular cross-correlation and contour scaling. Finally, the translation between the images is calculated by maximizing the binary correlation.

The silhouette coverage analysis also starts with moving object silhouette extraction. Then, it uses the registration information, i.e., the rotation angle, scale factor and translation vector, to map the thermal, depth and visual silhouette images onto each other. Finally, the coverage of the resulting multi-modal silhouette map is computed and analyzed over time to reduce false alarms and to improve object detection.

Prior experiments on real-world multi-sensor video sequences indicate that automated multi-modal video surveillance is promising. This paper shows that merging information from multi-modal video further improves the detection results.
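The registration pipeline summarized in the abstract (radial contour vectors, circular cross-correlation for rotation, contour scaling for scale, binary-correlation maximization for translation, then coverage of the combined silhouette map) can be sketched in NumPy. This is a minimal illustration under simplifying assumptions, not the authors' implementation: binary silhouette masks are assumed to be already extracted by background subtraction, the "1D contour vector" is taken as the outermost foreground radius per angular bin, and all function names are illustrative.

```python
import numpy as np

def radial_signature(mask, n_angles=360):
    """1D contour vector: distance from the silhouette centroid to the
    outermost foreground pixel in each angular bin (Cartesian-to-polar)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    angles = np.arctan2(ys - cy, xs - cx)                    # in [-pi, pi]
    radii = np.hypot(ys - cy, xs - cx)
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sig = np.zeros(n_angles)
    np.maximum.at(sig, bins, radii)                          # outermost radius per bin
    return sig

def estimate_rotation_scale(sig_a, sig_b):
    """Rotation angle from circular cross-correlation of the two radial
    signatures; scale factor from the ratio of their mean radii."""
    fa = np.fft.fft(sig_a - sig_a.mean())
    fb = np.fft.fft(sig_b - sig_b.mean())
    corr = np.fft.ifft(fb * np.conj(fa)).real                # circular cross-corr
    shift = int(np.argmax(corr))                             # angular-bin offset
    return shift * 360.0 / len(sig_a), sig_b.mean() / sig_a.mean()

def estimate_translation(mask_a, mask_b):
    """Translation by maximizing the binary correlation (FFT-based)."""
    corr = np.fft.ifft2(np.fft.fft2(mask_b) * np.conj(np.fft.fft2(mask_a))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = mask_a.shape                                      # map to signed shifts
    return ((dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2)

def silhouette_coverage(masks):
    """Coverage of the combined multi-modal silhouette map: fraction of the
    union of registered silhouettes on which all modalities agree."""
    stack = np.stack(masks).astype(bool)
    union = stack.any(axis=0).sum()
    return stack.all(axis=0).sum() / union if union else 0.0
```

`np.maximum.at` keeps only the outermost radius per bin, standing in for an explicit boundary trace, and both correlations use FFT identities so they run in O(n log n); in the paper the coverage value would be tracked over time to suppress false alarms.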
Please use this URL to cite or link to this publication: http://hdl.handle.net/1854/LU-1207419
author: Steven Verstockt, Chris Poppe, Pieterjan De Potter, Charles Hollemeersch, Sofie Van Hoecke, Peter Lambert and Rik Van de Walle
organization
year: 2011
type: conference
publication status: published
subject
keyword: Video Surveillance
in: Progress in Electromagnetics Research Symposium
issue title: PIERS 2011 MARRAKESH: PROGRESS IN ELECTROMAGNETICS RESEARCH SYMPOSIUM
pages: 1279-1283
publisher: The Electromagnetics Academy
place of publication: Cambridge, MA, USA
conference name: Progress in Electromagnetics Research Symposium (PIERS)
conference location: Marrakesh, Morocco
conference start: 2011-03-20
conference end: 2011-03-23
Web of Science type: Proceedings Paper
Web of Science id: 000332513000279
ISBN: 9781934142165
language: English
UGent publication?: yes
classification: P1
copyright statement: I have transferred the copyright for this publication to the publisher
id: 1207419
handle: http://hdl.handle.net/1854/LU-1207419
date created: 2011-04-12 10:15:41
date last changed: 2015-06-26 15:49:21
@inproceedings{1207419,
  abstract     = {In order to improve the accuracy of video-based object detection, the proposed multi-modal video surveillance system takes advantage of the different kinds of information represented by visual, thermal and/or depth imaging sensors.

The multi-modal object detector of the system can be split into two consecutive parts: the registration and the coverage analysis. The multi-modal image registration is performed using a three-step silhouette-mapping algorithm which detects the rotation, scale and translation between moving objects in the visual, (thermal) infrared and/or depth images. First, moving object silhouettes are extracted to separate the calibration objects, i.e., the foreground, from the static background. Key components are dynamic background subtraction, foreground enhancement and automatic thresholding. Then, 1D contour vectors are generated from the resulting multi-modal silhouettes using silhouette boundary extraction, a Cartesian-to-polar transform and radial vector analysis. Next, to retrieve the rotation angle and the scale factor between the multi-sensor images, these contours are mapped onto each other using circular cross-correlation and contour scaling. Finally, the translation between the images is calculated by maximizing the binary correlation.

The silhouette coverage analysis also starts with moving object silhouette extraction. Then, it uses the registration information, i.e., the rotation angle, scale factor and translation vector, to map the thermal, depth and visual silhouette images onto each other. Finally, the coverage of the resulting multi-modal silhouette map is computed and analyzed over time to reduce false alarms and to improve object detection.

Prior experiments on real-world multi-sensor video sequences indicate that automated multi-modal video surveillance is promising. This paper shows that merging information from multi-modal video further improves the detection results.},
  author       = {Verstockt, Steven and Poppe, Chris and De Potter, Pieterjan and Hollemeersch, Charles and Van Hoecke, Sofie and Lambert, Peter and Van de Walle, Rik},
  booktitle    = {Progress in Electromagnetics Research Symposium},
  isbn         = {9781934142165},
  keywords     = {Video Surveillance},
  language     = {eng},
  location     = {Marrakesh, Morocco},
  pages        = {1279--1283},
  publisher    = {The Electromagnetics Academy},
  title        = {Silhouette coverage analysis for multi-modal video surveillance},
  year         = {2011},
}

Chicago
Verstockt, Steven, Chris Poppe, Pieterjan De Potter, Charles Hollemeersch, Sofie Van Hoecke, Peter Lambert, and Rik Van de Walle. 2011. “Silhouette Coverage Analysis for Multi-modal Video Surveillance.” In Progress in Electromagnetics Research Symposium, 1279–1283. Cambridge, MA, USA: The Electromagnetics Academy.
APA
Verstockt, S., Poppe, C., De Potter, P., Hollemeersch, C., Van Hoecke, S., Lambert, P., & Van de Walle, R. (2011). Silhouette coverage analysis for multi-modal video surveillance. Progress in Electromagnetics Research Symposium (pp. 1279–1283). Presented at the Progress in Electromagnetics Research Symposium (PIERS), Cambridge, MA, USA: The Electromagnetics Academy.
Vancouver
1.
Verstockt S, Poppe C, De Potter P, Hollemeersch C, Van Hoecke S, Lambert P, et al. Silhouette coverage analysis for multi-modal video surveillance. Progress in Electromagnetics Research Symposium. Cambridge, MA, USA: The Electromagnetics Academy; 2011. p. 1279–83.
MLA
Verstockt, Steven, Chris Poppe, Pieterjan De Potter, et al. “Silhouette Coverage Analysis for Multi-modal Video Surveillance.” Progress in Electromagnetics Research Symposium. Cambridge, MA, USA: The Electromagnetics Academy, 2011. 1279–1283. Print.