Temporal synchronization of video sequences in theory and in practice
In this work, we present a formalization of the video synchronization problem that exposes new variants of the problem that have been left unexplored to date. We also present a novel method to temporally synchronize multiple stationary video cameras with overlapping views that: 1) does not rely on specific scene properties, 2) suffices for all variants of the synchronization problem exposed by the theoretical formalization, and 3) does not require the trajectory correspondence problem to be solved a priori. The method uses a two-stage approach that first approximates the synchronization by tracking moving objects and identifying inflection points. The method then refines this estimate using a consensus-based matching heuristic to find moving features that best agree with the camera geometries pre-computed from stationary image features. By using the fundamental matrix and the trifocal tensor in the second, refinement step, we are able to improve the estimate of the first step and handle a broader range of input scenarios and camera conditions.
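The consensus-based refinement idea can be illustrated with a minimal sketch: given two tracked trajectories of the same moving object and a fundamental matrix pre-computed from stationary features, score each candidate integer frame offset by how well the aligned points satisfy the epipolar constraint (here via the Sampson error), and keep the offset with the best consensus score. This is an assumption-laden illustration, not the authors' implementation: the function names, the median-error scoring, and the integer-offset search are choices made for this sketch.

```python
import numpy as np

def sampson_errors(F, x1, x2):
    """Per-point Sampson epipolar error; x1, x2 are (3, N) homogeneous points."""
    Fx1 = F @ x1
    Ftx2 = F.T @ x2
    num = np.sum(x2 * Fx1, axis=0) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den

def estimate_offset(traj1, traj2, F, max_shift=10):
    """Find the integer offset d (traj1[i] <-> traj2[i+d]) whose overlapping
    samples best satisfy the epipolar constraint x2^T F x1 = 0."""
    n1, n2 = len(traj1), len(traj2)
    best_d, best_err = None, np.inf
    for d in range(-max_shift, max_shift + 1):
        lo, hi = max(0, -d), min(n1, n2 - d)
        if hi - lo < 5:  # require enough overlap for a stable consensus score
            continue
        x1 = np.vstack([traj1[lo:hi].T, np.ones(hi - lo)])
        x2 = np.vstack([traj2[lo + d:hi + d].T, np.ones(hi - lo)])
        err = np.median(sampson_errors(F, x1, x2))  # robust consensus score
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Synthetic check: two cameras differing by a pure x-translation, for which
# F = [t]_x; camera 2 starts recording 3 frames before camera 1.
times = np.arange(-3.0, 60.0)
world = np.stack([np.sin(times / 5), np.cos(times / 3),
                  np.full(times.shape, 5.0)], axis=1)
proj1 = world[:, :2] / world[:, 2:3]                       # camera 1 at origin
proj2 = (world + [1.0, 0.0, 0.0])[:, :2] / world[:, 2:3]   # camera 2 shifted in x
traj1 = proj1[3:]    # frames of camera 1 cover times 0..59
traj2 = proj2[:60]   # frames of camera 2 cover times -3..56
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
print(estimate_offset(traj1, traj2, F))  # recovers the true offset, 3
```

A real pipeline would search at sub-frame resolution and combine scores across many tracked features (and a trifocal-tensor term for three views), but the core loop above shows why no a-priori trajectory correspondence is needed: the epipolar consensus score itself selects the alignment.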
Keywords: Computer vision, Video processing, Video synchronization
Conference: IEEE Workshop on Motion and Video Computing, MOTION 2005
Organisation: School of Computer Science
Whitehead, A., Laganiere, R., & Bose, P. (2007). Temporal synchronization of video sequences in theory and in practice. Presented at the IEEE Workshop on Motion and Video Computing, MOTION 2005. doi:10.1109/ACVMOT.2005.114