1.
TECHNIQUES FOR SYNCHRONIZING THERMAL ARRAY CHART RECORDERS TO VIDEO. Gaskill, David M. October 1992.
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California

Video tape is becoming increasingly popular for storing and analyzing missions. Video
tape is inexpensive, holds a two-hour test, and can be edited and manipulated with readily
available consumer electronics equipment. Standard technology allows each frame to be
time-stamped with SMPTE code, so that any point in the mission can be displayed on
a CRT. To further correlate data from multiple acquisition systems, the SMPTE code can
be derived from IRIG using commercially available code converters.
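As an illustration of the timecode relationship described above, here is a minimal sketch; it is my own, not any converter product the paper refers to. It formats an IRIG-style time-of-day as a 30 fps non-drop SMPTE label; real IRIG-to-SMPTE converters also phase-lock to the video signal and support drop-frame rates.

```python
# Minimal sketch (illustrative only): label an IRIG-style time-of-day
# as a 30 fps non-drop SMPTE HH:MM:SS:FF timecode.

def seconds_to_smpte(seconds_of_day: float, fps: int = 30) -> str:
    """Convert elapsed seconds since midnight to an SMPTE-style label."""
    total_frames = int(round(seconds_of_day * fps))
    frames = total_frames % fps
    secs = (total_frames // fps) % 60
    mins = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{mins:02d}:{secs:02d}:{frames:02d}"

# 7 h 30 min into the day, one third of a second past the tick:
print(seconds_to_smpte(7 * 3600 + 30 * 60 + 1 / 3))  # 07:30:00:10
```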
Unfortunately, acquiring and storing analog data has not been so easy. Typically, analog
signals from various sensors are coded, transmitted, decoded and sent to a chart recorder.
Since chart recorders cannot normally store an entire mission internally or time-stamp
each data value, it is very difficult for an analyst to accurately correlate analog data to an
individual video frame. Normally the only method is to note the time stamp on the video
frame, unroll the chart to the nearest second or minute mark (depending on the code used)
annotated in the margin, and estimate the frame location as a percentage of the time code
period; a worked sketch of this arithmetic follows the abstract. This is very inconvenient
if the telemetrist is trying to establish an on-line data retrieval system. To make matters
worse, the two presentation media, chart paper and a CRT, are very different and force the
analyst to shift focus constantly. For
these reasons, many telemetry stations do not currently have a workable plan to integrate
analog and video subsystems even though it is now generally agreed that such integration
is ultimately desirable.
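The manual correlation described above reduces to simple arithmetic. The following sketch works it through under assumed values (a 25 mm/s chart speed, a 30 fps video, and one margin annotation per second); all names and numbers are illustrative, not from the paper.

```python
# Estimate how far past the margin time mark a video frame's data lies
# on the chart paper, given assumed chart and video parameters.

CHART_SPEED_MM_S = 25.0   # assumed chart paper speed
FPS = 30                  # assumed video frame rate
CODE_PERIOD_S = 1.0       # assumed: one margin annotation per second

def chart_offset_mm(frame_in_second: int) -> float:
    """Distance past the margin time mark for a given frame index."""
    fraction = frame_in_second / FPS          # fraction of the time code period
    return fraction * CODE_PERIOD_S * CHART_SPEED_MM_S

# Frame 12 of the stamped second lands about 10 mm past the margin mark:
print(f"{chart_offset_mm(12):.1f} mm")  # 10.0 mm
```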
2.
Video sequence synchronization. Wedge, Daniel John. January 2008.
[Truncated abstract] Video sequence synchronization is necessary for any computer vision application that integrates data from multiple simultaneously recorded video sequences. With the increased availability of video cameras as either dedicated devices, or as components within digital cameras or mobile phones, a large volume of video data is available as input for a growing range of computer vision applications that process multiple video sequences. To ensure that the output of these applications is correct, accurate video sequence synchronization is essential. Whilst hardware synchronization methods can embed timestamps into each sequence on-the-fly, they require specialized hardware and it is necessary to set up the camera network in advance. On the other hand, computer vision-based software synchronization algorithms can be used to post-process video sequences recorded by cameras that are not networked, such as common consumer hand-held video cameras or cameras embedded in mobile phones, or to synchronize historical videos for which hardware synchronization was not possible. The current state-of-the-art software algorithms vary in their input and output requirements and camera configuration assumptions. ...

Next, I describe an approach that synchronizes two video sequences where an object exhibits ballistic motions. Given the epipolar geometry relating the two cameras and the imaged ballistic trajectory of an object, the algorithm uses a novel iterative approach that exploits object motion to rapidly determine pairs of temporally corresponding frames. This algorithm accurately synchronizes videos recorded at different frame rates and takes few iterations to converge to sub-frame accuracy. Whereas the method presented by the first algorithm integrates tracking data from all frames to synchronize the sequences as a whole, this algorithm recovers the synchronization by locating pairs of temporally corresponding frames in each sequence.

Finally, I introduce an algorithm for synchronizing two video sequences recorded by stationary cameras with unknown epipolar geometry. This approach is unique in that it recovers both the frame rate ratio and the frame offset of the two sequences by finding matching space-time interest points that represent events in each sequence; the algorithm does not require object tracking. RANSAC-based approaches that take a set of putatively matching interest points and recover either a homography or a fundamental matrix relating a pair of still images are well known. This algorithm extends these techniques using space-time interest points in place of spatial features, and uses nested instances of RANSAC to also recover the frame rate ratio and frame offset of a pair of video sequences.

In this thesis, it is demonstrated that each of the above algorithms can accurately recover the frame rate ratio and frame offset of a range of real video sequences. Each algorithm makes a contribution to the body of video sequence synchronization literature, and it is shown that the synchronization problem can be solved using a range of approaches.
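The linear temporal model underlying the abstract's "frame rate ratio and frame offset" is f2 = r * f1 + d. The sketch below is my own simplification, not Wedge's implementation: it fits that model with a single plain RANSAC loop over putatively matching event frames, whereas the thesis additionally matches space-time interest points and nests RANSAC instances.

```python
# Simplified RANSAC fit of the temporal model f2 = r * f1 + d from
# candidate frame correspondences that may contain outliers.
import random

def ransac_sync(pairs, iters=500, tol=0.5, seed=0):
    """pairs: list of (f1, f2) candidate frame correspondences.
    Returns (r, d) maximizing pairs with |f2 - (r*f1 + d)| < tol."""
    rng = random.Random(seed)
    best, best_inliers = (1.0, 0.0), 0
    for _ in range(iters):
        (a1, b1), (a2, b2) = rng.sample(pairs, 2)
        if a1 == a2:
            continue  # degenerate sample: slope is undefined
        r = (b2 - b1) / (a2 - a1)
        d = b1 - r * a1
        inliers = sum(1 for f1, f2 in pairs if abs(f2 - (r * f1 + d)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (r, d), inliers
    return best

# Synthetic check: sequence 2 runs at half the frame rate, offset 7 frames,
# plus a few outlier correspondences.
truth = [(f, 0.5 * f + 7) for f in range(0, 200, 5)]
noise_rng = random.Random(1)
noise = [(noise_rng.uniform(0, 200), noise_rng.uniform(0, 100)) for _ in range(3)]
print(ransac_sync(truth + noise))  # ~ (0.5, 7.0)
```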