1.
Affine Transform Motion Compensation for Intermodal Cargo Identification. Siplon, Jonathan Page. 20 May 2005 (has links)
The volume of cargo flowing through today's transportation system is growing at an ever-increasing rate. Recent studies show that 90% of all international cargo entering the United States flows through our vast seaport system. When this cargo enters the US, time is of the essence to quickly obtain and verify its identity, screen it against an increasingly wide variety of security concerns, and ultimately direct the cargo correctly toward its final destination.
Over the past few years, new port and container security initiatives and regulations have generated strong interest in accurate, real-time identification and tracking of incoming and outgoing vehicles and cargo. In contrast, the manually intensive identification and tracking processes typically employed today are both inefficient and inadequate, and can be seen as an enabling factor for potential threats to our ports and therefore to our national security. This gap between current and required processes, coupled with the accelerated growth in container traffic, clearly establishes the need for a solution.
One heavily researched option is the use of video-based systems that apply Optical Character Recognition (OCR) to automatically extract the unique container identification code, expediting the flow of cargo through various points in the seaport. The current process, along with the opportunities and challenges of adding such a technological solution, will be investigated in detail.
This thesis investigates the feasibility of applying motion-compensation algorithms as an enhancement to OCR systems specifically designed to address the challenges of reading cargo-container codes in a seaport environment. Such motion compensation could offer a cost-effective alternative to the sophisticated hardware systems currently being offered to US ports.
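As a rough illustration of the core idea (not the thesis's actual implementation), affine motion compensation predicts a frame by resampling a reference frame through a 2x3 affine map. The sketch below uses nearest-neighbour sampling in plain NumPy; the function name and the border-clamping choice are this example's own assumptions.

```python
import numpy as np

def affine_compensate(ref, A, b):
    """Predict a frame by warping the reference with an affine map.

    For each output pixel (x, y), sample ref at A @ [x, y] + b using
    nearest-neighbour rounding, clamping coordinates to the frame border.
    ref is a 2-D grayscale array; A is 2x2, b is length-2 (x, y offsets).
    """
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = A[0, 0] * xs + A[0, 1] * ys + b[0]
    src_y = A[1, 0] * xs + A[1, 1] * ys + b[1]
    src_x = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    return ref[src_y, src_x]

# Sanity check with a pure translation of (+2, +1) pixels.
ref = np.arange(64, dtype=float).reshape(8, 8)
pred = affine_compensate(ref, np.eye(2), np.array([2.0, 1.0]))
print(pred[0, 0] == ref[1, 2])  # output pixel (0,0) samples ref at (y=1, x=2)
```

In a full codec or OCR pipeline, the predicted frame would be subtracted from the actual frame and only the residual coded or analysed.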
2.
Multiple Global Affine Motion Models Used in Video Coding. Li, Xiaohuan. 05 March 2007 (has links)
In low-bit-rate scenarios, a hybrid video coder (e.g., AVC/H.264) tends to allocate a greater portion of bits to motion vectors while saving bits on residual errors. Motivated by this fact, a coding scheme is proposed that combines non-normative global motion models with conventional local motion vectors, describing the motion of a frame by affine motion parameter sets obtained through motion segmentation of the luminance channel. The motion segmentation adapts the number of motion objects to the video content. Six-parameter affine model sets are derived by linear regression from the scalable block-based motion fields estimated by the existing MPEG encoder. When the number of motion objects exceeds a certain threshold, the global affine models are disabled. Otherwise, the four scaling factors of the affine models are compressed by a vector quantizer designed with a dedicated cache memory for efficient search and coding. The affine motion information is written into the slice header as a syntax element. The global motion information is used to compensate those macroblocks whose Lagrange cost is minimized by the AFFINE mode. The rate-distortion cost is computed with a modified Lagrange equation that takes into account the perceptual discrimination of human vision in different areas.
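The linear-regression step described above can be sketched as an ordinary least-squares fit of the six-parameter model u = a1*x + a2*y + a3, v = a4*x + a5*y + a6 to the block-based motion field. This is a minimal illustration under that standard formulation, not the thesis's code; the function name and synthetic data are this example's assumptions.

```python
import numpy as np

def fit_affine_motion(centers, vectors):
    """Fit a 6-parameter affine motion model to block-based motion vectors.

    centers: (N, 2) array of block-center coordinates (x, y)
    vectors: (N, 2) array of motion vectors (u, v) at those centers
    Returns the parameters (a1..a6) as a length-6 array, solved by
    least squares for each motion component independently.
    """
    x, y = centers[:, 0], centers[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])  # design matrix [x y 1]
    pu, *_ = np.linalg.lstsq(A, vectors[:, 0], rcond=None)
    pv, *_ = np.linalg.lstsq(A, vectors[:, 1], rcond=None)
    return np.concatenate([pu, pv])

# Synthetic check: a motion field generated from a known affine model
# should be recovered exactly by the regression.
rng = np.random.default_rng(0)
centers = rng.uniform(0, 352, size=(100, 2))  # e.g., a CIF-sized frame
true = np.array([0.01, -0.02, 3.0, 0.015, 0.005, -1.0])
u = true[0] * centers[:, 0] + true[1] * centers[:, 1] + true[2]
v = true[3] * centers[:, 0] + true[4] * centers[:, 1] + true[5]
params = fit_affine_motion(centers, np.column_stack([u, v]))
print(np.allclose(params, true))
```

In practice, outlier motion vectors from foreground objects would first be separated out by the motion segmentation, so each affine set is fitted only to vectors belonging to one motion object.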
Besides increasing coding efficiency, the global affine model offers two features that improve the quality of the compressed video. i) When a frame contains more than one slice, the global affine motion model enhances the error resilience of the video stream, because the affine motion parameters are duplicated in the headers of the different slices of the same frame. ii) The global motion model predicts a frame by warping the whole reference frame, which helps to reduce blocking artifacts in the compensated frame.
3.
Video Stabilization and Target Localization Using Feature Tracking with Video from Small UAVs. Johansen, David Linn. 27 July 2006 (has links) (PDF)
Unmanned Aerial Vehicles (UAVs) equipped with lightweight, inexpensive cameras have grown in popularity by enabling new uses of UAV technology. However, the video retrieved from small UAVs is often unwatchable due to high-frequency jitter. Beginning with an investigation of previous stabilization work, this thesis discusses the challenges of stabilizing UAV-based video. It then presents a software-based computer vision framework and discusses its use in developing a real-time stabilization solution. A novel approach to estimating intended video motion is then presented. Next, the thesis extends previous target-localization work by allowing the operator to identify targets directly, rather than relying solely on color segmentation, to improve reliability and applicability in real-world scenarios. The resulting approach creates a low-cost, easy-to-use solution for aerial video display and target localization.
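One common way to separate intended camera motion from high-frequency jitter, offered here only as a generic sketch and not as this thesis's specific method, is to low-pass filter the cumulative frame-to-frame motion and apply the difference as a per-frame correction. The function name, window size, and synthetic motion signal below are all assumptions of this example.

```python
import numpy as np

def stabilize_path(frame_motion, window=15):
    """Estimate intended camera motion along one axis and return corrections.

    The observed camera path is the cumulative sum of per-frame displacements.
    Smoothing that path with a centered moving average approximates the
    intended motion; the returned array is the shift to apply to each frame
    (intended path minus observed path) to cancel the jitter.
    """
    path = np.cumsum(frame_motion)            # observed camera path
    pad = window // 2
    padded = np.pad(path, pad, mode="edge")   # hold endpoints at the borders
    kernel = np.ones(window) / window
    intended = np.convolve(padded, kernel, mode="valid")
    return intended - path

# A jittery pan: constant 1 px/frame motion plus alternating +/-0.5 px noise.
motion = 1.0 + 0.5 * np.where(np.arange(60) % 2 == 0, 1, -1)
corr = stabilize_path(motion)
print(corr.shape)
```

The same filtering would be applied independently to each motion component (horizontal, vertical, and possibly rotation) recovered from feature tracking between consecutive frames.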