11

Applying Optical Flow to Stereo Video Compression

Tsai, Cheng-Yuan 31 August 2004 (has links)
Stereo video has been attracting increasing attention because of the high visual quality it offers; however, the large volume of data involved remains an obstacle to its application. This thesis investigates a wavelet-based compression technique for stereo video data. The two parallax videos are highly similar, and this similarity is extracted by a motion-compensation technique: optical flow computation. Optical flow, proposed by Horn and Schunck, was originally developed in computer vision for motion detection. In this thesis we apply optical flow to capture the similarity between the parallax stereo videos. The wavelet transform, in turn, has proved to be a successful technique for multiscale modeling, so we apply it, combined with zerotree coding, to compress the optical flow fields. Experimental results demonstrate different effects in different situations.
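The Horn and Schunck formulation named in the abstract alternates between locally averaging the flow field and correcting it toward the brightness-constancy constraint. A minimal NumPy sketch of that iteration follows; the averaging kernel, parameters, and test frames are illustrative choices, not the thesis's implementation:

```python
# A minimal Horn & Schunck optical flow sketch in pure NumPy.
import numpy as np

def horn_schunck(f1, f2, alpha=1.0, n_iter=100):
    """Estimate dense flow (u, v) between two grayscale frames."""
    f1 = f1.astype(float)
    f2 = f2.astype(float)
    # Spatial and temporal brightness derivatives.
    Ix = np.gradient(f1, axis=1)
    Iy = np.gradient(f1, axis=0)
    It = f2 - f1
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for _ in range(n_iter):
        # Local averages of the current flow estimate (4-neighbour mean).
        u_bar = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_bar = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Pull the smoothed flow toward the brightness-constancy
        # constraint Ix*u + Iy*v + It = 0, weighted by alpha.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

For stereo compression the same machinery applies between the left and right views rather than between consecutive frames, with the recovered field acting as the disparity-compensation map.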
12

Image Tracking Using Optical Flow Approach

Ho, Kun-Shen 27 June 2001 (has links)
Optical flow, caused by relative motion between the object and the viewer, is the distribution of apparent velocities of brightness patterns in an image. The advantage of an optical-flow-based visual servo method is that features of the object do not need to be defined or known in advance. This research builds an image servo technique to deal with the problem of 3D relative motion between the viewer and the environment. The images are treated as the input and output signals of the control system and are fed back to extract the relative velocity information between consecutive image patterns. The video camera then automatically follows the motion so as to keep the target image unchanged.
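The feedback loop described above can be sketched in a few lines: estimate the dominant image motion from the brightness-constancy constraint, then command the camera to cancel it. The global least-squares flow estimate and the proportional gain below are illustrative simplifications, not the thesis's actual controller:

```python
# Hedged sketch: a pan command chosen to oppose the mean image motion,
# so the target image stays (approximately) unchanged.
import numpy as np

def mean_flow(f1, f2):
    """Global translation estimate: least-squares solution of
    Ix*u + Iy*v + It = 0 over all pixels."""
    Ix = np.gradient(f1, axis=1).ravel()
    Iy = np.gradient(f1, axis=0).ravel()
    It = (f2 - f1).ravel()
    A = np.stack([Ix, Iy], axis=1)
    uv, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return uv  # [u, v] in pixels per frame

def servo_command(f1, f2, gain=1.0):
    """Camera velocity command that opposes the observed image motion."""
    u, v = mean_flow(f1, f2)
    return -gain * u, -gain * v
```

In a real servo loop this estimate would be computed per frame pair and fed to the camera's pan/tilt actuators.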
13

Investigation of machine vision and path planning methods for use in an autonomous unmanned air vehicle

Williams, Matthew January 2000 (has links)
No description available.
14

A joint optical flow and principal component analysis approach for motion detection from outdoor videos

Liu, Kui 06 August 2011 (has links)
Optical flow and its extensions have been widely used in motion detection and computer vision. In this study, principal component analysis (PCA) is applied to optical flow fields for better motion detection performance. The joint optical flow and PCA approach can efficiently detect moving objects while suppressing small turbulence, and it is effective against both static and dynamic backgrounds. It is particularly useful for motion detection in outdoor videos with low quality and small moving objects. Experimental results demonstrate that this approach outperforms existing methods, extracting moving objects more completely with fewer false alarms. Strategies are developed to reduce the computational complexity of the optical flow calculation and of PCA, and a graphics processing unit (GPU)-based parallel implementation shows excellent speed-up.
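One plausible reading of the joint approach can be sketched as follows: treat each pixel's (u, v) flow vector as a sample, whiten the samples in the PCA basis of the flow covariance, and flag pixels whose whitened distance is large. Background turbulence stays within a few standard deviations; independently moving objects stand out. The threshold and the outlier criterion are illustrative choices, not the paper's exact formulation:

```python
# Hedged sketch: PCA-whitened outlier detection on per-pixel flow vectors.
import numpy as np

def motion_outliers(u, v, k=3.0):
    """Mask pixels whose flow is an outlier in the PCA-whitened flow space."""
    X = np.stack([u.ravel(), v.ravel()], axis=1)
    Xc = X - X.mean(axis=0)
    # 2x2 covariance of the flow field and its principal directions.
    cov = Xc.T @ Xc / len(Xc)
    w, V = np.linalg.eigh(cov)
    # Decorrelate and scale each principal coordinate to unit variance.
    white = Xc @ V / np.sqrt(w)
    d = np.linalg.norm(white, axis=1)
    return (d > k).reshape(u.shape)
```

Small turbulence contributes only to the covariance (and is thereby normalised away), while a coherently moving object produces flow vectors far from the bulk of the distribution.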
15

Image motion analysis using inertial sensors

Saunders, Thomas January 2015 (has links)
Understanding the motion of a camera from only the image(s) it captures is a difficult problem. At best we might hope to estimate the relative motion between camera and scene if we assume a static subject, but once we start considering scenes with dynamic content it becomes difficult to differentiate between motion due to the observer and motion due to scene movement. In this thesis we show how the invaluable cues provided by inertial sensor data can be used to simplify motion analysis and relax requirements for several computer vision problems. This work was funded by the University of Bath.
16

Polynomial Expansion-Based Displacement Calculation on FPGA / Polynomexpansions-baserad förskjutningsberäkning på FPGA

Ehrenstråhle, Carl January 2016 (has links)
This thesis implements a system for calculating the displacement between two consecutive video frames. The displacement is calculated using a polynomial expansion-based algorithm. A unit-tested, bottom-up approach is used to design and implement the system, and the resulting design is elaborated upon in detail. The chosen algorithm and its computational details are presented to provide context for the implementation, and some of the major issues and their impact on the system are discussed.
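The core of the polynomial-expansion idea (after Farnebäck) can be shown in one dimension: approximate a patch in each frame by a quadratic f(x) ≈ a·x² + b·x + c, and note that shifting by d turns the linear coefficient b into b − 2·a·d, so the displacement can be read off the coefficient change. The hardware design pipelines this per pixel; the single least-squares fit below is an illustrative simplification:

```python
# Hedged 1-D sketch of polynomial-expansion displacement estimation.
import numpy as np

def poly_displacement(f1, f2):
    """Estimate a constant shift d with f2(x) ~= f1(x - d)."""
    x = np.arange(len(f1), dtype=float)
    # Quadratic expansions f(x) ~= a*x^2 + b*x + c of both frames.
    a1, b1, _ = np.polyfit(x, f1, 2)
    a2, b2, _ = np.polyfit(x, f2, 2)
    a = 0.5 * (a1 + a2)  # shared curvature, averaged between frames
    # Shifting by d changes the linear coefficient: b2 = b1 - 2*a*d.
    return (b1 - b2) / (2.0 * a)
```

In two dimensions the same derivation yields a small linear system per pixel, which is what makes the method attractive for a fixed-point FPGA pipeline.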
17

Nonrigid surface tracking, analysis and evaluation

Li, Wenbin January 2014 (has links)
Estimating the dense image motion or optical flow on a real-world nonrigid surface is a fundamental research issue in computer vision, applicable to a wide range of fields including medical imaging, computer animation and robotics. However, nonrigid surface tracking is a difficult challenge because complex nonrigid deformation, accompanied by image blur and natural noise, may lead to severe intensity changes across an image sequence. This violates the basic intensity-constancy assumption of most visual tracking methods. In this thesis, we show that local geometric constraints and long-term feature matching techniques can improve local motion preservation and reduce error accumulation in optical flow estimation. We also demonstrate that combining RGB data with additional information from other sensing channels can improve tracking performance in blurry scenes, and allows us to create nonrigid ground truth from real-world scenes. First, we introduce a local motion constraint based on a Laplacian mesh representation of nonrigid surfaces. This additional constraint term encourages local smoothness whilst simultaneously preserving nonrigid deformation. The results show that our method outperforms most global-constraint-based models on several popular benchmarks. Second, we observe that the inter-frame blur in general video sequences is near linear and can be roughly represented by 3D camera motion. To recover dense correspondences from a blurred scene, we therefore design a mechanical device to track camera motion and formulate this as a directional constraint in the optical flow framework. This improves optical flow in blurred scenes. Third, inspired by recent developments in long-term feature matching, we introduce an optimisation framework for dense long-term tracking -- applicable to any existing optical flow method -- using anchor patches.
Finally, we observe that traditional nonrigid surface analysis suffers from a lack of suitable ground truth datasets given real-world noise and long image sequences. To address this, we construct a new ground truth by simultaneously capturing both normal RGB and near-infrared images. The latter spectrum contains dense markers, visible only in the infrared, and represents ground truth positions. Our benchmark contains many real-world scenes and properties absent in existing ground truth datasets.
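The Laplacian-mesh constraint mentioned above can be illustrated on a toy mesh: the graph Laplacian maps vertex positions to "delta coordinates" that encode local shape, so penalising their change permits rigid motion while resisting local distortion. The graph, weights and energy below are illustrative, not the thesis's formulation:

```python
# Hedged illustration: delta coordinates from a graph Laplacian are
# invariant to translation but sensitive to local nonrigid deformation.
import numpy as np

def graph_laplacian(n, edges):
    """Unweighted combinatorial Laplacian of a small graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

n = 5
edges = [(i, i + 1) for i in range(n - 1)]   # a tiny 1-D "mesh"
L = graph_laplacian(n, edges)
verts = np.stack([np.arange(n, dtype=float), np.zeros(n)], axis=1)
delta = L @ verts                            # local shape descriptor

rigid = verts + np.array([2.0, 3.0])         # global translation
bent = verts.copy(); bent[2, 1] += 1.0       # local nonrigid bend

# Translation leaves the delta coordinates unchanged; a bend does not.
e_rigid = np.linalg.norm(L @ rigid - delta)
e_bent = np.linalg.norm(L @ bent - delta)
```

Adding an energy like ‖L·v′ − δ‖² to a flow objective therefore smooths motion locally without forbidding large global displacements, which matches the behaviour the abstract describes.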
18

A Tiny Diagnostic Dataset and Diverse Modules for Learning-Based Optical Flow Estimation

Xie, Shuang 18 September 2019 (has links)
Recent work has shown that flow estimation from a pair of images can be formulated as a supervised learning task solved with convolutional neural networks (CNNs). However, straightforward CNN methods estimate optical flow with blur at motion and occlusion boundaries. To tackle this problem, we propose a tiny diagnostic dataset called FlowClevr to quickly evaluate various modules that can be used to enhance standard CNN architectures. In experiments on the FlowClevr dataset, we find that a deformable module can improve model prediction accuracy by around 30% to 100% on most tasks and significantly reduces boundary blur. Based on these results, we are able to design modifications to various existing network architectures that improve their performance. Compared with the original model, the model with the deformable module clearly reduces boundary blur and achieves a large improvement on the MPI Sintel dataset, an omni-directional stereo (ODS) dataset and a novel omni-directional optical flow dataset.
19

Motion Field and Optical Flow: Qualitative Properties

Verri, Alessandro, Poggio, Tomaso 01 December 1986 (has links)
In this paper we show that the optical flow, a 2D field that can be associated with the variation of the image brightness pattern, and the 2D motion field, the projection on the image plane of the 3D velocity field of a moving scene, are in general different, unless very special conditions are satisfied. The optical flow, therefore, is ill-suited for computing structure from motion and for reconstructing the 3D velocity field, problems that require an accurate estimate of the 2D motion field. We then suggest a different use of the optical flow. We argue that stable qualitative properties of the 2D motion field give useful information about the 3D velocity field and the 3D structure of the scene, and that they can usually be obtained from the optical flow. To support this approach we show how the (smoothed) optical flow and 2D motion field, interpreted as vector fields tangent to flows of planar dynamical systems, may have the same qualitative properties from the point of view of the theory of structural stability of dynamical systems.
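The paper's central distinction can be reproduced numerically: a uniform surface translating produces no brightness change (zero optical flow despite a nonzero motion field), while a static textured scene under changing illumination produces brightness change (nonzero optical flow despite a zero motion field). The normal-flow estimator below, −It·∇I/|∇I|², is the standard brightness-constancy estimate, used here only as a stand-in for "what an optical flow algorithm measures":

```python
# Hedged demo: two scenes where optical flow and the motion field disagree.
import numpy as np

def normal_flow(f1, f2, eps=1e-12):
    """Brightness-constancy flow estimate along the image gradient."""
    Ix = np.gradient(f1, axis=1)
    Iy = np.gradient(f1, axis=0)
    It = f2 - f1
    g2 = Ix**2 + Iy**2
    return -It * Ix / (g2 + eps), -It * Iy / (g2 + eps)

# Case 1: uniform region moving right by one pixel -> images identical,
# so measured flow is zero even though the motion field is not.
flat = np.full((16, 16), 0.5)
u1, v1 = normal_flow(flat, np.roll(flat, 1, axis=1))

# Case 2: static textured scene whose illumination dims by 10% ->
# nonzero measured flow even though nothing moves.
yy, xx = np.mgrid[0:16, 0:16]
tex = np.sin(xx / 2.0)
u2, v2 = normal_flow(tex, 0.9 * tex)
```

This is exactly why the paper argues for using only stable qualitative properties of the flow, rather than its pointwise values, when reasoning about 3D motion.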
20

The Smoothest Velocity Field and Token Matching

Yuille, A.L. 01 August 1983 (has links)
This paper presents some mathematical results concerning the measurement of motion of contours. A fundamental problem of motion measurement in general is that the velocity field is not determined uniquely from the changing intensity patterns. Recently Hildreth and Ullman have studied a solution to this problem based on an Extremum Principle [Hildreth (1983), Ullman & Hildreth (1983)]. That is, they formulate the measurement of motion as the computation of the smoothest velocity field consistent with the changing contour. We analyse this Extremum Principle and prove that it is closely related to a matching scheme for motion measurement which matches points on the moving contour that have similar tangent vectors. We then derive necessary and sufficient conditions for the principle to yield the correct velocity field. These results have possible implications for the design of computer vision systems and for the study of human vision.
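The tangent-matching scheme the paper relates to the Extremum Principle can be shown on a toy contour: match each point on the contour at time t to the point at time t+1 with the most similar tangent direction, and read the velocity from the matched pairs. The translating circle and the dense sampling are illustrative; for a pure translation, tangents are preserved and the scheme recovers the velocity field exactly:

```python
# Hedged sketch: velocity of a moving contour via tangent matching.
import numpy as np

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
c1 = np.stack([np.cos(t), np.sin(t)], axis=1)      # contour at time 0
c2 = c1 + np.array([0.3, 0.1])                     # translated contour
tan1 = np.stack([-np.sin(t), np.cos(t)], axis=1)   # unit tangents on c1
tan2 = tan1                                        # translation preserves tangents

# Match each c1 point to the c2 point with the most similar tangent
# (maximum dot product of unit tangents), then take displacements.
match = np.argmax(tan1 @ tan2.T, axis=1)
velocity = c2[match] - c1
```

The paper's necessary and sufficient conditions characterise exactly when this kind of matching (equivalently, the smoothest-velocity-field solution) yields the true velocity field rather than an approximation.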
