21

Polynomial Expansion-Based Displacement Calculation on FPGA / Polynomexpansions-baserad förskjutningsberäkning på FPGA

Ehrenstråhle, Carl January 2016 (has links)
This thesis implements a system for calculating the displacement between two consecutive video frames. The displacement is calculated using a polynomial expansion-based algorithm. A unit-tested, bottom-up approach is successfully used to design and implement the system. The design and implementation are elaborated upon in detail. The chosen algorithm and its computational details are presented to provide context for the implemented system. Some of the major issues and their impact on the system are discussed.
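
The polynomial expansion approach referenced here is best known from Farnebäck's algorithm, which is available in OpenCV. As a point of reference, a minimal software sketch of polynomial expansion-based displacement calculation between two frames is given below; parameter values and file names are illustrative assumptions, not taken from the thesis, which targets an FPGA rather than a CPU library.

```python
# Sketch: polynomial expansion-based displacement (Farneback) between two
# consecutive frames, as a software reference point for the FPGA design.
# Parameter values and file names are illustrative assumptions.
import cv2
import numpy as np

prev_frame = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Each pixel's displacement (dx, dy) is estimated by approximating local
# neighbourhoods with quadratic polynomials and comparing coefficients.
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, next_frame, None,
    pyr_scale=0.5,   # image pyramid scale between levels
    levels=3,        # number of pyramid levels
    winsize=15,      # averaging window size
    iterations=3,    # iterations per pyramid level
    poly_n=5,        # neighbourhood size for the polynomial expansion
    poly_sigma=1.2,  # Gaussian std dev weighting the neighbourhood
    flags=0,
)
dx, dy = flow[..., 0], flow[..., 1]
print("mean displacement magnitude:", np.mean(np.hypot(dx, dy)))
```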
22

Nonrigid surface tracking, analysis and evaluation

Li, Wenbin January 2014 (has links)
Estimating dense image motion, or optical flow, on a real-world nonrigid surface is a fundamental research problem in computer vision, applicable to a wide range of fields including medical imaging, computer animation and robotics. However, nonrigid surface tracking is difficult because complex nonrigid deformation, accompanied by image blur and natural noise, may cause severe intensity changes to pixels through an image sequence. This violates the basic intensity constancy assumption of most visual tracking methods. In this thesis, we show that local geometric constraints and long-term feature matching techniques can improve local motion preservation and reduce error accumulation in optical flow estimation. We also demonstrate that combining RGB data with additional information from other sensing channels can improve tracking performance in blurry scenes, and allows us to create nonrigid ground truth from real-world scenes. First, we introduce a local motion constraint based on a Laplacian mesh representation of nonrigid surfaces. This additional constraint term encourages local smoothness while preserving nonrigid deformation. The results show that our method outperforms most global constraint based models on several popular benchmarks. Second, we observe that the inter-frame blur in general video sequences is near linear and can be roughly represented by 3D camera motion. To recover dense correspondences from a blurred scene, we design a mechanical device to track camera motion and formulate this as a directional constraint in the optical flow framework, improving optical flow in blurred scenes. Third, inspired by recent developments in long-term feature matching, we introduce an optimisation framework for dense long-term tracking using anchor patches, applicable to any existing optical flow method. Finally, we observe that traditional nonrigid surface analysis suffers from a lack of suitable ground truth datasets given real-world noise and long image sequences. To address this, we construct a new ground truth by simultaneously capturing both normal RGB and near-infrared images; the latter spectrum contains dense markers, visible only in the infrared, that represent ground truth positions. Our benchmark contains many real-world scenes and properties absent from existing ground truth datasets.
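
As a rough illustration of a Laplacian-based local smoothness term, the sketch below penalises the squared Laplacian of a dense flow field on a regular pixel grid. It is a minimal stand-in under our own assumptions; the thesis works with a Laplacian mesh representation of the surface, which this grid version does not reproduce.

```python
# Sketch: a Laplacian smoothness term on a dense flow field, a grid-based
# analogue of a mesh-based local motion constraint. Illustrative only.
import numpy as np
from scipy.ndimage import laplace

def laplacian_smoothness_energy(flow):
    """flow: (H, W, 2) array of per-pixel displacements (dx, dy).

    Returns the summed squared Laplacian of each flow component.
    Minimising this term keeps each pixel's motion close to the average
    motion of its neighbours (local smoothness) while still permitting
    smooth nonrigid deformation.
    """
    lap_x = laplace(flow[..., 0])
    lap_y = laplace(flow[..., 1])
    return np.sum(lap_x ** 2 + lap_y ** 2)

# Usage: add lambda * laplacian_smoothness_energy(flow) to a data term
# (e.g. intensity constancy) and minimise over candidate flow fields.
flow = np.random.randn(64, 64, 2)
print(laplacian_smoothness_energy(flow))
```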
23

A Tiny Diagnostic Dataset and Diverse Modules for Learning-Based Optical Flow Estimation

Xie, Shuang 18 September 2019 (has links)
Recent work has shown that flow estimation from a pair of images can be formulated as a supervised learning task and resolved with convolutional neural networks (CNNs). However, straightforward CNN methods estimate optical flow with blur at motion and occlusion boundaries. To tackle this problem, we propose a tiny diagnostic dataset called FlowClevr to quickly evaluate various modules that can be used to enhance standard CNN architectures. Based on experiments on the FlowClevr dataset, we find that a deformable module can improve model prediction accuracy by around 30% to 100% in most tasks and, more significantly, reduce boundary blur. Based on these results, we design modifications to various existing network architectures that improve their performance. Compared with the original models, models with the deformable module clearly reduce boundary blur and achieve large improvements on the MPI Sintel dataset, an omni-directional stereo (ODS) dataset and a novel omni-directional optical flow dataset.
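
To illustrate what a deformable module can look like inside a CNN flow estimator, here is a minimal PyTorch sketch built on torchvision's DeformConv2d. The block design, channel sizes and offset predictor are our own assumptions, not necessarily those used in the thesis.

```python
# Sketch: a deformable convolution block that could replace a standard
# conv layer in a CNN flow estimator. Minimal illustrative design.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # A small conv predicts a 2D sampling offset for every position of
        # the k x k kernel, letting the receptive field bend around motion
        # boundaries instead of staying on a rigid grid.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k,
                                     kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch,
                                        kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_pred(x)
        return self.deform_conv(x, offsets)

# Usage on a dummy feature map.
block = DeformableBlock(in_ch=32, out_ch=64)
feats = torch.randn(1, 32, 48, 48)
print(block(feats).shape)  # torch.Size([1, 64, 48, 48])
```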
24

Visual-inertial tracking using Optical Flow measurements

Larsson, Olof January 2010 (has links)
Visual-inertial tracking is a well-known technique for tracking the combination of a camera and an inertial measurement unit (IMU). An issue with the straightforward approach is the need for known 3D points. To bypass this, 2D information can be used, without recovering depth, to estimate the position and orientation (pose) of the camera. This Master's thesis investigates the feasibility of using Optical Flow (OF) measurements and indicates the benefits of this approach.

The 2D information is added using OF measurements. OF describes the visual flow of interest points in the image plane. Without the need to estimate the depth of these points, the computational complexity is reduced. With the increased 2D information, the amount of 3D information required for the pose estimate decreases.

The use of 2D points for pose estimation has been verified with experimental data gathered by a real camera/IMU system. Several data sequences containing different trajectories are used to estimate the pose. It is shown that OF measurements can be used to improve visual-inertial tracking with a reduced need for 3D-point registrations.
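
As an illustration of the kind of OF measurements described here, the following sketch detects interest points and tracks them between two frames with pyramidal Lucas-Kanade. It is a minimal stand-in under our own assumptions (file names included), not the thesis's camera/IMU pipeline.

```python
# Sketch: extracting optical flow (OF) measurements of interest points,
# of the kind that could feed a visual-inertial pose filter.
import cv2
import numpy as np

prev_img = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect interest points in the first frame...
pts_prev = cv2.goodFeaturesToTrack(prev_img, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

# ...and track them into the next frame with pyramidal Lucas-Kanade.
pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img,
                                               pts_prev, None)

# Each surviving point yields a 2D flow vector, a measurement usable in a
# filter update without ever recovering the point's depth.
good = status.ravel() == 1
flow_vectors = (pts_next[good] - pts_prev[good]).reshape(-1, 2)
print(f"{len(flow_vectors)} OF measurements, mean magnitude "
      f"{np.linalg.norm(flow_vectors, axis=1).mean():.2f} px")
```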
25

Motion Field and Optical Flow: Qualitative Properties

Verri, Alessandro, Poggio, Tomaso 01 December 1986 (has links)
In this paper we show that the optical flow, a 2D field that can be associated with the variation of the image brightness pattern, and the 2D motion field, the projection on the image plane of the 3D velocity field of a moving scene, are in general different, unless very special conditions are satisfied. The optical flow, therefore, is ill-suited for computing structure from motion and for reconstructing the 3D velocity field, problems that require an accurate estimate of the 2D motion field. We then suggest a different use of the optical flow. We argue that stable qualitative properties of the 2D motion field give useful information about the 3D velocity field and the 3D structure of the scene, and that they can usually be obtained from the optical flow. To support this approach we show how the (smoothed) optical flow and 2D motion field, interpreted as vector fields tangent to flows of planar dynamical systems, may have the same qualitative properties from the point of view of the theory of structural stability of dynamical systems.
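
To make the distinction concrete, the two fields can be summarised as follows, in our own notation and with a standard textbook example rather than the paper's exact formulas.

```latex
% An illustrative summary, not the paper's formulas. The optical flow v
% is any field satisfying brightness constancy for image brightness
% E(x, y, t):
\nabla E \cdot v + \frac{\partial E}{\partial t} = 0
% The motion field u is instead the perspective projection of the 3D
% velocity field of scene points onto the image plane. In general
% v \neq u: a classic illustration is a uniform, featureless sphere
% rotating under fixed illumination, whose motion field is nonzero while
% its optical flow is zero, because the brightness pattern never changes.
```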
26

The Smoothest Velocity Field and Token Matching

Yuille, A.L. 01 August 1983 (has links)
This paper presents some mathematical results concerning the measurement of motion of contours. A fundamental problem of motion measurement in general is that the velocity field is not determined uniquely from the changing intensity patterns. Recently, Hildreth and Ullman have studied a solution to this problem based on an Extremum Principle [Hildreth (1983), Ullman & Hildreth (1983)]. That is, they formulate the measurement of motion as the computation of the smoothest velocity field consistent with the changing contour. We analyse this Extremum Principle and prove that it is closely related to a matching scheme for motion measurement which matches points on the moving contour that have similar tangent vectors. We then derive necessary and sufficient conditions for the principle to yield the correct velocity field. These results have possible implications for the design of computer vision systems and for the study of human vision.
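
For context, the Extremum Principle analysed here can be stated as follows, in our own notation; the paper's formulation may differ.

```latex
% Along a contour parameterised by arc length s, only the component of
% velocity normal to the contour, v^{\perp}(s), is directly measurable
% (the aperture problem). The principle selects, among all velocity
% fields V(s) consistent with those measurements, the smoothest:
\min_{V} \int \left\| \frac{\partial V}{\partial s} \right\|^{2} ds
\quad \text{subject to} \quad V(s) \cdot n(s) = v^{\perp}(s),
% where n(s) is the unit normal to the contour at s. The smoothness term
% disambiguates the tangential component that the measurements leave free.
```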
27

Learning Object-Independent Modes of Variation with Feature Flow Fields

Miller, Erik G., Tieu, Kinh, Stauffer, Chris P. 01 September 2001 (has links)
We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single example of that object. We develop the framework in the context of a well-known example: analyzing the modes of spatial deformations of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and does so in a completely unsupervised (i.e. ignorant of the geometry of the situation) fashion. We stress that it is learning a "parameterization", not just the parameter values, of the data. We then demonstrate how we have used the same framework to derive a novel data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing non-linear color changes due to lighting.
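
As a simplified stand-in for this framework, the following sketch stacks flow fields from a sequence and extracts dominant modes of variation with PCA; synthetic data and all design choices here are our own assumptions, not the paper's method.

```python
# Sketch: recovering dominant "modes of variation" from a stack of flow
# fields via PCA, a simplified stand-in for the paper's framework.
import numpy as np

# Assume flows: (N, H, W, 2) optical flow fields from a video sequence,
# e.g. computed between consecutive frames. Synthetic stand-in here.
N, H, W = 50, 32, 32
rng = np.random.default_rng(0)
flows = rng.standard_normal((N, H, W, 2))

# Flatten each flow field into a vector and find principal components.
X = flows.reshape(N, -1)
X -= X.mean(axis=0)
# SVD of the data matrix: rows of Vt are the principal flow modes.
_, s, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt[:6].reshape(6, H, W, 2)  # top 6 modes, reshaped back to fields

# Under camera motion such modes tend to approximate affine generators
# (translation, rotation, scaling, shear) of the image deformation.
print("variance explained by top 6 modes:",
      (s[:6] ** 2).sum() / (s ** 2).sum())
```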
28

Optical Flow Based Structure from Motion

Zucchelli, Marco January 2002 (has links)
No description available.
29

Incorporating Omni-Directional Image and the Optical Flow Technique into Movement Estimation

Chou, Chia-Chih 30 July 2007 (has links)
From the viewpoint of applications, conventional cameras are usually limited in their fields of view. An omni-directional camera has full range in all directions, giving a complete field of view. In the past, a moving object could be detected only when the camera was static or moving at a known speed. If those methods are applied to mobile robots or vehicles, it is difficult to determine the motion of objects observed by the camera. In this paper, we assume the omni-directional camera is mounted on a moving platform that travels with planar motion. The floor region in the omni-directional image and the brightness constraint equation are used to estimate the ego-motion. Depth information is acquired from the floor image, providing what a single-camera system cannot otherwise obtain. Using the estimated ego-motion, the optical flow caused by the floor motion can be computed; comparing its direction with the direction of the optical flow observed in the image then leads to detection of moving objects. Thanks to the depth information, a moving object can be accurately identified even when the camera undergoes combined translational and rotational motion.
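
A simplified sketch of the direction-comparison step is given below; it uses our own thresholds and synthetic flows, and omits the omni-directional geometry and ego-motion estimation of the paper.

```python
# Sketch: flagging moving objects by comparing observed flow direction
# against the flow direction predicted from ego-motion. Illustrative only.
import numpy as np

def moving_object_mask(observed_flow, predicted_flow,
                       angle_thresh_deg=30.0, min_magnitude=0.5):
    """observed_flow, predicted_flow: (H, W, 2) arrays of (dx, dy).

    Pixels whose observed flow direction deviates from the ego-motion
    prediction by more than the threshold are flagged as belonging to
    independently moving objects.
    """
    obs_mag = np.linalg.norm(observed_flow, axis=-1)
    pred_mag = np.linalg.norm(predicted_flow, axis=-1)
    # Angle between the two flow vectors at each pixel.
    dot = np.sum(observed_flow * predicted_flow, axis=-1)
    cos_angle = dot / np.maximum(obs_mag * pred_mag, 1e-9)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    # Ignore near-zero flow, where direction is unreliable.
    valid = (obs_mag > min_magnitude) & (pred_mag > min_magnitude)
    return valid & (angle > angle_thresh_deg)

# Usage with synthetic flows: a patch moving against the predicted direction.
pred = np.tile([1.0, 0.0], (64, 64, 1))  # ego-motion flow: rightward
obs = pred.copy()
obs[20:30, 20:30] = [-1.0, 0.5]          # an independently moving patch
print(moving_object_mask(obs, pred).sum(), "pixels flagged")
```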
