21

Learning Object-Independent Modes of Variation with Feature Flow Fields

Miller, Erik G., Tieu, Kinh, Stauffer, Chris P. 01 September 2001 (has links)
We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single example of that object. We develop the framework in the context of a well-known example: analyzing the modes of spatial deformations of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and does so in a completely unsupervised (i.e. ignorant of the geometry of the situation) fashion. We stress that it is learning a "parameterization", not just the parameter values, of the data. We then demonstrate how we have used the same framework to derive a novel data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing non-linear color changes due to lighting.
22

Optical Flow Based Structure from Motion

Zucchelli, Marco January 2002 (has links)
No description available.
23

Incorporating Omni-Directional Image and the Optical Flow Technique into Movement Estimation

Chou, Chia-Chih 30 July 2007 (has links)
From the viewpoint of applications, conventional cameras are usually limited in their fields of view. An omni-directional camera covers the full range in all directions and thus gains a complete field of view. Previously, a moving object could be detected only when the camera was static or moving at a known speed; if those methods are applied to mobile robots or vehicles, it is difficult to determine the motion of objects observed by the camera. In this thesis, we assume the omni-directional camera is mounted on a moving platform that undergoes planar motion. The floor region in the omni-directional image and the brightness constraint equation are applied to estimate the ego-motion. Depth information, which a single-camera system normally cannot recover, is acquired from the floor image. Using the estimated ego-motion, the optical flow caused by the floor motion can be computed; comparing its direction with the direction of the measured optical flow in the image then leads to detection of a moving object. Thanks to the depth information, a moving object can still be accurately identified even when the camera undergoes combined translational and rotational motion.
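The detection step described in this abstract can be illustrated with a small sketch: a global ego-motion flow is estimated from floor pixels via the brightness constraint equation, and pixels whose measured flow direction disagrees with the predicted ego-motion flow are flagged as belonging to a moving object. This is a minimal NumPy sketch under simplifying assumptions (pure-translation ego-motion, a least-squares solve, a fixed angular threshold), not the thesis's actual implementation.

```python
import numpy as np

def estimate_ego_translation(Ix, Iy, It, floor_mask):
    """Least-squares global flow (u, v) from the brightness constraint
    Ix*u + Iy*v + It = 0, using only pixels in the floor region."""
    A = np.stack([Ix[floor_mask], Iy[floor_mask]], axis=1)
    b = -It[floor_mask]
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

def moving_object_mask(flow_u, flow_v, ego_u, ego_v, angle_thresh_deg=30.0):
    """Flag pixels whose measured flow direction deviates from the flow
    predicted by the ego-motion by more than a threshold angle."""
    dot = flow_u * ego_u + flow_v * ego_v
    norms = np.hypot(flow_u, flow_v) * np.hypot(ego_u, ego_v) + 1e-12
    angles = np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0)))
    return angles > angle_thresh_deg
```

A pixel moving consistently with the estimated ego-motion is left unflagged; a patch moving in a different direction is marked as a candidate moving object.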
24

Stabilization of handheld firearms using image analysis / Stabilisering av handeldvapen med bildanalys

Lindstedt, Alexander January 2012 (has links)
When firing a handheld weapon, the shooter tries to aim at the point where he wants the bullet to hit. However, due to imperfections in the human body, this can be quite hard. The weapon moves relative to the target, and the shooter has to use precise timing to fire the shot exactly when the weapon points at the intended target position. This is especially difficult when shooting at long range using a magnifying rifle scope. In this thesis, a solution to this problem using image analysis is described and tested. Using a digital video camera and software, the system helps the shooter to fire at the appropriate time. The system is designed to operate in real-time conditions on a PC. The tests carried out have shown that the solution is promising and helps to achieve better accuracy. However, it needs to be optimized to run smoothly on a smaller-scale embedded system. / When a shooter fires a handheld weapon, he tries to aim at the point where he wants the bullet to hit. Since the human body is not completely stable, the weapon will move around this point, and the shooter must try to fire the shot at exactly the moment the weapon points at the right spot. This is particularly difficult at long range, since small angular deviations of the barrel produce increasingly large displacements as the distance to the target grows. This thesis describes and evaluates a system designed to minimize the effect of these involuntary movements. The system uses a video camera mounted in the sight and a computer running software that analyses and processes the video stream to determine when the weapon should be fired. The idea is that, in a finished system, the algorithm would be implemented in a portable embedded system mounted in the rifle scope together with the camera; the software could then trigger the shot electronically once the shooter has given approval by applying pressure to the trigger. The tests carried out show that the approach is promising: in every case the system achieved better results than when the shooters fired manually.
25

Visual Tracking for a Moving Object Using Optical Flow Technique

Ching, Ya-Hsin 25 June 2003 (has links)
When an object undergoes continuous motion, its projection on the image plane produces a succession of images, and the relative motion between the video camera and the object causes displacement of image pixels. This apparent pixel motion is called optical flow. The advantage of the optical flow approach is that no prior knowledge of the object or the environment is required, so the method is suitable for tracking problems in unknown environments. However, optical flow computed over the whole image is not always accurate enough for control purposes, because meaningful motion or features may occupy only part of the image. This thesis first uses digital image processing to subtract two consecutive images and extract the region where the motion actually occurs; optical flow is then calculated from the image information in this region. This not only raises the tracking speed but also reduces the effect of incorrect optical flow values. As a result, both tracking accuracy and speed can be greatly improved.
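The pipeline in this abstract, frame differencing to isolate the moving region followed by an optical flow estimate restricted to that region, can be sketched as follows. This is an illustrative NumPy sketch under simplifying assumptions (a single global flow vector in the style of Lucas-Kanade, a fixed difference threshold), not the thesis's implementation.

```python
import numpy as np

def motion_region(f0, f1, thresh=10.0):
    # Frame differencing: keep pixels whose brightness changed noticeably.
    return np.abs(f1.astype(float) - f0.astype(float)) > thresh

def flow_in_region(f0, f1, mask):
    """Single global flow vector (u, v) from the brightness constraint
    Ix*u + Iy*v = -It, solved by least squares over the masked region only."""
    f0 = f0.astype(float)
    f1 = f1.astype(float)
    Iy, Ix = np.gradient(f0)   # np.gradient returns (d/drow, d/dcol)
    It = f1 - f0
    A = np.stack([Ix[mask], Iy[mask]], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It[mask], rcond=None)
    return u, v
```

Restricting the least-squares system to the differenced region is what the abstract credits for both the speedup and the reduced influence of spurious flow values elsewhere in the frame.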
26

A Servo Tracking System for Translating Images

Ho, Chung-Hsing 26 June 2003 (has links)
The apparent brightness variation caused by the relative velocity between the camera and the environment in a sequence of images is called optical flow. The advantage of the optical-flow-based visual servo method is that features of the object do not need to be known in advance; therefore, it can be applied to positioning and tracking tasks. The purpose of this thesis is to implement an image servo technique with a sliding-mode control method to track an unknown image pattern undergoing three-dimensional motion. The goal of tracking is to keep the image captured by the camera unchanged, based on the relative movement calculated from the optical flow.
27

Optical Flow in the Hexagonal Image Framework

Tsai, Yi-lun 02 September 2009 (has links)
Optical flow has been one of the common approaches for image tracking. Its advantage is that no prior knowledge of image features is required: since movement information can be obtained from brightness data alone, the method is suitable for tracking tasks involving unknown objects. In the natural world, insects are masters at chasing and catching prey, thanks to their unique compound-eye structure. If the advantages of the compound eye can be applied to the tracking of moving objects, the tracking performance is expected to improve greatly. Conventional images are built on a Cartesian reference system, which is quite different from the hexagonal framework of the insect compound eye. This thesis explores the distinctive properties of the hexagonal image framework by incorporating the hexagonal concept into the optical flow technique, revealing why the compound eye is good at tracking moving objects. According to simulation results for test images with different features, the hexagonal optical flow method appears to be superior to the traditional optical flow method in the Cartesian reference system.
29

Optical flow templates for mobile robot environment understanding

Roberts, Richard Joseph William 08 June 2015 (has links)
In this work we develop optical flow templates, a practical tool for inferring robot egomotion and semantic superpixel labeling using optical flow in imaging systems with arbitrary optics. In doing so, we contribute to the robotics and computer vision communities an understanding of geometric relationships and mathematical methods that are useful for interpreting optical flow.

This work is motivated by what we perceive as directions for advancing the current state of the art in obstacle detection and scene understanding for mobile robots. Specifically, many existing methods build 3D point clouds, which are not directly useful for autonomous navigation and require further processing; both building the point clouds and the later processing steps are challenging and computationally intensive. Additionally, many current methods require a calibrated camera, which introduces calibration challenges and limits the types of camera optics that may be used: wide-angle lenses, systems with mirrors, and multiple cameras all require different calibration models and can be difficult or impossible to calibrate at all. Finally, current pixel and superpixel obstacle labeling algorithms typically rely on image appearance; while appearance is informative, image motion is a direct effect of the scene structure that determines whether a region of the environment is an obstacle.

The egomotion estimation and obstacle labeling methods we develop here, based on optical flow templates, require very little computation per frame, do not require building point clouds, do not require any specific type of camera optics or a calibrated camera, and label obstacles using optical flow alone, without image appearance. We start with optical flow subspaces for egomotion estimation and detection of "motion anomalies". We then extend this to multiple subspaces and develop mathematical reasoning to select between them, comprising optical flow templates. Using these we classify environment shapes and label superpixels. Finally, we show how performing all learning and inference directly from image spatio-temporal gradients greatly improves computation time and accuracy.
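The subspace idea mentioned in this abstract can be illustrated with a small sketch: learn a low-dimensional linear basis for flattened optical-flow fields via SVD, and flag a new flow field as a "motion anomaly" when its reconstruction residual in that subspace is large. This is a hedged illustration of the general idea only; the thesis's actual method uses multiple templates and works directly from spatio-temporal gradients.

```python
import numpy as np

def fit_flow_subspace(flows, k):
    """Learn a k-dimensional linear subspace of flattened optical-flow
    fields (one field per row of `flows`) via SVD of the centered data."""
    mean = flows.mean(axis=0)
    _, _, Vt = np.linalg.svd(flows - mean, full_matrices=False)
    return mean, Vt[:k]

def anomaly_residual(flow, mean, basis):
    """Reconstruction residual of one flow field in the learned subspace;
    a large residual signals a 'motion anomaly'."""
    centered = flow - mean
    recon = basis.T @ (basis @ centered)
    return np.linalg.norm(centered - recon)
```

A flow field consistent with the training motions reconstructs almost exactly; an independently moving object adds flow components outside the subspace and drives the residual up.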
30

Physics-driven variational methods for computer vision and shape-based imaging

Mueller, Martin F. 21 September 2015 (has links)
In this dissertation, novel variational optical-flow and active-contour methods are investigated to address challenging problems in computer vision and shape-based imaging. Starting from traditional applications of these methods in computer vision, such as object segmentation, tracking, and detection, this research subsequently applies similar active contour techniques to the realm of shape-based imaging, an image reconstruction technique that estimates object shapes directly from physical wave measurements. The first and second parts of this thesis deal with the following two physically inspired computer vision applications.

Optical Flow for Vision-Based Flame Detection: Fire motion is estimated using optimal mass transport optical flow, whose motion model is inspired by the physical law of mass conservation, a governing equation for fire dynamics. The estimated motion fields are used first to detect candidate regions characterized by high motion activity, which are then tracked over time using active contours. To classify candidate regions, a neural net is trained on a set of novel motion features extracted from the optical flow fields of the candidate regions.

Coupled Photo-Geometric Object Features: Active contour models for segmentation in thermal videos are presented which generalize the well-known Mumford-Shah functional. The diffusive nature of heat processes in thermal imagery motivates the use of Mumford-Shah-type smooth approximations for the image radiance. Mumford-Shah's isotropic smoothness constraint is generalized to anisotropic diffusion, where the image gradient is decomposed into components parallel and perpendicular to the level set curves describing the object's boundary contour. In a limiting case, this anisotropic Mumford-Shah segmentation energy yields a one-dimensional "photo-geometric" representation of an object that is invariant to translation, rotation, and scale. These properties allow the photo-geometric object representation to be used efficiently as a radiance feature; a recognition-segmentation active contour energy, whose shape and radiance follow a training model obtained by principal component analysis of a training set's shape and radiance features, is finally applied to tracking problems in thermal imagery.

The third part of this thesis investigates a physics-driven active contour approach for shape-based imaging. Adjoint Active Contours for Shape-Based Imaging: The goal of this research is to estimate both the location and the shape of buried objects from surface measurements of waves scattered from the objects. The objects' shapes are described by active contours: a misfit energy quantifying the discrepancy between measured and simulated wave amplitudes is minimized with respect to object shape using the adjoint state method. The minimizing active contour evolution requires numerical forward-scattering solutions, which are obtained with the method of fundamental solutions, a meshfree collocation method. In combination with active contours implemented as level sets, one obtains a completely meshfree algorithm, a considerable advantage over previous work in this field. With future applications in medical and geophysical imaging in mind, the method is formulated for acoustic and elastodynamic wave processes in the frequency domain.
