121

Generalized Area Tracking Using Complex Discrete Wavelet Transform: The Complex Wavelet Tracker

Yilmaz, Sener 01 July 2007 (has links) (PDF)
In this work, a new method is proposed for area tracking. The method is based on the Complex Discrete Wavelet Transform (CDWT) developed by Magarey and Kingsbury. The CDWT has advantages over the traditional Discrete Wavelet Transform, such as approximate shift invariance, improved directional selectivity, and robustness to noise and illumination changes. The proposed method generalizes the CDWT-based motion estimation method of Magarey and Kingsbury: the Complex Wavelet Tracker extends the original method to estimate the true motion of regions according to a parametric motion model. In this way, rotation, scaling, and shear motions can be handled in addition to pure translation. Both quantitative and qualitative simulations have been performed on the proposed method. Quantitative tests are run on synthetically created test sequences, with results compared to ground truth and to intensity-based methods. Qualitative tests are performed on real sequences and evaluated empirically, again against intensity-based methods. The proposed method is observed to be very accurate in handling affine deformations over long-term sequences and robust to different target signatures and illumination changes. Its accuracy is comparable to that of intensity-based methods, while it handles a wider range of cases and is more robust to illumination changes. The method can be implemented in real time and could be a powerful replacement for current area trackers.
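
The CDWT machinery itself is beyond a short example, but the final step described above — fitting a parametric (affine) motion model to the motion of a region — can be sketched. The following is a minimal illustration, assuming per-pixel displacement estimates for the region are already available from some flow estimator; it is not the thesis's CDWT-domain formulation:

```python
import numpy as np

def fit_affine_motion(xs, ys, us, vs):
    """Least-squares fit of a 6-parameter affine motion model
        u = a1*x + a2*y + a3,   v = a4*x + a5*y + a6
    to per-pixel displacements (us, vs) observed at pixel
    coordinates (xs, ys) inside the tracked region."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])  # (N, 3) design matrix
    pu, *_ = np.linalg.lstsq(A, us, rcond=None)      # (a1, a2, a3)
    pv, *_ = np.linalg.lstsq(A, vs, rcond=None)      # (a4, a5, a6)
    return pu, pv
```

Such a model captures rotation, scaling, and shear through the linear terms (a1, a2, a4, a5), with (a3, a6) carrying the pure translation.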
122

Vision-assisted Object Tracking

Ozertem, Kemal Arda 01 February 2012 (has links) (PDF)
In this thesis, a video tracking method is proposed that is based on both computer vision and estimation theory. For this purpose, the overall study is partitioned into four related subproblems. The first part is moving object detection, for which two different background modeling methods are developed. The second part is feature extraction and estimation of the optical flow between video frames. As the feature extraction method, a well-known corner detector is employed, applied only to the moving regions of the scene. For the feature points, optical flow vectors are calculated using an improved version of the Kanade-Lucas tracker. The resulting optical flow field between consecutive frames is used directly in the proposed tracking method. In the third part, a particle filter structure is built to carry out the tracking process; the particle filter is improved by adding the optical flow data to the state equation as a correction term. In the last part of the study, the performance of the proposed approach is compared against standard implementations of particle-filter-based trackers. Based on the simulation results in this study, it can be argued that inserting vision-based optical flow estimation into the tracking formulation improves the overall performance.
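
The correction-term idea in the third part can be sketched as follows: during propagation, each particle is displaced by the optical flow sampled at its location, in addition to the usual process noise. This is a hedged illustration, not the thesis's exact formulation; the blending weight `alpha` and the noise level are assumed free parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_particles(particles, flow, alpha=0.5, noise_std=3.0):
    """Propagate 2-D particle positions one frame ahead.
    'particles' is an (N, 2) array of (x, y) positions; 'flow' is a
    dense (H, W, 2) optical flow field. The flow sampled at each
    particle enters the state equation as a correction term."""
    h, w = flow.shape[:2]
    xs = np.clip(particles[:, 0].astype(int), 0, w - 1)
    ys = np.clip(particles[:, 1].astype(int), 0, h - 1)
    correction = alpha * flow[ys, xs]               # (N, 2) flow correction
    noise = rng.normal(0.0, noise_std, particles.shape)
    return particles + correction + noise
```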
123

Change detection models for mobile cameras

Kit, Dmitry Mark 05 July 2012 (has links)
Change detection is an ability that allows intelligent agents to react to unexpected situations. This mechanism is fundamental in providing more autonomy to robots. It has been used in many different fields, including quality control and network intrusion detection. In the visual domain, however, most research has been confined to stationary cameras, and only recently have researchers started to shift to mobile cameras. We propose a general framework for building internal spatial models of visual experiences. These models are used to retrieve expectations about visual inputs, which can be compared to the actual observation in order to identify the presence of changes. Our framework leverages the tolerance of optic flow and color histogram representations to small view changes, and a self-organizing map to build a compact memory of camera observations. The effectiveness of the approach is demonstrated in a walking simulation, where spatial information and color histograms are combined to detect changes in a room. The location signal allows the algorithm to query the self-organizing map for the expected color histogram and compare it to the current input. Any deviations can be considered changes and are then localized on the input image. Furthermore, we show how detecting a vehicle entering or leaving the camera's lane can be reduced to a change detection problem. This simplifies the problem by removing the need to track, or even know about, other vehicles. Matching Pursuit is used to learn a compact dictionary describing the observed experiences; changes are detected when the learned dictionary is unable to reconstruct the current input. The human experiments presented in this dissertation support the idea that humans build statistical models that evolve with experience. We provide evidence that this experience not only improves people's behavior in 3D environments but also enables them to detect chromatic changes. Mobile cameras are now part of our everyday lives, ranging from built-in laptop cameras to cell phone cameras. The vision of this research is to equip these devices with change detection mechanisms to solve a large class of problems. Beyond presenting a foundation that effectively detects changes in environments, we also show that the algorithms employed are computationally inexpensive. The practicality of the approach is demonstrated by a partial implementation on commodity hardware such as Android mobile devices.
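
As a rough sketch of the query-and-compare step described above: assuming a self-organizing map has already been trained on concatenated (location, color histogram) vectors, change detection reduces to finding the best-matching unit by location and comparing histograms. The prototype-vector split, the distance metric, and the threshold below are illustrative assumptions, not the dissertation's exact design:

```python
import numpy as np

def detect_change(som_weights, location, observed_hist, threshold=0.3):
    """Query a trained SOM for the expected color histogram at a
    location and flag a change if the observation deviates.
    som_weights: (n_nodes, loc_dim + hist_dim) prototype vectors."""
    loc_dim = location.shape[0]
    # Best-matching unit, selected by the location part only.
    d = np.linalg.norm(som_weights[:, :loc_dim] - location, axis=1)
    expected_hist = som_weights[np.argmin(d), loc_dim:]
    # L1 histogram distance; many other metrics would serve.
    deviation = np.abs(expected_hist - observed_hist).sum()
    return deviation > threshold, deviation
```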
124

Optical flow estimation with subgrid model for study of turbulent flow

Cassisa, Cyril 07 April 2011 (has links) (PDF)
The objective of this thesis is to study the evolution of a scalar field carried by a flow from a temporal image sequence. Estimating the velocity field of a turbulent flow is of major importance for understanding the physical phenomenon. Up to now, the problem of turbulence has generally been ignored in the flow equation of existing methods. The information given by an image is discrete at pixel size: depending on how turbulent the flow is, the pixel and time resolutions may become too coarse to neglect the effect of sub-pixel small scales on the pixel-level velocity field. To account for this effect, we propose a flow equation defined by a filtered concentration transport equation into which a classic turbulent subgrid eddy-viscosity model is introduced. The problem is formulated with a Markovian approach. An unwarping multiresolution scheme based on pyramidal decomposition is proposed, which reduces the number of operations on the images. The optimization, coupled with a multigrid approach, allows the optimal 2D real velocity field to be estimated. Our approach is tested on synthetic and real image sequences (a PIV laboratory experiment and remote sensing data of a dust storm event) with high Reynolds numbers. Comparisons with existing approaches are very promising.
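
For concreteness, a plausible form of such a filtered transport equation with a subgrid closure is written below; bars denote filtered (pixel-scale) quantities, kappa is the molecular diffusivity, and nu_T the subgrid eddy viscosity supplied by the closure. The exact closure used in the thesis may differ:

```latex
% Filtered concentration transport as an optical-flow constraint
\frac{\partial \bar{C}}{\partial t}
  + \bar{\mathbf{u}} \cdot \nabla \bar{C}
  = \nabla \cdot \left[ \left( \kappa + \nu_T \right) \nabla \bar{C} \right]
```

Replacing a plain brightness-constancy constraint with an equation of this form lets the estimator attribute part of the observed intensity change to unresolved sub-pixel mixing rather than forcing it into the resolved velocity field.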
125

Face Tracking Using Optical Flow: Real-Time Optical Flow Enhanced AdaBoost Cascade Face Tracker

Ranftl, Andreas January 2014 (has links)
This master thesis deals with real-time algorithms and techniques for face detection and face tracking in videos. A new approach is presented where optical flow information is incorporated into the Viola-Jones face detection algorithm, allowing the algorithm to update the expected position of detected faces in the next frame. This continuity between video frames is not exploited by the original algorithm from Viola and Jones, in which face detection is static as information from previous frames is not considered. In contrast to the Viola-Jones face detector and also to the Kanade-Lucas-Tomasi tracker, the proposed face tracker preserves information about near-positives. In general terms, the developed algorithm builds a likelihood map from the results of the Viola-Jones algorithm, then computes the optical flow between two consecutive frames, and finally interpolates the likelihood map in the next frame by the computed flow map. Faces are extracted from the likelihood map using image segmentation techniques. Compared to the Viola-Jones algorithm, an increase in stability as well as an improvement of the detection rate is achieved. Firstly, the real-time face detection algorithm from Viola and Jones is discussed. Secondly, the author presents methods which are suitable for tracking faces. The theoretical overview leads to the description of the proposed face tracking algorithm; both the principle and the implementation are discussed in detail. The software is written in C++ using the Open Computer Vision Library as well as the Matlab MEX interface. The resulting face tracker was tested on the Boston Head Tracking Database, for which ground truth information is available. The proposed face tracking algorithm outperforms the Viola-Jones face detector in terms of average detection rate and temporal consistency.
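
A minimal sketch of the likelihood-map propagation loop, using OpenCV's stock Haar cascade and Farneback flow as stand-ins for the components described above; the decay constant, reinforcement scheme, and final segmentation step are simplified assumptions, not the thesis implementation:

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def propagate_likelihood(prev_gray, gray, likelihood, decay=0.8):
    """Warp the previous likelihood map along dense optical flow,
    then reinforce it with fresh Viola-Jones detections.
    'likelihood' is a float32 map the size of the frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp: sample the old map where each pixel came from.
    map_x = (gx - flow[..., 0]).astype(np.float32)
    map_y = (gy - flow[..., 1]).astype(np.float32)
    out = decay * cv2.remap(likelihood, map_x, map_y, cv2.INTER_LINEAR)
    for (x, y, fw, fh) in cascade.detectMultiScale(gray, 1.1, 3):
        out[y:y + fh, x:x + fw] += 1.0   # reinforce detected regions
    return cv2.normalize(out, None, 0.0, 1.0, cv2.NORM_MINMAX)
```

Faces would then be segmented from the high-likelihood regions of the returned map.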
126

Moving Object Detection in 2D and 3D Scenes

Sirtkaya, Salim 01 September 2004 (has links) (PDF)
This thesis describes the theoretical bases, development, and testing of an integrated moving object detection framework for 2D and 3D scenes. The detection problem is analyzed for stationary and non-stationary camera sequences, and different algorithms are developed for each case. Two methods are proposed for stationary camera sequences: background extraction followed by differencing and thresholding, and motion detection using the optical flow field calculated by the Kanade-Lucas feature tracker. For non-stationary camera sequences, different algorithms are developed based on the scene structure and camera motion characteristics. In planar scenes, where the scene is flat or distant from the camera and/or the camera makes rotations only, a method is proposed that uses 2D parametric registration based on the affine parameters of the dominant plane for independently moving object detection. A modified version of the 2D parametric registration approach is used when the scene is not planar but consists of a small number of planes at different depths and the camera makes translational motion; optical flow field segmentation and sequential registration are the key points for this case. For 3D scenes, where the depth variation within the scene is high, a parallax-rigidity-based approach is developed for moving object detection. All these algorithms are integrated to form a unified independently moving object detector that works in stationary and non-stationary camera sequences with different scene and camera motion structures. Optical flow field estimation and segmentation are used for this purpose.
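
The 2D parametric registration idea for the planar case can be sketched with standard OpenCV building blocks: estimate a global affine transform for the dominant plane from tracked features (RANSAC suppresses independently moving outliers), register the previous frame, and threshold the residual difference. The constants and the specific estimators are illustrative, not the thesis implementation:

```python
import cv2
import numpy as np

def detect_independent_motion(prev_gray, gray, thresh=25):
    """Register the dominant plane between two frames with a global
    affine model and flag pixels that disagree as independently moving."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                 qualityLevel=0.01, minDistance=8)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    ok = st.ravel() == 1
    # RANSAC fits the dominant (background) plane's affine motion.
    A, _ = cv2.estimateAffine2D(p0[ok], p1[ok], method=cv2.RANSAC)
    h, w = gray.shape
    registered = cv2.warpAffine(prev_gray, A, (w, h))
    diff = cv2.absdiff(gray, registered)            # residual motion
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask                                     # independently moving pixels
```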
127

Improved detection and tracking of objects in surveillance video

Denman, Simon Paul January 2009 (has links)
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognise an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately, except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point each frame. Such systems, however, do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and maintenance of identity is handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance in uncertain conditions arising from occlusion and noise provided by a particle filter. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
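
As an illustration of what "middle fusion" means here: features are computed per modality and combined before the matching/tracking stage, in contrast to fusing raw pixels (early fusion) or fusing per-modality track decisions (late fusion). The histogram descriptor below is a generic placeholder, not the feature set used in the dissertation:

```python
import numpy as np

def middle_fusion_descriptor(color_patch, thermal_patch, bins=16):
    """Feature-level (middle) fusion: build one descriptor per
    modality, then concatenate them for the matching stage.
    Both patches are assumed to be 8-bit intensity arrays."""
    h_color, _ = np.histogram(color_patch, bins=bins,
                              range=(0, 255), density=True)
    h_thermal, _ = np.histogram(thermal_patch, bins=bins,
                                range=(0, 255), density=True)
    return np.concatenate([h_color, h_thermal])
```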
128

Analysis of Optical Flow for Indoor Mobile Robot Obstacle Avoidance.

Tobias Low Unknown Date (has links)
This thesis investigates the use of visual-motion information sampled through optical flow for the task of indoor obstacle avoidance on autonomous mobile robots. The methods focus on the practical use of optical flow and visual-motion information in performing the obstacle avoidance task in real indoor environments. The methods serve to identify visual-motion properties that must be used in synergy with visual-spatial properties toward the goal of a complete, robust, visual-only obstacle avoidance system, as is evidently seen within nature. A review of vision-based obstacle avoidance techniques shows that early research mainly focused on visual-spatial techniques, which heavily rely on various assumptions about their environments to function successfully. On the other hand, more current research that looks toward the use of visual-motion information (sampled through optical flow) tends to use optical flow in a subsidiary manner, and does not completely take advantage of the information encoded within an optical flow field. In light of these limitations, this thesis describes two different approaches and evaluates their use of optical flow to perform the obstacle avoidance task. The first approach begins with the construction of a conventional range map using optical flow, stemming from the structure-from-motion domain and the theory that optical flow encodes 3D environmental information under certain conditions. The second approach investigates optical flow in a causal, mechanistic manner, using machine learning of motor responses directly from optical flow, motivated by physical and behavioural evidence observed in biological creatures. Specifically, the second approach is designed with three main objectives in mind: 1) to investigate whether optical flow can be learnt for obstacle avoidance; 2) to create a system capable of repeatable obstacle avoidance performance in real-life environments; and 3) to analyse the system to determine what optical flow properties are actually being used for the motor control task. The range-map reconstruction results demonstrated some good distance estimations through the use of a feature-based optical flow algorithm; however, the flow points were too sparse to provide adequate obstacle detection. Results from a differential-based optical flow algorithm helped to increase the density of flow points, but highlighted the high sensitivity of the optical flow field to the rotational errors and outliers that plague the majority of frames in real-life robot situations. Final results demonstrated that current optical flow algorithms are ill-suited to estimating obstacle distances consistently, as range-estimation techniques require an extremely accurate optical flow field with adequate density and coverage for success; this is a difficult problem within the optical flow estimation domain itself. In the machine learning approach, an initial study to examine whether optical flow can be machine-learnt for obstacle avoidance and control in a simple environment was successful. However, there were certain problems: several critical issues which arise with the use of a machine learning approach were highlighted, including sample set completeness, sample set biases, and control system instability. Consequently, an extended neural network was proposed, with several improvements made to overcome the initial problems.
Designing an automated system for gathering training data helped to eliminate most of the sample set problems. Key changes in the neural network architecture, optical flow filters, and navigation technique vastly improved the control system stability. As a result, the extended neural network system was able to successfully perform multiple obstacle avoidance loops in both familiar and unfamiliar real-life environments without collisions. The lap times of the machine learning approach were comparable to those of the laser-based navigation technique: it was 13% slower in the familiar and 25% slower in the unfamiliar environment. Furthermore, analysis of the neural network revealed that flow magnitudes were learnt as absolute range information, while flow directions were used to detect the focus of expansion (FOE) in order to predict critical collision situations and improve control stability. In addition, the precision of the flow fields was highlighted as an important requirement, as opposed to the high accuracy of individual flow vectors. For robot control purposes, image-processing techniques such as region finding and object boundary detection were employed to detect changes between optical flow vectors in the image space.
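
For flavour, a forward pass of the kind of network described, mapping a coarse optical-flow field directly to a motor command, might look as follows. The split into magnitude and direction channels echoes the analysis above (magnitudes for range, directions for the focus of expansion), but the architecture, shapes, and any training procedure are assumptions, not the thesis design:

```python
import numpy as np

def steering_from_flow(flow, W1, b1, W2, b2):
    """One forward pass of a small MLP mapping a coarse (h, w, 2)
    optical-flow field to a scalar steering command in [-1, 1].
    W1/b1 and W2/b2 are pre-trained hidden- and output-layer weights."""
    mag = np.hypot(flow[..., 0], flow[..., 1]).ravel()   # range-like cue
    ang = np.arctan2(flow[..., 1], flow[..., 0]).ravel() # FOE-like cue
    x = np.concatenate([mag, np.cos(ang), np.sin(ang)])  # input vector
    h = np.tanh(W1 @ x + b1)                             # hidden layer
    return float(np.tanh(W2 @ h + b2))                   # steering output
```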
130

A comparison of image processing algorithms for edge detection, corner detection and thinning

Parekh, Siddharth Avinash January 2004 (has links)
Image processing plays a key role in vision systems. Its function is to extract and enhance pertinent information from raw data. In robotics, processing of real-time data is constrained by limited resources. Thus, it is important to understand and analyse image processing algorithms for accuracy, speed, and quality. The theme of this thesis is an implementation and comparative study of algorithms for various image processing techniques such as edge detection, corner detection, and thinning. A re-interpretation of a standard technique, non-maxima suppression for corner detectors, was attempted. In addition, a thinning filter, Hall-Guo, was modified to achieve better results. Generally, real-time data is corrupted with noise; this thesis therefore also incorporates a few smoothing filters that help in noise reduction. Apart from comparing and analysing algorithms for these techniques, an attempt was made to implement correlation-based optic flow.
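
The standard non-maxima suppression step that the thesis re-examines for corner detectors can be sketched in a few lines. This is the textbook window-maximum form, not the re-interpretation developed in the thesis; the window size and relative threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def non_maxima_suppression(response, win=5, thresh=0.01):
    """Keep only corner-response pixels that are both the maximum
    within a local win x win window and above a fraction of the
    global peak response; returns (row, col) corner locations."""
    local_max = maximum_filter(response, size=win)
    corners = (response == local_max) & \
              (response > thresh * response.max())
    return np.argwhere(corners)
```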
