About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Change detection models for mobile cameras

Kit, Dmitry Mark 05 July 2012 (has links)
Change detection is an ability that allows intelligent agents to react to unexpected situations. This mechanism is fundamental in providing more autonomy to robots, and it has been used in many fields, including quality control and network intrusion detection. In the visual domain, however, most research has been confined to stationary cameras, and only recently have researchers started to shift to mobile cameras.

We propose a general framework for building internal spatial models of visual experiences. These models are used to retrieve expectations about visual inputs, which can be compared to the actual observation to identify the presence of changes. Our framework leverages the tolerance of optic flow and color histogram representations to small view changes, together with a self-organizing map that builds a compact memory of camera observations. The effectiveness of the approach is demonstrated in a walking simulation, where spatial information and color histograms are combined to detect changes in a room. The location signal allows the algorithm to query the self-organizing map for the expected color histogram and compare it to the current input; any deviations can be considered changes and are then localized on the input image. Furthermore, we show how detecting a vehicle entering or leaving the camera's lane can be reduced to a change detection problem. This simplifies the problem by removing the need to track, or even know about, other vehicles. Matching Pursuit is used to learn a compact dictionary describing the observed experiences; changes are detected when the learned dictionary is unable to reconstruct the current input.

The human experiments presented in this dissertation support the idea that humans build statistical models that evolve with experience. We provide evidence that this experience not only improves people's behavior in 3D environments but also enables them to detect chromatic changes. Mobile cameras are now part of our everyday lives, ranging from built-in laptop cameras to cell phone cameras. The vision of this research is to equip these devices with change detection mechanisms to solve a large class of problems. Beyond presenting a foundation that effectively detects changes in environments, we show that the algorithms employed are computationally inexpensive; the practicality of the approach is demonstrated by a partial implementation on commodity hardware such as Android mobile devices.
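A minimal sketch, in Python with OpenCV, of the expectation-versus-observation comparison the abstract describes. A location-indexed grid memory stands in for the dissertation's self-organizing map, and the class name, histogram bins, learning rate and distance measure are illustrative assumptions, not the author's implementation:

```python
import cv2
import numpy as np

class LocationHistogramMemory:
    """Grid-indexed memory of expected colour histograms, queried by
    location and compared against the current observation."""

    def __init__(self, grid_shape=(10, 10), bins=32, lr=0.1):
        self.bins = bins
        self.lr = lr  # rate at which stored expectations track new input
        self.memory = np.zeros(grid_shape + (bins,), dtype=np.float32)
        self.seen = np.zeros(grid_shape, dtype=bool)

    def _histogram(self, frame_bgr):
        # Hue histogram: a compact descriptor tolerant to small view changes.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [self.bins], [0, 180]).ravel()
        return (hist / (hist.sum() + 1e-9)).astype(np.float32)

    def update(self, cell, frame_bgr):
        h = self._histogram(frame_bgr)
        if not self.seen[cell]:
            self.memory[cell], self.seen[cell] = h, True
        else:  # exponential moving average toward the new observation
            self.memory[cell] = (1 - self.lr) * self.memory[cell] + self.lr * h

    def change_score(self, cell, frame_bgr):
        """Bhattacharyya distance between expectation and observation;
        a large value suggests the scene changed at this location."""
        if not self.seen[cell]:
            return 0.0
        return cv2.compareHist(self.memory[cell], self._histogram(frame_bgr),
                               cv2.HISTCMP_BHATTACHARYYA)
```

In use, a location estimate selects the grid cell, `change_score` flags deviations, and `update` keeps the memory current when no change is declared.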
122

Optical flow estimation with subgrid model for study of turbulent flow

Cassisa, Cyril 07 April 2011 (has links) (PDF)
The objective of this thesis is to study the evolution of a scalar field carried by a flow from a temporal image sequence. Estimating the velocity field of a turbulent flow is of major importance for understanding the physical phenomenon. Up to now, the problem of turbulence has generally been ignored in the flow equations of existing methods. The information given by an image is discrete at pixel size; depending on the turbulence level of the flow, the pixel and time resolutions may become too coarse to neglect the effect of sub-pixel small scales on the pixel-scale velocity field. We therefore propose a flow equation defined by a filtered concentration transport equation into which a classic turbulent sub-grid eddy-viscosity model is introduced to account for this effect. To formulate the problem, we use a Markovian approach. An unwarping multiresolution scheme based on pyramidal decomposition is proposed, which reduces the number of operations on images. The optimization, coupled with a multigrid approach, allows estimation of the optimal 2D real velocity field. Our approach is tested on synthetic and real image sequences (a PIV laboratory experiment and remote-sensing data of a dust storm event) with high Reynolds numbers. Comparisons with existing approaches are very promising.
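A hedged reconstruction of the kind of filtered transport equation the abstract refers to, written in the standard LES form with a sub-grid eddy viscosity ν_t added to the molecular diffusivity D; the thesis's exact formulation may differ:

```latex
% Filtered concentration transport with a sub-grid eddy-viscosity closure:
\frac{\partial \bar{C}}{\partial t}
  + \bar{\mathbf{u}} \cdot \nabla \bar{C}
  = \nabla \cdot \big[ (D + \nu_t)\, \nabla \bar{C} \big]
```

Here C-bar is the filtered (pixel-scale) concentration and u-bar the velocity field to be estimated; the estimator's data term penalizes deviations from this equation rather than from plain brightness constancy.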
123

Face Tracking Using Optical Flow : Real-Time Optical Flow Enhanced AdaBoost Cascade Face Tracker

Ranftl, Andreas January 2014 (has links)
This master thesis deals with real-time algorithms and techniques for face detection and face tracking in videos. A new approach is presented where optical flow information is incorporated into the Viola-Jones face detection algorithm, allowing the algorithm to update the expected position of detected faces in the next frame. This continuity between video frames is not exploited by the original algorithm from Viola and Jones, in which face detection is static, as information from previous frames is not considered. In contrast to the Viola-Jones face detector and also to the Kanade-Lucas-Tomasi tracker, the proposed face tracker preserves information about near-positives.

In general terms, the developed algorithm builds a likelihood map from the results of the Viola-Jones algorithm, then computes the optical flow between two consecutive frames, and finally interpolates the likelihood map in the next frame using the computed flow map. Faces are extracted from the likelihood map using image segmentation techniques. Compared to the Viola-Jones algorithm, an increase in stability as well as an improvement of the detection rate is achieved.

Firstly, the real-time face detection algorithm from Viola and Jones is discussed. Secondly, the author presents methods which are suitable for tracking faces. The theoretical overview leads to the description of the proposed face tracking algorithm; both principle and implementation are discussed in detail. The software is written in C++ using the Open Computer Vision Library as well as the Matlab MEX interface. The resulting face tracker was tested on the Boston Head Tracking Database, for which ground-truth information is available. The proposed face tracking algorithm outperforms the Viola-Jones face detector in terms of average detection rate and temporal consistency.
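A compact sketch of the pipeline in Python with OpenCV: cascade detections accumulate into a likelihood map, and dense Farnebäck flow carries that map into the next frame. The map construction, flow algorithm and parameter values here are illustrative stand-ins for the thesis's implementation:

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detection_likelihood(gray):
    """Accumulate Viola-Jones detections into a likelihood map.
    Summing raw detections keeps evidence from near-positives."""
    like = np.zeros(gray.shape, np.float32)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 3):
        like[y:y + h, x:x + w] += 1.0
    return like

def advect_likelihood(like, prev_gray, gray):
    """Warp the likelihood map along the dense optical flow field."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-mapping approximation: each pixel pulls its likelihood
    # from the position the flow says it moved from.
    map_x = (gx - flow[..., 0]).astype(np.float32)
    map_y = (gy - flow[..., 1]).astype(np.float32)
    return cv2.remap(like, map_x, map_y, cv2.INTER_LINEAR)
```

Per frame, the new likelihood map would be a blend of fresh detections and the advected previous map, with faces segmented out by thresholding.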
124

Moving Object Detection in 2D and 3D Scenes

Sirtkaya, Salim 01 September 2004 (has links) (PDF)
This thesis describes the theoretical basis, development and testing of an integrated moving object detection framework for 2D and 3D scenes. The detection problem is analyzed for stationary and non-stationary camera sequences, and different algorithms are developed for each case. Two methods are proposed for stationary camera sequences: background extraction followed by differencing and thresholding, and motion detection using the optical flow field calculated by the Kanade-Lucas feature tracker. For non-stationary camera sequences, different algorithms are developed based on the scene structure and camera motion characteristics. In planar scenes, where the scene is flat or distant from the camera and/or the camera makes rotations only, a method is proposed that uses 2D parametric registration based on the affine parameters of the dominant plane for independently moving object detection. A modified version of the 2D parametric registration approach is used when the scene is not planar but consists of a small number of planes at different depths and the camera makes translational motion; optical flow field segmentation and sequential registration are the key points for this case. For 3D scenes, where the depth variation within the scene is high, a parallax-rigidity-based approach is developed for moving object detection. All these algorithms are integrated to form a unified independently-moving-object detector that works in stationary and non-stationary camera sequences and with different scene and camera motion structures; optical flow field estimation and segmentation are used for this purpose.
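A minimal sketch of the first stationary-camera method (background extraction, differencing and thresholding) in Python with OpenCV; the update rate and threshold are illustrative, not the thesis's tuned values:

```python
import cv2
import numpy as np

def moving_object_masks(frames, alpha=0.02, thresh=25):
    """Running-average background model; yields a binary motion mask
    per frame, nonzero where the frame differs from the background."""
    background = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if background is None:
            background = gray.copy()
            continue
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff.astype(np.uint8), thresh, 255,
                                cv2.THRESH_BINARY)
        # Update the background slowly so gradual lighting changes are absorbed.
        background = (1 - alpha) * background + alpha * gray
        yield mask
```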
125

Improved detection and tracking of objects in surveillance video

Denman, Simon Paul January 2009 (has links)
Surveillance networks are typically monitored by a few people viewing several monitors that display the camera feeds, making it very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies: to recognize an event, people and objects must be tracked, and tracking also enhances the performance of tasks such as crowd analysis or human identification.

Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or because the detection routines are unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, making it unnecessary for detection to be carried out separately except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects.

This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system.

A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, which benefits from the improved performance a particle filter provides in uncertain conditions arising from occlusion and noise. The system is evaluated using the ETISEO database.

The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and a visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results, demonstrating a significant improvement in performance when compared to a system using either mode individually.

Findings from the thesis contribute to improving the performance of semi-automated video processing and therefore improve security in areas under surveillance.
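A generic condensation-style particle filter for a single tracked mode, sketched in Python to illustrate the role the SCF plays inside the tracking system. The likelihood function (for example a colour-histogram score) would be supplied by the surrounding tracker, and the particle count and motion noise are illustrative:

```python
import numpy as np

class CondensationFilter:
    def __init__(self, init_xy, n=200, motion_std=5.0):
        self.particles = np.tile(np.asarray(init_xy, float), (n, 1))
        self.motion_std = motion_std

    def step(self, likelihood):
        """One predict-update-resample cycle; returns the state estimate."""
        # Predict: diffuse particles with Gaussian motion noise.
        self.particles += np.random.normal(0.0, self.motion_std,
                                           self.particles.shape)
        # Update: weight each particle by the observation likelihood.
        weights = np.array([likelihood(p) for p in self.particles]) + 1e-12
        weights /= weights.sum()
        # Resample: draw particles in proportion to their weights.
        idx = np.random.choice(len(self.particles), len(self.particles),
                               p=weights)
        self.particles = self.particles[idx]
        return self.particles.mean(axis=0)
```

Mode creation, deletion and identity maintenance stay with the underlying tracker, as the abstract describes; the filter only refines each mode's position under occlusion and noise.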
126

Analysis of Optical Flow for Indoor Mobile Robot Obstacle Avoidance.

Tobias Low Unknown Date (has links)
This thesis investigates the use of visual-motion information, sampled through optical flow, for indoor obstacle avoidance on autonomous mobile robots. The methods focus on the practical use of optical flow and visual-motion information in performing the obstacle avoidance task in real indoor environments, and serve to identify visual-motion properties that must be used in synergy with visual-spatial properties toward the goal of a complete, robust, visual-only obstacle avoidance system, as is evidently seen in nature. A review of vision-based obstacle avoidance techniques shows that early research mainly focused on visual-spatial techniques, which rely heavily on various assumptions about their environments to function successfully. More recent research that looks toward visual-motion information (sampled through optical flow) tends to use optical flow in a subsidiary manner and does not take full advantage of the information encoded within an optical flow field. In light of these limitations, this thesis describes and evaluates two different approaches to using optical flow for obstacle avoidance. The first approach constructs a conventional range map from optical flow, drawing on the structure-from-motion domain and the theory that optical flow encodes 3D environmental information under certain conditions. The second approach investigates optical flow in a causal, mechanistic manner, using machine learning of motor responses directly from optical flow, motivated by physical and behavioural evidence observed in biological creatures. Specifically, the second approach is designed with three main objectives in mind: 1) to investigate whether optical flow can be learnt for obstacle avoidance; 2) to create a system capable of repeatable obstacle avoidance performance in real-life environments; and 3) to analyse the system to determine which optical flow properties are actually being used for the motor control task.

The range-map reconstruction results demonstrated some good distance estimates using a feature-based optical flow algorithm, but the flow points were too sparse to provide adequate obstacle detection. Results from a differential-based optical flow algorithm helped to increase the density of flow points, but highlighted the high sensitivity of the optical flow field to the rotational errors and outliers that plague the majority of frames in real-life robot situations. The final results demonstrated that current optical flow algorithms are ill-suited to estimating obstacle distances consistently, as range-estimation techniques require an extremely accurate optical flow field with adequate density and coverage; this is a difficult problem within the optical flow estimation domain itself.

In the machine learning approach, an initial study examining whether optical flow can be machine-learnt for obstacle avoidance and control in a simple environment was successful, though with certain problems. Several critical issues arising from the use of machine learning were highlighted, including sample-set completeness, sample-set biases, and control system instability. Consequently, an extended neural network was proposed with several improvements to overcome these initial problems. Designing an automated system for gathering training data helped to eliminate most of the sample-set problems, and key changes in the neural network architecture, optical flow filters, and navigation technique vastly improved the control system stability. As a result, the extended neural network system was able to perform multiple obstacle avoidance loops in both familiar and unfamiliar real-life environments without collisions. The lap times of the machine learning approach were comparable to those of the laser-based navigation technique: 13% slower in the familiar environment and 25% slower in the unfamiliar one. Furthermore, analysis of the neural network revealed that flow magnitudes were learnt as absolute range information, while flow directions were used to detect the focus of expansion (FOE) in order to predict critical collision situations and improve control stability. In addition, the precision of the flow fields was highlighted as an important requirement, as opposed to the high accuracy of individual flow vectors. For robot control purposes, image-processing techniques such as region finding and object boundary detection were employed to detect changes between optical flow vectors in the image space.
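An illustrative sketch, in Python with scikit-learn, of the learning approach's core idea: pooled flow magnitudes and directions mapped directly to a motor command. The thesis's actual network architecture, flow filters and training pipeline differ; the feature grid and regressor here are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def flow_features(flow, grid=(4, 4)):
    """Mean flow magnitude and direction per cell of a coarse grid."""
    h, w = flow.shape[:2]
    mag = np.hypot(flow[..., 0], flow[..., 1])
    ang = np.arctan2(flow[..., 1], flow[..., 0])
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * h // grid[0], (i + 1) * h // grid[0])
            xs = slice(j * w // grid[1], (j + 1) * w // grid[1])
            feats += [mag[ys, xs].mean(), ang[ys, xs].mean()]
    return np.asarray(feats)

# Training pairs (flow fields, recorded steering commands) would come from an
# automated data-gathering run like the one the thesis describes; X and y are
# placeholders here.
# model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
# steering = model.predict([flow_features(current_flow)])[0]
```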
128

A comparison of image processing algorithms for edge detection, corner detection and thinning

Parekh, Siddharth Avinash January 2004 (has links)
Image processing plays a key role in vision systems. Its function is to extract and enhance pertinent information from raw data. In robotics, the processing of real-time data is constrained by limited resources, so it is important to understand and analyse image processing algorithms for accuracy, speed, and quality. The theme of this thesis is the implementation and comparative study of algorithms for various image processing techniques such as edge detection, corner detection and thinning. A re-interpretation of a standard technique, non-maxima suppression for corner detectors, was attempted. In addition, a thinning filter, Hall-Guo, was modified to achieve better results. Because real-time data is generally corrupted with noise, this thesis also incorporates a few smoothing filters that help in noise reduction. Apart from comparing and analysing algorithms for these techniques, an attempt was made to implement correlation-based optic flow.
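A short sketch of the corner-detection stage with non-maxima suppression, in Python with OpenCV: a Harris response survives only where it equals the local maximum. Window sizes and the relative threshold are illustrative, not the thesis's evaluated settings:

```python
import cv2
import numpy as np

def harris_corners(gray, nms_size=5, rel_thresh=0.01):
    """Harris corner detection followed by non-maxima suppression."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    # Grayscale dilation gives each pixel the max response in its window;
    # a pixel is kept only if it *is* that local maximum.
    local_max = cv2.dilate(response, np.ones((nms_size, nms_size), np.uint8))
    keep = (response == local_max) & (response > rel_thresh * response.max())
    return np.argwhere(keep)  # (row, col) coordinates of detected corners
```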
129

Car position detection using digital video signal processing

Παγώνης, Μελέτιος 04 May 2011 (has links)
The goal of this thesis is the study, development, and partial implementation of methods for detecting the position of a vehicle. Particular emphasis is given to the study and analysis of optical flow, which is considered fundamental compared to the other methods. Finally, a method for image segmentation is also analysed.
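A minimal sketch of the optical-flow idea in Python with OpenCV: a moving vehicle is segmented as the dominant connected region of large flow magnitude. The flow algorithm and threshold are illustrative assumptions, not the thesis's method:

```python
import cv2
import numpy as np

def vehicle_bbox(prev_gray, gray, mag_thresh=2.0):
    """Return the bounding box of the largest moving region, or None."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.hypot(flow[..., 0], flow[..., 1])
    mask = (mag > mag_thresh).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:  # label 0 is the background
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x, y, w, h = stats[largest, :4]
    return int(x), int(y), int(w), int(h)
```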
130

Signal and image processing on modern graphics processors

Pettersson, Erik January 2005 (has links)
The modern graphics processing unit (GPU) is extremely powerful, with performance potentially many times higher than that of a modern microprocessor. As the GPU has become increasingly programmable, it has become possible to use it for computation-intensive applications outside its normal domain. This work investigates the possibilities and limitations of general-purpose programming on GPUs. It concentrates mainly on signal and image processing, although many of the principles are applicable to other areas as well. A framework for image processing on GPUs is implemented, and a few computer vision algorithms are implemented and evaluated, among them stereo vision and optical flow computation. The results show that some applications can gain a substantial speedup when implemented correctly on the GPU, while others can be inefficient or extremely hard to implement.
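A modern illustration of the principle in Python; the thesis itself targets 2005-era programmable GPU shaders, so CuPy here is an assumed stand-in chosen only to show the same data-parallel offloading pattern:

```python
import numpy as np
from scipy import ndimage as cpu_ndimage
import cupy as cp                       # assumes a CUDA-capable GPU
from cupyx.scipy import ndimage as gpu_ndimage

image = np.random.rand(4096, 4096).astype(np.float32)
kernel = np.ones((9, 9), np.float32) / 81.0   # simple box blur

# Same convolution on the microprocessor and on the GPU; on large images the
# data-parallel GPU version typically runs many times faster.
cpu_result = cpu_ndimage.convolve(image, kernel)
gpu_result = cp.asnumpy(gpu_ndimage.convolve(cp.asarray(image),
                                             cp.asarray(kernel)))
assert np.allclose(cpu_result, gpu_result, atol=1e-4)
```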
