251

Detecting And Tracking Moving Objects With An Active Camera In Real Time

Karakas, Samet, 01 September 2011
Moving object detection techniques can be divided into two categories based on the type of camera: static or active. Methods for static cameras detect moving objects from the changing regions of the video frame. However, the same approach is not suitable for active cameras; moving object detection with an active camera generally requires more complex algorithms and dedicated solutions. The aim of this thesis work is real-time detection and tracking of moving objects with an active camera. For this purpose, feature-based algorithms are implemented because of their computational efficiency, with SURF (Speeded Up Robust Features) as the main feature descriptor. The algorithm is developed in C++ using the OpenCV library. It is capable of detecting and tracking moving objects with a PTZ (Pan-Tilt-Zoom) camera at a frame rate of approximately 5 fps and a resolution of 640x480.
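The general idea behind feature-based motion detection with a moving camera can be sketched as below, assuming OpenCV in Python. SURF requires the non-free contrib build of OpenCV, so ORB is used here as a freely available substitute; the video path and all parameters are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch: compensate camera motion by matching features between frames,
# then treat RANSAC outliers as candidate moving-object features.
import cv2
import numpy as np

cap = cv2.VideoCapture("ptz.avi")                 # hypothetical input video
detector = cv2.ORB_create(nfeatures=1000)         # ORB as a stand-in for SURF
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    kp1, des1 = detector.detectAndCompute(prev_gray, None)
    kp2, des2 = detector.detectAndCompute(gray, None)
    if des1 is None or des2 is None:
        prev_gray = gray
        continue

    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        prev_gray = gray
        continue
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate the camera-induced global motion; inliers follow the background,
    # outliers are candidate moving-object features.
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    if H is None or inlier_mask is None:
        prev_gray = gray
        continue
    moving = pts2[inlier_mask.ravel() == 0]

    for x, y in moving:
        cv2.circle(frame, (int(x), int(y)), 3, (0, 0, 255), -1)
    cv2.imshow("moving features", frame)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break
    prev_gray = gray
```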
252

Vision-assisted Object Tracking

Ozertem, Kemal Arda, 01 February 2012
In this thesis, a video tracking method is proposed that is based on both computer vision and estimation theory. For this purpose, the overall study is partitioned into four related subproblems. The first part is moving object detection, for which two different background modeling methods are developed. The second part is feature extraction and estimation of the optical flow between video frames. As the feature extraction method, a well-known corner detector is employed, applied only to the moving regions of the scene. For the feature points, optical flow vectors are calculated using an improved version of the Kanade-Lucas tracker. The resulting optical flow field between consecutive frames is used directly in the proposed tracking method. In the third part, a particle filter structure is built to perform tracking; the particle filter is improved by adding the optical flow data to the state equation as a correction term. In the last part of the study, the performance of the proposed approach is compared against standard implementations of particle-filter-based trackers. Based on the simulation results, it can be argued that inserting vision-based optical flow estimation into the tracking formulation improves the overall performance.
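As an illustration of the two vision components described above (corner features restricted to moving regions, and pyramidal Lucas-Kanade flow feeding a particle filter), a minimal Python/OpenCV sketch is given below; the function names, mask handling and noise levels are assumptions, not the thesis code.

```python
# Sketch: Shi-Tomasi corners + pyramidal Lucas-Kanade flow, with the mean flow
# used as a correction term in a simple constant-velocity particle prediction.
import cv2
import numpy as np

def flow_vectors(prev_gray, gray, motion_mask=None):
    # Detect corners only inside the moving regions (mask), as described above.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01,
                                  minDistance=7, mask=motion_mask)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return pts.reshape(-1, 2)[good], nxt.reshape(-1, 2)[good]

def predict_particles(particles, flow_prev, flow_next, noise_std=2.0):
    # particles: (N, 2) positions; process noise plus an optical-flow correction
    # term, echoing the modified state equation described in the abstract.
    correction = np.zeros(2)
    if len(flow_prev) > 0:
        correction = (flow_next - flow_prev).mean(axis=0)
    return particles + correction + np.random.normal(0, noise_std, particles.shape)
```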
253

Global Appearance Based Airplane Detection From Satellite Imagery

Arslan, Duygu, 01 August 2012
There is rising interest in geospatial object detection, due not only to the complexity of manually processing the huge amount of data provided by high-resolution satellite imagery but also to military application needs. A fundamental and yet state-of-the-art approach to object detection is based on methods that exploit global appearance. In such a holistic approach, the object class is modeled as a whole during the learning phase, and during classification a decision is taken at each window of the test image. In this thesis, two different discriminative methods are investigated for airplane detection from satellite images. In the first method, Haar-like features are used as weak classifiers for the airplane class representation. The AdaBoost learning algorithm is then used to select the visual features that best represent airplanes. Finally, a cascade of classifiers is constructed in order to speed up classification. In the second method, a computationally efficient appearance-based algorithm for airplane detection is presented. An operator exploiting edge information via gray-level differences between the target and its background is constructed from Haar-like polygon regions, using the shape of the airplane as an invariant. Airplanes matching the operator are expected to yield higher responses around the centroid of the object. Fast evaluation of the operator is achieved by means of the integral image. The proposed algorithm gives promising results in terms of accuracy in detecting aircraft-type geospatial objects from satellite imagery.
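The integral-image trick that makes such operators fast to evaluate can be sketched in a few lines of Python; the rectangle layout below is a generic two-rectangle edge feature, not the thesis' airplane-shaped operator.

```python
# Minimal sketch of constant-time rectangle sums via a summed-area table.
import numpy as np

def integral_image(img):
    # Summed-area table with a zero row/column for easy indexing.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    # Sum of pixels in the rectangle [y, y+h) x [x, x+w) in O(1).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_edge_response(ii, y, x, h, w):
    # Two-rectangle feature: gray-level difference between the left half
    # (target) and the right half (background) of the window.
    left = rect_sum(ii, y, x, h, w // 2)
    right = rect_sum(ii, y, x + w // 2, h, w // 2)
    return left - right

img = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder image
ii = integral_image(img)
print(haar_edge_response(ii, 10, 10, 24, 24))
```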
254

Parametric kernels for structured data analysis

Shin, Young-in, 04 May 2015
Structured representation of input physical patterns as a set of local features has been useful for a variety of robotics and human-computer interaction (HCI) applications, as it enables a stable understanding of variable inputs. However, this representation does not fit conventional machine learning algorithms and distance metrics, because they assume vector inputs; learning from input patterns with variable structure is thus challenging. To address this problem, I propose a general and systematic method to design distance metrics between structured inputs that can be used in conventional learning algorithms. Based on the observation that the geometric distributions of local features over the physical patterns are stable across similar inputs, this is done by combining the local similarities with the conformity of the geometric relationship between local features. The resulting distance metrics, called "parametric kernels", are positive semi-definite and require almost linear time to compute. To demonstrate the general applicability and efficacy of this approach, I designed and applied parametric kernels to handwritten character recognition, on-line face recognition, and object detection from laser range finder sensor data. Parametric kernels achieve recognition rates competitive with state-of-the-art approaches in these tasks.
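To give a flavour of kernels over sets of local features, the sketch below combines descriptor similarity with geometric conformity in a simple match kernel. It only illustrates the general idea and is not the thesis' parametric kernel; in particular it is quadratic, rather than near-linear, in the number of features.

```python
# Match-kernel sketch between two sets of local features.
import numpy as np

def match_kernel(feats_a, pos_a, feats_b, pos_b, gamma_f=0.5, gamma_p=0.05):
    # feats_*: (n, d) local descriptors; pos_*: (n, 2) feature locations.
    # Sum over all pairs of (descriptor RBF) * (position RBF); a sum of
    # products of PSD kernels is itself positive semi-definite.
    df = ((feats_a[:, None, :] - feats_b[None, :, :]) ** 2).sum(-1)
    dp = ((pos_a[:, None, :] - pos_b[None, :, :]) ** 2).sum(-1)
    return np.sum(np.exp(-gamma_f * df) * np.exp(-gamma_p * dp))

# A Gram matrix built from this function can be fed to any kernelized learner,
# e.g. an SVM with a precomputed kernel.
```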
255

Semi-Supervised Learning for Object Detection

Rosell, Mikael, January 2015
Many automotive safety applications in modern cars use cameras and object detection to analyze the surrounding environment. Pedestrians, animals and other vehicles can be detected, and safety actions can be taken before dangerous situations arise. To detect occurrences of the different objects, these systems are traditionally trained to learn a classification model from a set of images that carry labels corresponding to their content. To obtain high performance over a variety of object appearances, the required amount of data is very large. Acquiring unlabeled images is easy, while the manual work of labeling is both time-consuming and costly. Semi-supervised learning refers to methods that utilize both labeled and unlabeled data, which is highly desirable if it can improve accuracy while alleviating the demand for labeled data. This has been an active area of research in the last few decades, but few studies have investigated the performance of these algorithms in larger systems. In this thesis, we investigate if and how semi-supervised learning can be used in a large-scale pedestrian detection system. Since the application area is automotive safety, where real-time performance is of high importance, the work is focused on boosting classifiers. Results are presented on a few publicly available UCI data sets and on a large data set for pedestrian detection captured in real-life traffic situations. Evaluating the algorithms on the pedestrian data set adds the complexity of data set size, a large variety of object appearances and high input dimension. It is possible to find low-dimensional situations where an additional set of unlabeled data can be used successfully to improve a classification model, but the results show that it is hard to utilize semi-supervised learning efficiently in large-scale object detection systems: the methods are hard to scale to large, high-dimensional data sets, as pair-wise computations are expensive and proper similarity measures are hard to find.
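One standard way to combine labeled and unlabeled data with a boosting classifier is self-training with confident pseudo-labels; the sketch below uses scikit-learn's AdaBoostClassifier, and the confidence threshold and iteration count are arbitrary assumptions rather than values from the thesis.

```python
# Self-training sketch: iteratively add confidently pseudo-labeled examples.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def self_train(X_lab, y_lab, X_unlab, rounds=3, conf_thresh=0.95):
    clf = AdaBoostClassifier(n_estimators=100)
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        conf = proba.max(axis=1)
        sure = conf >= conf_thresh            # keep only confident pseudo-labels
        if not sure.any():
            break
        pseudo = clf.classes_[proba[sure].argmax(axis=1)]
        X = np.vstack([X, pool[sure]])
        y = np.concatenate([y, pseudo])
        pool = pool[~sure]                    # shrink the unlabeled pool
    return clf
```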
256

Moving Object Identification And Event Recognition In Video Surveillance Systems

Orten, Burkay Birant, 01 August 2005
This thesis is devoted to defining and developing the basic building blocks of an automated surveillance system. As an initial step, a background-modeling algorithm is described for segmenting moving objects from the background; it is capable of adapting to dynamic scene conditions as well as determining the shadows of moving objects. After obtaining binary silhouettes for targets, object association between consecutive frames is achieved by a hypothesis-based tracking method. Both of these tasks provide basic information for higher-level processing, such as activity analysis and object identification. In order to recognize the nature of an event occurring in a scene, hidden Markov models (HMMs) are utilized. To this end, object trajectories obtained from a successful track are written as sequences of flow vectors that capture instantaneous velocity and location information. HMMs are trained with sequences obtained from usual motion patterns, and abnormality is detected by measuring the distance to these models. Finally, MPEG-7 visual descriptors are utilized in a regional manner for object identification. Color structure and homogeneous texture parameters of the independently moving objects are extracted, and classifiers such as the Support Vector Machine (SVM) and the Bayesian plug-in rule (Mahalanobis distance) are used to test the performance of the proposed person identification mechanism. Simulations with all the above building blocks give promising results, indicating the possibility of constructing a fully automated surveillance system in the future.
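The trajectory-abnormality idea (train HMMs on usual motion, flag low-likelihood sequences) can be sketched as follows, assuming the hmmlearn package; the flow-vector layout, number of states and threshold are illustrative assumptions.

```python
# HMM-based abnormality scoring for object trajectories.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_normal_model(trajectories, n_states=5):
    # trajectories: list of (T_i, 4) arrays of [x, y, vx, vy] flow vectors.
    X = np.vstack(trajectories)
    lengths = [len(t) for t in trajectories]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def is_abnormal(model, trajectory, log_lik_thresh=-50.0):
    # Low per-frame log-likelihood under the "usual motion" model -> abnormal.
    score = model.score(trajectory) / len(trajectory)
    return score < log_lik_thresh
```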
257

Multiple hypothesis tracking for multiple visual targets

Turker, Burcu, 01 April 2010
The visual target tracking problem consists of two topics: obtaining targets from camera measurements, and target tracking itself. Even though it has been studied for more than 30 years, some problems remain unsolved, especially in the multiple-target case: association of measurements to targets, creation of new tracks and deletion of old ones. Moreover, it is important to handle occlusion and crossing targets appropriately. We believe that a slightly modified version of multiple hypothesis tracking can deal with most of the aforementioned problems with sufficient success. Distance, track size, track color, gate size and track history are used as parameters to evaluate the hypotheses generated for the measurement-to-track association problem, whereas size and color are used as parameters for the occlusion problem. The overall tracker has been fine-tuned on a set of scenarios, and it has been observed that it also performs well on the test scenarios. Furthermore, the performance of the tracker is analyzed with respect to these parameters in both association and occlusion handling situations.
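A toy scoring function for a single measurement-to-track association hypothesis, using the cues listed above, might look like the sketch below; the weights, gating radius and colour measure are illustrative assumptions, not the tuned values of the thesis.

```python
# Toy hypothesis score combining distance, size, colour, gating and history.
import numpy as np

def hypothesis_score(track, meas, gate_size=50.0,
                     w_dist=0.4, w_size=0.2, w_color=0.3, w_hist=0.1):
    dist = np.linalg.norm(np.asarray(track["position"]) - np.asarray(meas["position"]))
    if dist > gate_size:                      # outside the gate: reject outright
        return 0.0
    dist_term = 1.0 - dist / gate_size
    size_term = 1.0 - abs(track["size"] - meas["size"]) / max(track["size"], meas["size"])
    # Colour similarity as the intersection of normalised colour histograms.
    color_term = np.minimum(track["hist"], meas["hist"]).sum()
    hist_term = min(track["age"], 10) / 10.0  # longer track history, more trust
    return w_dist * dist_term + w_size * size_term + w_color * color_term + w_hist * hist_term
```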
258

Object Detection in Infrared Images using Deep Convolutional Neural Networks

Jangblad, Markus, January 2018
This master's thesis applies object detection (OD) with deep convolutional neural networks (DCNNs) to infrared (IR) images. The goal is to use both long-wave infrared (LWIR) and short-wave infrared (SWIR) images taken from an airplane in order to train a DCNN to detect runways, Precision Approach Path Indicator (PAPI) lights, and approach lights. These objects are detected in IR images because IR light transmits better than visible light under certain weather conditions, for example fog, so such a system could help the pilot detect the runway in bad weather. The RetinaNet model architecture was used and modified in different ways to find the best-performing model. The models contain parameters that are found during the training process, but some parameters, called hyperparameters, need to be determined in advance. A way to find good values of these hyperparameters automatically was also tested: Bayesian optimization produced a model with performance equal to the best achieved by the author through manual hyperparameter tuning. The OD system was implemented using Keras with a TensorFlow backend and reached high performance (mAP = 0.9245) on the test data. The system manages to detect the desired objects in the images but is expected to perform worse in a general setting, since the training and test data are very similar. In order to develop this system further and to improve performance under general conditions, more data is needed from other airfields and under different weather conditions.
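Bayesian hyperparameter optimisation of a detector can be sketched as below with scikit-optimize's gp_minimize; the search space is illustrative, and `train_and_eval` is a stand-in for training a RetinaNet-style model and returning its validation mAP, not the thesis code.

```python
# Bayesian hyperparameter search sketch with a Gaussian-process surrogate.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real, Integer

def train_and_eval(learning_rate, batch_size, anchor_scale):
    # Stand-in for "train the detector, evaluate on validation data, return mAP".
    # A synthetic score is returned here so the sketch runs end-to-end.
    return 0.9 - abs(np.log10(learning_rate) + 3.5) * 0.05 - abs(anchor_scale - 1.0) * 0.02

def objective(params):
    lr, batch_size, anchor_scale = params
    return -train_and_eval(lr, batch_size, anchor_scale)   # minimise negative mAP

space = [Real(1e-5, 1e-2, prior="log-uniform", name="learning_rate"),
         Integer(1, 8, name="batch_size"),
         Real(0.5, 2.0, name="anchor_scale")]

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print("best hyperparameters:", result.x, "best score:", -result.fun)
```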
259

People detection methods for intelligent multi-camera surveillance systems

Mehmood, Muhammad Owais, 28 September 2015
People detection in video is a well-studied open challenge in computer vision, with applications such as visual surveillance systems. Monocular detectors have a limited ability to handle occlusion, clutter, scale changes and crowd density, and the ubiquitous presence of cameras and computational resources fuels the development of multi-camera detection systems. In this thesis, we study multi-camera people detection, specifically the use of multi-view probabilistic occupancy maps based on camera calibration. Occupancy maps allow a geometric fusion of several camera views, but detection on such maps produces false detections ("ghosts"), a phenomenon we study and prune. We propose two novel techniques to improve multi-view detection, one based on deconvolution with a spatially varying kernel and the other on occupancy-shape modeling with hypothesis validation. Both deliberately avoid temporal information, which can be reintroduced later by tracking algorithms, and perform multi-view reasoning in the occupancy maps to recover accurate positions of people under challenging conditions such as occlusion, clutter, lighting and camera variations. We show improvements in people detection on three challenging public datasets for visual surveillance, including comparison with state-of-the-art techniques, and demonstrate the application of this work in demanding transportation scenarios, i.e. people detection for surveillance at a train station and at an airport.
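The core fusion step, projecting per-camera detections onto a common ground plane and accumulating them into an occupancy map, can be sketched as follows; the grid size, cell size and homographies are illustrative assumptions. Cells supported by several views are likely people, while cells supported by a single view are candidate ghosts.

```python
# Ground-plane occupancy map built from multiple calibrated camera views.
import numpy as np

def to_ground_plane(H, image_points):
    # H: 3x3 image-to-ground homography; image_points: (n, 2) foot points.
    pts = np.hstack([image_points, np.ones((len(image_points), 1))])
    g = (H @ pts.T).T
    return g[:, :2] / g[:, 2:3]

def occupancy_map(detections_per_camera, homographies, grid=(200, 200), cell=0.1):
    occ = np.zeros(grid)
    for pts, H in zip(detections_per_camera, homographies):
        if len(pts) == 0:
            continue
        ground = to_ground_plane(H, np.asarray(pts, dtype=float))
        ij = np.floor(ground / cell).astype(int)
        ok = (ij[:, 0] >= 0) & (ij[:, 0] < grid[0]) & (ij[:, 1] >= 0) & (ij[:, 1] < grid[1])
        np.add.at(occ, (ij[ok, 0], ij[ok, 1]), 1.0)   # accumulate per-view evidence
    return occ
```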
260

Automatic Waterjet Positioning Vision System

Dziak, Damian; Jachimczyk, Bartosz; Jagusiak, Tomasz, January 2012
The goal of this work is the design and implementation of a new vision system integrated with a waterjet machine. The system combines two commercial webcams mounted on a dedicated industrial platform. The main purpose of the vision system is to detect the position and rotation of a workpiece placed on the machine table. The object recognition algorithm consists of edge detection, standard mathematical processing functions and noise filters. The Hough transform is used to extract the lines of a workpiece and their intersections. A metric rectification method is used to obtain a top view of the workspace and to align the image coordinate system with the waterjet machine coordinates. In-situ calibration procedures for both webcams are developed and implemented. Experimental results with the proposed vision system prototype confirm the required performance and precision of workpiece detection.
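The processing chain described above (noise filtering, edge detection, Hough lines, metric rectification) can be sketched with OpenCV as follows; the file name, Hough parameters and the four reference points are illustrative assumptions, not the system's calibration data.

```python
# Sketch: edges -> Hough line segments -> perspective rectification to a top view.
import cv2
import numpy as np

img = cv2.imread("table.png")                    # hypothetical webcam frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)         # noise filtering
edges = cv2.Canny(gray, 50, 150)                 # edge detection

# Extract straight workpiece edges with the probabilistic Hough transform.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=10)
print(0 if lines is None else len(lines), "line segments found")

# Metric rectification: map four known table corners (image pixels) to their
# machine coordinates (millimetres) to obtain a top view aligned with the
# waterjet coordinate system.
img_corners = np.float32([[120, 80], [520, 90], [530, 420], [110, 410]])
machine_corners = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])
H = cv2.getPerspectiveTransform(img_corners, machine_corners)
top_view = cv2.warpPerspective(img, H, (400, 300))
```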
