  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Illumination compensation in video surveillance analysis

Bales, Michael Ryan 30 March 2011 (has links)
Problems in automated video surveillance analysis caused by illumination changes are explored, and solutions are presented. Controlled experiments are first conducted to measure the responses of color targets to changes in lighting intensity and spectrum. Surfaces of dissimilar color are found to respond significantly differently. Illumination compensation model error is reduced by 70% to 80% by individually optimizing model parameters for each distinct color region, and applying a model tuned for one region to a chromatically different region increases error by a factor of 15. A background model--called BigBackground--is presented to extract large, stable, chromatically self-similar background features by identifying the dominant colors in a scene. The stability and chromatic diversity of these features make them useful reference points for quantifying illumination changes. The model is observed to cover as much as 90% of a scene, and pixels belonging to the model are 20% more stable on average than non-member pixels. Several illumination compensation techniques are developed to exploit BigBackground, and are compared with several compensation techniques from the literature. Techniques are compared in terms of foreground / background classification, and are applied to an object tracking pipeline with kinematic and appearance-based correspondence mechanisms. Compared with other techniques, BigBackground-based techniques improve foreground classification by 25% to 43%, improve tracking accuracy by an average of 20%, and better preserve object appearance for appearance-based trackers. All algorithms are implemented in C or C++ to support the consideration of runtime performance. In terms of execution speed, the BigBackground-based illumination compensation technique is measured to run on par with the simplest compensation technique used for comparison, and consistently achieves twice the frame rate of the two next-fastest techniques.
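The abstract above describes extracting a background model from the dominant colors of a scene. As a rough illustration of that idea (not the author's BigBackground implementation; the quantization level and number of dominant colors here are made-up parameters), one could label as "background members" the pixels whose quantized color falls into the scene's most frequent color bins:

```python
import numpy as np

def dominant_color_mask(frames, levels=8, top_k=4):
    """Label pixels whose quantized color is among the scene's dominant colors.

    A sketch of the dominant-color idea only; `levels` and `top_k` are
    illustrative choices, not values from the thesis.
    """
    stack = np.stack(frames)                        # (T, H, W, 3) uint8 frames
    q = (stack // (256 // levels)).astype(np.int32)
    # Encode each quantized RGB triple as a single histogram bin index.
    bins = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]
    counts = np.bincount(bins.ravel(), minlength=levels ** 3)
    dominant = np.argsort(counts)[-top_k:]          # most frequent color bins
    # A pixel joins the model if its temporal median color is dominant.
    median_bin = np.median(bins, axis=0).astype(np.int32)
    return np.isin(median_bin, dominant)            # (H, W) boolean mask
```

Pixels covered by such a mask would then serve as the stable reference points for quantifying illumination changes, as the abstract describes.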
2

Utilisation du contexte pour la détection et le suivi d'objets en vidéosurveillance / Using the context for objects detection and tracking in videosurveillance

Rogez, Matthieu 09 June 2015 (has links)
Video-surveillance cameras are increasingly present in our environment (cities, supermarkets, airports, warehouses, etc.). They are used, among other things, to detect suspicious behavior (an intrusion, for instance) or to recognize a specific category of object or person (gender detection, license-plate detection, for example). Other applications count people or vehicles entering and leaving, or track one or more objects moving through the camera's field of view (object trajectories, analysis of customer behavior in a store). Given the growing number of cameras and the difficulty of performing these tasks manually, a range of video-analysis methods has been developed in recent years to automate them. In this thesis, we focus mainly on the detection and tracking of moving objects from a fixed camera.
Unlike methods based solely on the images captured by the cameras, our approach integrates contextual information in order to better interpret those images. We therefore build a geometric and geolocalized model of the scene and the camera. This model is constructed directly from the pre-deployment studies of the cameras and can use OpenStreetMap data to build 3D models of the buildings near the camera. We extended the model with the ability to predict the position of the Sun throughout the day, and hence the shadows cast by objects in the scene. By predicting these shadows and removing them from the foreground mask, our method improves the background-model segmentation of pedestrians. For the tracking of moving objects, we use the formalism of finite-state machines to model the possible states and transitions of an object, which lets us tailor the processing of each object to its state. Inter-object occlusions are handled with a collective tracking strategy: for the duration of an occlusion the objects involved are tracked as a group, and at the end of the occlusion each object is re-identified and individual tracking resumes. Our algorithm adapts to any type of ground-moving object (pedestrians, vehicles, etc.) and integrates naturally into the scene model. We also developed several feedback mechanisms that exploit knowledge of the tracked objects to improve the detections obtained from the background model; in particular, we address stationary objects, which are often erroneously absorbed into the background, and revisit the shadow-removal step in light of the tracked objects. All proposed solutions are implemented in the Foxstream products and satisfy the real-time execution constraint required in video surveillance.
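The finite-state tracking described in this abstract can be pictured as a small state machine per tracked object. The states and transitions below are illustrative only, not the thesis's exact automaton:

```python
# Hypothetical per-object states; the thesis's actual automaton may differ.
ALLOWED = {
    "new":        {"tracked", "lost"},
    "tracked":    {"tracked", "occluded", "stationary", "lost"},
    "occluded":   {"tracked", "lost"},     # re-identified after group tracking
    "stationary": {"tracked", "lost"},     # kept out of the background model
    "lost":       set(),
}

class TrackedObject:
    """A tracked object whose processing depends on its current state."""

    def __init__(self):
        self.state = "new"

    def transition(self, new_state):
        # Reject transitions the automaton does not allow.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

Modeling occlusion and stationarity as explicit states makes it natural to, e.g., suspend appearance updates while an object is occluded and to protect stationary objects from being absorbed into the background model.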
3

Object Detection in Dynamic Background / Détection d’objets dans un fond dynamique

Ali, Imtiaz 05 March 2012 (has links)
Moving-object detection is one of the main challenges in many video-monitoring applications. In this thesis, we address the difficult problem of object segmentation when the background moves permanently, as happens when the scene contains flowing water, smoke, flames, snowfall, rainfall, etc. Object detection in a moving background has received little attention in the literature so far: the backgrounds usually studied are static scenes, or contain only small moving regions (fluttering leaves, for example) or brightness changes. The main difficulty in such scenes is to distinguish object motion from background motion, which may be almost identical; an object drifting in a river, for instance, can move at the same speed as the water. Motion-based techniques relying on displacement fields therefore fail to discriminate objects from the background, and background-modeling approaches generate many false detections. It is in this complex context that we propose solutions. Object segmentation can rely on different criteria, including color, texture, shape and motion, and we propose methods taking one or more of these criteria into account.
We first worked on the specific problem of detecting dead wood in rivers, brought to us by the geographers with whom we collaborated in the DADEC project (Détection Automatique de Débris pour l'Aide à l'Etude des Crues). In this context, we propose two approaches: a "naïve" method based on binary decisions over object color and motion, and a probabilistic image model combining the wood intensity distribution with pixel motion. The resulting segmentations are used to track and count pieces of wood in rivers. Secondly, assuming a priori knowledge of the object motion in an arbitrary context, we propose an object motion model and show that incorporating this motion prior clearly improves the segmentation results of the main background-modeling algorithms found in the literature. Finally, drawing inspiration from methods used to characterize 2D textures, we propose a frequency-based background model: it takes into account not only the spatial neighborhood of a pixel but also its temporal neighborhood, applying a local Fourier transform to the spatio-temporal neighborhood of each pixel in order to extract spatio-temporal color patterns.
We applied our methods to several videos, including the river videos of the DADEC project, image sequences from the DynTex database, synthetic videos and videos of our own, and we compare our object-detection results with existing methods on real and synthetic videos, both quantitatively and qualitatively.
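The frequency-based background model in this abstract builds a signature from the Fourier transform of a pixel's spatio-temporal neighborhood. A toy version of that idea might look as follows (window sizes and the matching threshold are arbitrary choices, not the thesis's values):

```python
import numpy as np

def spatiotemporal_signature(video, y, x, half=2, depth=3):
    """Normalized magnitude spectrum of a pixel's space-time neighborhood.

    `video` is a (T, H, W) grayscale array; the last `depth` frames and a
    (2*half+1)-wide spatial window around (y, x) form the neighborhood.
    """
    t0 = video.shape[0] - depth
    patch = video[t0:, y - half:y + half + 1, x - half:x + half + 1]
    spec = np.abs(np.fft.fftn(patch))        # 3D FFT over (t, y, x)
    return spec / (spec.sum() + 1e-9)        # normalize the spectrum

def is_background(video, model_sig, y, x, thresh=0.5):
    """Declare a pixel background if its current signature matches the model."""
    sig = spatiotemporal_signature(video, y, x)
    return np.abs(sig - model_sig).sum() < thresh
```

A dynamic background (rippling water, for example) keeps a characteristic spectrum even though individual pixel values change constantly, which is what makes a frequency-domain signature attractive for such scenes.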
4

Stanovení podobnosti objektů / Object similarity detection

Přidal, Oldřich January 2011 (has links)
The aim of this thesis was to create a program for finding and segmenting objects in an image and detecting their similarity; the objects are represented by cars. The theoretical part describes image acquisition, image preprocessing, geometric transforms and the Hough transform, as well as basic morphological operations, corner-detection algorithms and methods of object-similarity detection. The practical part covers the realization of the individual stages, from image acquisition through the analysis of the main program and auxiliary functions to the evaluation of the similarity results. The main program is divided into four parts: the first preprocesses the image, the second applies the geometric transforms, the third detects object similarity, and the last presents the results. The algorithm is implemented in C++ using the OpenCV library.
5

Rekonstrukce pozadí z několika fotografií / Background Reconstruction from Several Photographs

Motáček, Vladimír January 2010 (has links)
This thesis concerns background reconstruction from several photographs (the so-called scene-depopulation effect). Methods for extracting the background from video are presented, together with a discussion of their applicability to photographs. The greatest emphasis is placed on the Gaussian mixture model and on adapting this algorithm to static images. The photographs should be taken with a tripod.
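The thesis adapts a per-pixel Gaussian mixture model to a stack of still photographs. As a much simpler stand-in for that idea (a sketch, not the thesis's method), a per-pixel temporal median already removes transient foreground when each pixel shows the true background in most of the aligned shots:

```python
import numpy as np

def reconstruct_background(photos):
    """Per-pixel temporal median over aligned photographs.

    A simplified substitute for the per-pixel Gaussian mixture model: any
    transient occluder (a passer-by, a car) that covers a pixel in only a
    minority of the shots is voted out by the median.
    """
    stack = np.stack(photos).astype(np.float64)   # (N, H, W) or (N, H, W, C)
    return np.median(stack, axis=0)
```

The tripod requirement mentioned in the abstract is what makes this per-pixel reasoning valid: without alignment, the same array index would not correspond to the same scene point across photographs.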
6

Characterisation of Coincidence Data of the Gerda Experiment to Search for Double Beta Decays to Excited States

Wester, Thomas 29 January 2020 (has links)
The GERDA experiment is searching for the neutrinoless double beta (0vbb) decay of Ge-76, and thereby tries to answer two long-standing questions about the neutrino: 'How large is the neutrino mass?' and 'Is the neutrino a Dirac or a Majorana particle?'. Additionally, an observation would imply that lepton number is not conserved, an important puzzle piece for theories explaining the matter-antimatter asymmetry of the universe. The effective Majorana electron-neutrino mass can be extracted from the half-life of the 0vbb decay; however, this conversion carries large uncertainties from the nuclear matrix elements, which are calculated with a variety of theoretical models. Experimental input is required to constrain such models and their parameters and so improve the reliability of the calculations. Additional input can be obtained by comparing measurements with the model predictions for the two-neutrino double beta (2vbb) decay to the ground state, but also for the decay modes to excited states of the daughter nuclide; the latter decay modes have not yet been observed in the case of Ge-76. The event signature of transitions to excited states is enhanced by de-excitation gamma rays. The GERDA experiment employs an array of bare germanium semiconductor detectors in a liquid-argon cryostat. This array is suited to searching for excited-state transitions in the 2vbb and 0vbb decay modes using data with coincident energy depositions in multiple detectors. This work presents the preparation and characterisation of this data set, which includes the evaluation and correction of crosstalk between detector channels, the determination of the energy resolution of the detectors and the modelling of the background. In an analysis combining 22 kg yr of Phase I data with the first 35 kg yr of Phase II data of GERDA, no signal has been observed for 2vbb or 0vbb decays of Ge-76 to the three energetically lowest excited states of Se-76.
New limits have been set for the two-neutrino decay modes at T1/2(2v)(0+g.s. to 0+1) > 3.1x10^23 yr, T1/2(2v)(0+g.s. to 2+1) > 3.4x10^23 yr and T1/2(2v)(0+g.s. to 2+2) > 2.5x10^23 yr with 90% credibility using a Bayesian approach, improving upon the limits obtained in Phase I. The corresponding sensitivities are 3.6x10^23 yr, 6.7x10^23 yr and 3.7x10^23 yr, respectively. First limits are set for the neutrinoless decay modes, of the order of 10^24 to 10^25 yr. On reaching the envisaged Phase II exposure of 100 kg yr, the sensitivities will increase by up to 50%.
7

Vizuální systém pro detekci obsazenosti parkoviště pomocí hlubokých neuronových sítí / Visual Car-Detection on the Parking Lots Using Deep Neural Networks

Stránský, Václav January 2017 (has links)
The concept of smart cities is inherently connected with efficient parking solutions based on knowledge of individual parking-space occupancy. The subject of this paper is the design and implementation of a robust system for analyzing parking-space occupancy from a multi-camera system, allowing for visual overlap between cameras. The system is designed and implemented in the Robot Operating System (ROS) and its core consists of two separate classifiers. The more accurate but slower option is detection by a deep neural network; fast response is provided by a less accurate motion classifier built on a background model. The system is capable of working in real time on a graphics card as well as on a processor. Its success rate on a test data set from real operation exceeds 95%.
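The "fast but less accurate" classifier described above pairs a background model with a per-space motion test. A minimal sketch of that pattern (a running-average background, with made-up learning rate and threshold; not the thesis's implementation) could be:

```python
import numpy as np

class MotionOccupancy:
    """Running-average background model for one parking-space ROI.

    Illustrative only: `alpha` (learning rate) and `thresh` (mean absolute
    difference, in gray levels) are arbitrary values.
    """

    def __init__(self, first_frame, alpha=0.05, thresh=25.0):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha
        self.thresh = thresh

    def update(self, frame, roi):
        """Return True if the ROI (y0, y1, x0, x1) deviates from the background."""
        frame = frame.astype(np.float64)
        y0, y1, x0, x1 = roi
        diff = np.abs(frame - self.bg)[y0:y1, x0:x1].mean()
        # Blend the new frame into the background model.
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return diff > self.thresh   # True -> space likely occupied
```

In a two-classifier design like the one described, such a cheap test can run on every frame, with the deep network invoked less frequently to confirm or correct the decision.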
8

Detekce a počítání automobilů v obraze (videodetekce) / Videodetection - traffic monitoring

Kozina, Lubomír January 2010 (has links)
In this master's thesis on videodetection and traffic monitoring, I was engaged in finding moving objects in traffic image sequences. The thesis describes various methods of background-model computation and of marking moving vehicles, counting them and calculating their velocity. A graphical user interface for traffic-scene evaluation was created in MATLAB.
9

A performance measurement of a Speaker Verification system based on a variance in data collection for Gaussian Mixture Model and Universal Background Model

Bekli, Zeid, Ouda, William January 2018 (has links)
Voice recognition has become a more focused and researched field in the last century, and new techniques to identify speech have been introduced. One part of voice recognition is speaker verification, which is divided into a front-end and a back-end. The first component, the front-end or feature extraction, uses techniques such as Mel-Frequency Cepstrum Coefficients (MFCC) to extract the speaker-specific features of a speech signal; MFCC is widely used because it is based on the known variation of the human ear's critical frequency bandwidth. The second component, the back-end, handles speaker modeling and is based on the Gaussian Mixture Model (GMM) and Gaussian Mixture Model-Universal Background Model (GMM-UBM) methods for enrollment and verification of a specific speaker. In addition, normalization techniques such as Cepstral Mean Subtraction (CMS) and feature warping are used for robustness against noise and distortion. In this paper, we build a speaker-verification system, experiment with varying amounts of training data for the true-speaker model, and evaluate the system's performance. We further investigate the security of a speaker-verification system by comparing the two methods (GMM and GMM-UBM) to determine which is more secure depending on the amount of training data available. This research therefore contributes to the questions of how much data is really necessary for a secure system in which the false-positive rate is as close to zero as possible, how the amount of training data affects the false-negative (FN) rate, and how this differs between GMM and GMM-UBM. The results show that an increase in speaker-specific training data increases the performance of the system.
However, too much training data has been shown to be unnecessary, because the performance of the system eventually reaches its highest point, in this case at around 48 minutes of data; the results also show that the GMM-UBM models trained on 48 to 60 minutes of data outperformed the GMM models.
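The GMM-UBM back-end described above scores a test utterance by the difference between its log-likelihood under the speaker model and under the universal background model. A self-contained numerical sketch of that scoring step (diagonal-covariance GMMs over already-extracted features; MFCC extraction and model training are omitted, and the shapes are illustrative):

```python
import numpy as np

def gmm_loglik(X, weights, means, variances):
    """Average log-likelihood of feature frames X under a diagonal GMM.

    X: (N, D) frames; weights: (K,); means, variances: (K, D).
    """
    X = np.atleast_2d(X)
    d = X.shape[1]
    # (N, K) squared Mahalanobis terms per component.
    diff2 = (X[:, None, :] - means[None]) ** 2 / variances[None]
    logp = (-0.5 * (diff2.sum(-1) + np.log(variances).sum(-1)
                    + d * np.log(2 * np.pi)) + np.log(weights))
    return np.logaddexp.reduce(logp, axis=1).mean()

def verification_score(X, speaker_gmm, ubm):
    """GMM-UBM decision score: accept the claim when the score is high."""
    return gmm_loglik(X, *speaker_gmm) - gmm_loglik(X, *ubm)
```

Thresholding this score is what trades off the false-positive and false-negative rates studied in the abstract: a higher threshold lowers false accepts at the cost of more false rejects.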
10

Exploring variabilities through factor analysis in automatic acoustic language recognition

Verdet, Florian 05 September 2011 (has links) (PDF)
Language recognition is the problem of discovering the language of a spoken utterance. This thesis approaches it using short-term acoustic information within a GMM-UBM framework. The main problem of many pattern-recognition applications is the variability of the observed data. In the context of language recognition (LR), this troublesome variability is due to speaker characteristics, speech evolution, and acquisition and transmission channels. In speaker recognition, the variability problem is addressed by the Joint Factor Analysis (JFA) technique; here, we introduce this paradigm to language recognition. The success of JFA relies on several assumptions. The global JFA assumption is that the observed information can be decomposed into a universal part, a language-dependent part and a language-independent variability part. The second, more technical assumption is that the unwanted variability lives in a low-dimensional, globally defined subspace. In this work, we analyze how JFA behaves in a GMM-UBM LR framework, and we also introduce and analyze its combination with Support Vector Machines (SVMs). The first JFA publications put all unwanted information (hence the variability) into one and the same component, which is assumed to follow a Gaussian distribution. This handles diverse kinds of variability in a unique manner, but in practice we observe that this hypothesis is not always verified. There is, for example, the case where the data can be divided into two clearly separate subsets, namely data from telephony and from broadcast sources. In this case, our detailed investigations show that there is some benefit in handling the two kinds of data with two separate systems and then selecting the output score of the system corresponding to the source of the test utterance. Selecting the score of one or the other system requires a channel-source detector.
We propose several novel designs for such automatic detectors. In this framework, we show that JFA's variability factors (of the subspace) can be used successfully to detect the source. This opens the interesting perspective of partitioning the data into automatically determined channel-source categories, avoiding the need for source-labeled training data, which is not always available. The JFA approach yields up to 72% relative cost reduction compared to the GMM-UBM baseline system; using source-specific systems followed by a score selector, we achieve an 81% relative improvement.
