1

Study of the effects of background and camera motion on the efficacy of Kalman and particle filter algorithms.

Morita, Yasuhiro
This study compares the independent use of two well-known algorithms commonly deployed in object-tracking applications: a Kalman filter with background subtraction, and a particle filter. Object tracking in general is very challenging; it presents numerous problems that an application must address before it can be deployed successfully. These problems range from abrupt object motion during tracking, to changes in the appearance of the scene and the object, to object-to-scene occlusions and camera motion, among others. Several issues must be taken into consideration, such as accounting for the noise associated with the image in question, and the ability to predict, to an acceptable statistical accuracy, the position of the object at a particular time given its current position. This study tackles some of the issues raised above before addressing how the use of either of the aforementioned algorithms minimizes or, in some cases, eliminates their negative effects.
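The abstract above contrasts a Kalman filter (with background subtraction) against a particle filter for tracking. As an illustrative sketch only, not the thesis's implementation, the following shows one predict/update cycle of a linear Kalman filter tracking a 1-D position under a constant-velocity model; the matrices `F`, `H`, `Q`, `R` and the sample measurements are assumptions chosen for the example.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate the state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model: state = [position, velocity]; only position is measured.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model
H = np.array([[1.0, 0.0]])              # measurement model
Q = 0.01 * np.eye(2)                    # process noise
R = np.array([[0.5]])                   # measurement noise

x = np.array([0.0, 0.0])
P = np.eye(2)
# Noisy position measurements of an object moving at roughly unit speed.
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(round(x[0], 2), round(x[1], 2))  # filtered position and velocity estimates
```

The same prediction step also addresses the issue raised in the abstract: given the current state, the filter yields the statistically expected position at the next time step before any new measurement arrives.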
2

Visual attention models for the analysis of dynamic scenes / Spatio-temporal saliency detection in dynamic scenes using color and texture features

Muddamsetty, Satya Mahesh, 07 July 2014
Visual saliency is an important research topic in the field of computer vision due to its numerous possible applications: it helps to focus on regions of interest instead of processing the whole image or video. Detecting visual saliency in still images has been widely addressed in the literature with several formulations. However, visual saliency detection in videos has attracted little attention and is a more challenging task due to the additional temporal information. Indeed, a video contains strong spatio-temporal correlation between the regions of consecutive frames, and, furthermore, the motion of foreground objects dramatically changes the importance of the objects in a scene. The main objective of the thesis is to develop a spatio-temporal saliency method that works well for complex dynamic scenes.
A spatio-temporal saliency map is usually obtained by the fusion of a static saliency map and a dynamic saliency map. In our work, we model the dynamic textures in a dynamic scene with Local Binary Patterns on Three Orthogonal Planes (LBP-TOP) to compute the dynamic saliency map, and we use color features to compute the static saliency map. Both saliency maps are computed using a bio-inspired mechanism of the Human Visual System (HVS) with a discriminant formulation known as center-surround saliency, and are fused in a proper way. The proposed models have been extensively evaluated with diverse publicly available datasets containing several videos of dynamic scenes. The evaluation is performed in two parts: first, locating interesting foreground objects in complex scenes; second, predicting the fixations of human observers. The proposed method is also compared against state-of-the-art methods, and the results show that it achieves competitive results. In this thesis we also evaluate the performance of different fusion techniques, because fusion plays a critical role in the accuracy of the spatio-temporal saliency map. We evaluate different fusion techniques on a large and diverse complex dataset, and the results show that the fusion method must be selected depending on the characteristics, in terms of color and motion contrasts, of a sequence. Overall, fusion techniques which take the best of each saliency map (static and dynamic) in the final spatio-temporal map achieve the best results.
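The pipeline the abstract describes, a static map and a dynamic map fused pixel-wise, can be sketched in a few lines. This is a loose illustration under stated substitutions, not the thesis's method: plain intensity contrast stands in for the color features, a simple frame difference stands in for LBP-TOP dynamic textures, and the `center_surround` helper is a crude global-contrast stand-in for the HVS-inspired discriminant formulation.

```python
import numpy as np

def center_surround(feature):
    """Crude center-surround contrast: each pixel vs. the global surround mean.
    Stand-in for the discriminant HVS-inspired formulation in the thesis."""
    return np.abs(feature - feature.mean())

def normalize(m):
    """Rescale a map to [0, 1]; all-zero maps stay zero."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def spatiotemporal_saliency(prev_gray, curr_gray):
    # Static map: intensity contrast (the thesis uses color features).
    static = center_surround(curr_gray)
    # Dynamic map: frame difference (the thesis uses LBP-TOP dynamic textures).
    dynamic = np.abs(curr_gray - prev_gray)
    s, d = normalize(static), normalize(dynamic)
    # Fusion: pixel-wise max keeps the best of each map, as in the
    # best-performing family of fusion techniques the abstract mentions.
    return np.maximum(s, d)

# Toy 4x4 frames: a bright block that shifts one pixel to the right.
prev_f = np.zeros((4, 4)); prev_f[1:3, 0:2] = 1.0
curr_f = np.zeros((4, 4)); curr_f[1:3, 1:3] = 1.0
sal = spatiotemporal_saliency(prev_f, curr_f)
```

With the max fusion, both the bright block (static contrast) and the pixels it just vacated or entered (temporal change) come out salient, while the static background scores zero.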
