  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Object localization using deformable templates

Spiller, Jonathan Michael, 12 March 2008
Object localization refers to the detection, matching and segmentation of objects in images. The localization model presented in this thesis relies on deformable templates to match objects based on shape alone. The shape structure is captured by a prototype template consisting of hand-drawn edges and contours representing the object to be localized. A multistage, multiresolution algorithm is used to reduce the computational cost of the search. The first stage reduces the physical search-space dimensions using correlation to determine the regions of interest where a match is likely to occur. The second stage finds approximate matches between the template and the target image at progressively finer resolutions, by attracting the template to salient image features using Edge Potential Fields. The third stage uses evolutionary optimization to determine control-point placement for a Local Weighted Mean warp, which deforms the template to fit the object boundaries. Results are presented for a number of applications, showing the successful localization of various objects. The algorithm's invariance to rotation, scale, translation and moderate shape variation of the target objects is clearly illustrated.
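The thesis's exact Edge Potential Field formulation is not given in the abstract, but the idea of attracting a template toward salient edges can be sketched as a distance-based potential: high on edge pixels, decaying smoothly away from them. Everything below (the function name, the decay scale `s`, the exponential fall-off, the toy edge map) is a hypothetical illustration under those assumptions, not the author's implementation:

```python
import numpy as np

def edge_potential_field(edge_map, s=5.0):
    """Attraction field that pulls template points toward nearby edges.

    edge_map : 2-D boolean array, True at edge pixels.
    s        : decay length scale (hypothetical parameter).
    Returns a field equal to 1 on edges, decaying with distance.
    """
    ys, xs = np.nonzero(edge_map)
    pts = np.stack([ys, xs], axis=1)               # (K, 2) edge coordinates
    gy, gx = np.indices(edge_map.shape)
    grid = np.stack([gy, gx], axis=-1)             # (H, W, 2) pixel coordinates
    # Distance from each pixel to its nearest edge pixel (brute force, fine for a toy)
    d = np.sqrt(((grid[:, :, None, :] - pts[None, None, :, :]) ** 2).sum(-1)).min(-1)
    return np.exp(-d / s)

# Toy example: a single vertical edge down column 4
edges = np.zeros((8, 8), dtype=bool)
edges[:, 4] = True
field = edge_potential_field(edges)
```

In the second stage described above, the (negative) gradient of such a field is what would drive the template contour toward nearby image edges during the coarse-to-fine matching.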
2

Detection and classification of multispectral infrared targets

Maire, Florian, 14 February 2014
Surveillance systems should be able to detect potential threats far enough ahead to put a defence strategy in place. In this context, detection and recognition methods operating on multispectral infrared images must cope with low-resolution signals and handle both the spectral and spatial variability of the targets. This thesis introduces novel statistical methods for aircraft detection and classification that satisfy these constraints. We first propose an anomaly detection method designed for multispectral images, which combines a spectral likelihood measure with a level-set study of the image's Mahalanobis transform. This technique identifies images that contain an anomaly without any prior knowledge of the target. These images are then treated as realizations of a statistical model in which the observations fluctuate spectrally and spatially around unknown prototype shapes. The model inference, and in particular the prototype shape estimation, is achieved through a novel unsupervised sequential learning algorithm designed for missing-data models. This model in turn yields a classification algorithm based on the maximum a posteriori estimator. Promising results, in detection as well as in classification, justify the growing interest in the development of multispectral imaging devices. These methods have also allowed us to identify the optimal groupings of infrared spectral bands for detecting and classifying low-resolution aircraft.
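The thesis's Mahalanobis transform is only named in the abstract; detectors of this family typically start from the per-pixel Mahalanobis distance to global background statistics, which is what the sketch below computes. The toy scene, the regularization term and the injected target are illustrative assumptions, not the author's data or method:

```python
import numpy as np

def mahalanobis_map(cube):
    """Per-pixel Mahalanobis distance to the scene's global statistics.

    cube : (H, W, B) multispectral image with B spectral bands.
    Pixels far from the background distribution get large values.
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(B))  # regularize for stability
    diff = X - mu
    # d2[i] = diff[i] @ cov_inv @ diff[i] for every pixel i
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return np.sqrt(d2).reshape(H, W)

# Toy scene: two-band Gaussian background with one anomalous pixel
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 1.0, size=(16, 16, 2))
scene[8, 8] += 10.0                # inject a spectrally anomalous "target"
dmap = mahalanobis_map(scene)
```

Thresholding such a map (the abstract's level-set study is a more principled version of this step) flags candidate target pixels without any prior model of the aircraft.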
3

Left ventricle functional analysis in 2D+t contrast echocardiography within an atlas-based deformable template model framework

Casero Cañas, Ramón, January 2008
This biomedical engineering thesis explores the opportunities and challenges of 2D+t contrast echocardiography for left ventricle functional analysis, both clinically and within a computer vision atlas-based deformable template model framework. A database was created for the experiments in this thesis, with 21 studies of contrast Dobutamine Stress Echo, in all 4 principal planes. The database includes clinical variables, human expert hand-traced myocardial contours and visual scoring. First, the problem is studied from a clinical perspective. Quantification of endocardial global and local function using standard measures shows expected values and agreement with human expert visual scoring, but the results are less reliable for myocardial thickening. Next, the problem of segmenting the endocardium with a computer is posed in a standard landmark and atlas-based deformable template model framework. The underlying assumption is that these models can emulate human experts in terms of integrating previous knowledge about the anatomy and physiology with three sources of information from the image: texture, geometry and kinetics. Probabilistic atlases of contrast echocardiography are computed, while noting from histograms at selected anatomical locations that modelling texture with just mean intensity values may be too naive. Intensity analysis together with the clinical results above suggests that the lack of external boundary definition may preclude this imaging technique from appropriately measuring myocardial thickening, while endocardial boundary definition is appropriate for evaluation of wall motion. Geometry is presented in a Principal Component Analysis (PCA) context, highlighting issues about Gaussianity, the correlation and covariance matrices with respect to physiology, and analysing different measures of dimensionality. A popular extension of deformable models, Active Appearance Models (AAMs), is then studied in depth.
Contrary to common wisdom, it is contended that using a PCA texture space instead of a fixed atlas is detrimental to segmentation, and that PCA models are not convenient for texture modelling. To integrate kinetics, a novel spatio-temporal model of cardiac contours is proposed. The new explicit model does not require frame interpolation, and it is compared to previous implicit models in terms of approximation error when the shape vector changes from frame to frame or remains constant throughout the cardiac cycle. Finally, the 2D+t atlas-based deformable model segmentation problem is formulated and solved with a gradient descent approach. Experiments using the similarity transformation suggest that segmentation of the whole cardiac volume outperforms segmentation of individual frames. A relatively new approach, the inverse compositional algorithm, is shown to decrease running times of the classic Lucas-Kanade algorithm by a factor of 20 to 25, to values within reach of real-time processing.
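The abstract's PCA treatment of contour geometry follows the familiar Point Distribution Model pattern: stack aligned landmark vectors, take the principal components, and describe any contour as the mean shape plus a few deformation modes. The sketch below is a minimal, generic version of that idea; the toy ellipse ensemble and every name in it are hypothetical, not the thesis's cardiac data:

```python
import numpy as np

def fit_shape_pca(shapes, n_modes=2):
    """Point Distribution Model: PCA over aligned landmark vectors.

    shapes  : (N, 2K) array, each row a contour flattened as (x1, y1, ..., xK, yK).
    n_modes : number of deformation modes to keep.
    Returns the mean shape, the (2K, n_modes) mode matrix, and per-mode variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal deformation modes
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes].T, (s ** 2) / (len(shapes) - 1)

def synthesize(mean, modes, b):
    """Reconstruct a shape from mode coefficients b: x = x_mean + P @ b."""
    return mean + modes @ b

# Toy ensemble: ellipses whose two radii vary around a mean circle
K = 20
t = np.linspace(0, 2 * np.pi, K, endpoint=False)
rng = np.random.default_rng(1)
shapes = []
for _ in range(50):
    rx, ry = 1 + 0.2 * rng.standard_normal(2)
    shapes.append(np.column_stack([rx * np.cos(t), ry * np.sin(t)]).ravel())
shapes = np.asarray(shapes)
mean, modes, var = fit_shape_pca(shapes, n_modes=2)
```

Setting all coefficients to zero recovers the mean shape; varying one coefficient sweeps a single learned deformation mode, which is the mechanism AAM-style models use to constrain segmentation to plausible shapes.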
