31

Ondelettes pour la détection de caractéristiques en traitement d'images. Application à la détection de région d'intérêt / Wavelets for feature detection in image processing: application to the detection of regions of interest

Damerval, Christophe 07 May 2008 (has links) (PDF)
This image-processing thesis addresses the problem of bringing out certain salient structures, such as the objects we perceive visually. These structures may be one-dimensional, such as contours, or two-dimensional, corresponding to more complex objects. An important problem in computer vision is to detect such structures and to extract characteristic quantities from them. In various applications, such as object recognition, image matching, motion tracking or the enhancement of particular elements, this is a first step before further, higher-level operations. The formulation of effective detectors therefore appears essential. We show that this can be achieved with wavelet decompositions; in particular, it is possible to define certain maxima lines that prove relevant to this problem: on the one hand to detect objects (via regions of interest), and on the other hand to characterize them (computation of Lipschitz regularity and of a characteristic scale). This original detection approach based on maxima lines can then be compared with classical approaches.
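To make the maxima-line idea concrete, here is a minimal Python sketch (an editorial illustration, not code from the thesis): it tracks the strongest response of a scale-normalized derivative-of-Gaussian filter across scales and fits the log-log decay along that maxima line, which is one practical way to estimate a Lipschitz-type regularity exponent and a characteristic scale. The function name and parameter choices are hypothetical.

```python
# Minimal sketch (not the thesis code): estimate a characteristic scale and a
# Lipschitz-type regularity exponent from wavelet-like responses across scales.
# The analyzing wavelet is approximated by a first-order derivative of Gaussian.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def maxima_line_analysis(signal, scales):
    """Track the maximum response near a singularity across scales and fit
    log|W(s)| ~ alpha * log(s) to estimate a regularity-related exponent."""
    peaks = []
    for s in scales:
        # Scale-normalized first derivative of the Gaussian-smoothed signal.
        w = s * gaussian_filter1d(signal, sigma=s, order=1)
        peaks.append(np.max(np.abs(w)))
    peaks = np.asarray(peaks)
    # Slope of the log-log decay along the maxima line.
    alpha = np.polyfit(np.log(scales), np.log(peaks), 1)[0]
    # Characteristic scale: where the normalized response is strongest.
    s_char = scales[int(np.argmax(peaks))]
    return alpha, s_char

# Toy example: a step edge.
x = np.zeros(512); x[256:] = 1.0
print(maxima_line_analysis(x, scales=np.geomspace(1, 32, 12)))
```

For an ideal step edge the fitted exponent should come out close to 0, consistent with Lipschitz regularity 0 at a discontinuity.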
32

Correspondance de maillages dynamiques basée sur les caractéristiques / Feature-based matching of animated meshes

Mykhalchuk, Vasyl 09 April 2015 (has links)
Shape correspondence is a fundamental problem in many research fields, such as computational geometry, computer vision and computer graphics. Commonly defined as the problem of finding an injective or multivalued mapping between a source and a target, it is a central task in many applications, including attribute transfer and shape retrieval. In shape retrieval, one can first compute the correspondence between the query shape and the shapes in a database, and then obtain the best match using a predefined measure of correspondence quality. Shape correspondence is also particularly valuable in applications based on statistical shape modelling: by encapsulating the statistical properties of a subject's anatomy in the shape model, such as geometric variations or density variations, it is useful not only for analysing anatomical structures such as organs or bones and their valid variations, but also for learning the deformation model of a class of objects. In this thesis, we investigate a new shape-matching method that exploits the large redundancy of information available in dynamic, time-varying data sets. Recently, a large amount of research in computer graphics has addressed the establishment of correspondences between static meshes (Anguelov, Srinivasan et al. 2005; Aiger, Mitra et al. 2008; Castellani, Cristani et al. 2008). These methods rely on geometric features or on extrinsic/intrinsic properties of static surfaces (Lipman and Funkhouser 2009; Sun, Ovsjanikov et al. 2009; Ovsjanikov, Mérigot et al. 2010; Kim, Lipman et al. 2011) to efficiently prune candidate pairs. Although the use of geometric features is still a gold standard, methods relying only on static shape information can produce grossly misleading correspondence results when the shapes are radically different or do not contain enough geometric features. [...] / 3D geometry modelling tools and 3D scanners are becoming more capable and more affordable. The development of new algorithms in geometry processing, shape analysis and shape correspondence is therefore gathering momentum in computer graphics; these algorithms steadily extend and increasingly replace prevailing methods based on images and videos. Non-rigid shape correspondence, or deformable shape matching, has long been studied in computer graphics and related research fields. Shape correspondence is also of wide use in many applications such as statistical shape analysis, motion cloning, texture transfer and medical imaging. However, robust and efficient non-rigid shape correspondence remains a challenging task due to fundamental variations between individual subjects, acquisition noise and the number of degrees of freedom involved in the correspondence search. Although the dynamic 2D/3D intra-subject shape correspondence problem has been addressed by a rich set of previous methods, dynamic inter-subject shape correspondence has received much less attention.
The primary purpose of our research is to develop a novel, efficient and robust deforming-shape analysis and correspondence framework for animated meshes based on their dynamic and motion properties. We elaborate our method by exploiting the rich set of motion data exhibited by deforming meshes with time-varying embedding. Our approach is based on the observation that a dynamic, deforming shape of a given subject contains much more information than a single static posture of it, which differs from existing methods that rely on static shape information for correspondence and analysis. Our framework for the analysis and correspondence of animated meshes comprises several major contributions: a new dynamic feature detection technique based on the deformation characteristics of the animated mesh at multiple scales, a novel dynamic feature descriptor, and an adaptation of a robust graph-based feature correspondence approach followed by fine matching of the animated meshes. [...]
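As a rough illustration of what a motion-based saliency measure on an animated mesh can look like (an assumption made for this listing, not the descriptor proposed in the thesis), the following sketch scores each vertex by the temporal variability of its incident edge lengths; strongly deforming regions receive high scores and could serve as candidates for dynamic feature detection.

```python
# Illustrative sketch (assumed, not the thesis pipeline): per-vertex deformation
# saliency of an animated mesh, measured as the variability of incident edge
# lengths over time.
import numpy as np

def deformation_scores(frames, edges):
    """frames: (T, V, 3) vertex positions over T frames; edges: (E, 2) indices.
    Returns a per-vertex score: mean std-dev of incident edge lengths over time."""
    frames = np.asarray(frames, dtype=float)
    i, j = edges[:, 0], edges[:, 1]
    lengths = np.linalg.norm(frames[:, i, :] - frames[:, j, :], axis=2)  # (T, E)
    edge_var = lengths.std(axis=0)                                       # (E,)
    scores = np.zeros(frames.shape[1])
    counts = np.zeros(frames.shape[1])
    for (a, b), v in zip(edges, edge_var):
        scores[a] += v; scores[b] += v
        counts[a] += 1; counts[b] += 1
    return scores / np.maximum(counts, 1)

# Toy example: two triangles, with one vertex moving between frames.
edges = np.array([[0, 1], [1, 2], [2, 0], [2, 3]])
f0 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
f1 = f0.copy(); f1[3, 2] = 0.5   # vertex 3 moves out of plane
print(deformation_scores(np.stack([f0, f1]), edges))
```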
33

Anatomy of the SIFT method / L'Anatomie de la méthode SIFT

Rey Otero, Ives 26 September 2015 (has links)
This thesis is an in-depth analysis of the SIFT method, the most popular image comparison method. By proposing a sampling of the Gaussian scale-space, it was also the first method to put scale-space theory into practice and to make use of its invariance to scale changes. SIFT associates with an image a set of descriptors that are invariant to scale changes, rotation and translation. The descriptors of different images can be compared in order to match the images. Given its many applications and countless variants, studying an algorithm published a decade ago may come as a surprise. It appears, however, that little has been done to really understand this major algorithm and to establish rigorously to what extent it can be improved for high-precision applications. The study is divided into four parts. The exact computation of the Gaussian scale-space, which is at the heart of the SIFT method and of most of its competitors, is the subject of the first part. The second part is a meticulous dissection of the long chain of transformations that constitutes the SIFT method; every parameter is documented and its influence analysed. This dissection is also associated with an online publication of the algorithm: the detailed description is accompanied by C code and a demonstration platform that lets the reader analyse the influence of each parameter. In the third part, we define an exact experimental framework to verify that the SIFT method reliably and stably detects the extrema of the continuous scale-space from the discrete grid. Practical conclusions follow on how to sample the Gaussian scale-space properly and on strategies for filtering out unstable points. The same experimental framework is used to analyse the influence of image perturbations (aliasing, noise, blur). This analysis shows that the margin for improvement is small for the SIFT method, as well as for all its variants that rely on the scale-space to extract interest points. It also shows that oversampling the scale-space improves the extraction of extrema, and that restricting detection to the coarser scales improves robustness to image perturbations. The last part deals with the performance evaluation of keypoint detectors. The most commonly used performance metric is repeatability. We show that this metric nevertheless suffers from a bias and favours methods that generate redundant detections. To eliminate this bias, we propose a variant that takes the spatial distribution of detections into account. Using this correction we re-evaluate the state of the art and show that, once detection redundancy is taken into account, the SIFT method is better than many of its most modern variants. / This dissertation contributes to an in-depth analysis of the SIFT method. SIFT is the most popular and the first efficient image comparison model. SIFT is also the first method to propose a practical scale-space sampling and to put into practice the theoretical scale invariance in scale space. It associates with each image a list of scale-invariant (also rotation- and translation-invariant) features which can be used for comparison with other images.
Because SIFT-like feature detectors have since been used in countless image processing applications, and because of an intimidating number of variants, studying an algorithm that was published more than a decade ago may be surprising. It seems, however, that not much has been done to really understand this central algorithm and to find out exactly what improvements we can hope for on the matter of reliable image matching methods. Our analysis of the SIFT algorithm is organized as follows. We focus first on the exact computation of the Gaussian scale-space which is at the heart of SIFT as well as most of its competitors. We provide a meticulous dissection of the complex chain of transformations that form the SIFT method and a presentation of every design parameter, from the extraction of invariant keypoints to the computation of feature vectors. Using this documented implementation, which permits varying all of its parameters, we define a rigorous simulation framework to find out whether the scale-space features are indeed correctly detected by SIFT, and which sampling parameters influence the stability of extracted keypoints. This analysis is extended to the influence of other crucial perturbations, such as errors on the amount of blur, aliasing and noise. This analysis demonstrates that, despite the fact that numerous methods claim to outperform the SIFT method, there is in fact limited room for improvement in methods that extract keypoints from a scale-space. The comparison of the many detectors proposed in SIFT's competitors is the subject of the last part of this thesis. The performance analysis of local feature detectors has mainly been based on the repeatability criterion. We show that this popular criterion is biased toward methods producing redundant (overlapping) descriptors. We therefore propose an amended evaluation metric and use it to revisit a classic benchmark. Under the amended repeatability criterion, SIFT is shown to outperform most of its more recent competitors. This corroborates the unabating interest in SIFT and the necessity of a thorough scrutiny of this method.
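A minimal sketch of the Gaussian/DoG scale-space sampling discussed above (illustrative only: a full SIFT implementation typically uses incremental blurring within each octave and seeds the next octave from a previously blurred level, whereas this sketch blurs directly from the input for simplicity). Parameter names such as sigma_min and n_spo are generic choices, not the thesis's notation.

```python
# Sketch of a difference-of-Gaussians (DoG) scale-space, as used by SIFT-like
# keypoint detectors: n_spo + 3 Gaussian levels per octave give n_spo + 2 DoG
# slices, and the image is downsampled by 2 between octaves.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def dog_octave(image, sigma_min=0.8, n_spo=3):
    """Return the DoG slices of one octave and the blur levels used."""
    sigmas = sigma_min * 2.0 ** (np.arange(n_spo + 3) / n_spo)
    gauss = [gaussian_filter(image, s) for s in sigmas]
    return [g2 - g1 for g1, g2 in zip(gauss, gauss[1:])], sigmas

def scale_space(image, n_octaves=4, **kw):
    """Stack octaves, halving the resolution between them."""
    pyramid = []
    for _ in range(n_octaves):
        dogs, sigmas = dog_octave(image, **kw)
        pyramid.append((dogs, sigmas))
        image = zoom(image, 0.5, order=1)   # downsample by 2 for the next octave
    return pyramid

img = np.random.rand(128, 128)
pyr = scale_space(img)
print(len(pyr), len(pyr[0][0]))   # 4 octaves, n_spo + 2 DoG slices each
```

Keypoint candidates would then be taken as local extrema of the DoG stack over space and scale, which is exactly the detection step whose stability the thesis analyses.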
34

Scale Selection Properties of Generalized Scale-Space Interest Point Detectors

Lindeberg, Tony January 2013 (has links)
Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features presented in Lindeberg (Int. J. Comput. Vis. 2010, under revision) and comprising: an enriched set of differential interest operators at a fixed scale including the Laplacian operator, the determinant of the Hessian, the new Hessian feature strength measures I and II and the rescaled level curve curvature operator, as well as an enriched set of scale selection mechanisms including scale selection based on local extrema over scale, complementary post-smoothing after the computation of non-linear differential invariants and scale selection based on weighted averaging of scale values along feature trajectories over scale. A theoretical analysis of the sensitivity to affine image deformations is presented, and it is shown that the scale estimates obtained from the determinant of the Hessian operator are affine covariant for an anisotropic Gaussian blob model. Among the other purely second-order operators, the Hessian feature strength measure I has the lowest sensitivity to non-uniform scaling transformations, followed by the Laplacian operator and the Hessian feature strength measure II. The predictions from this theoretical analysis agree with experimental results of the repeatability properties of the different interest point detectors under affine and perspective transformations of real image data. A number of less complete results are derived for the level curve curvature operator. / Image descriptors and scale-space theory for spatial and spatio-temporal recognition
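The scale-selection principle behind these operators can be illustrated with a short sketch (not code from the paper): the scale-normalized Laplacian response of a Gaussian blob, maximized over scale, selects a scale close to the blob's own width.

```python
# Hedged sketch of scale selection: the magnitude of the scale-normalized
# Laplacian, sigma^2 * (L_xx + L_yy), maximized over scale. For a Gaussian blob
# of width sigma0, the selected scale should come out near sigma0.
import numpy as np
from scipy.ndimage import gaussian_laplace

def select_scale(image, sigmas):
    """Return the sigma at which the strongest |scale-normalized Laplacian|
    response over the image is attained, plus the per-scale responses."""
    responses = [np.max(np.abs((s ** 2) * gaussian_laplace(image, s)))
                 for s in sigmas]
    return sigmas[int(np.argmax(responses))], responses

# Toy blob of width sigma0 = 6 pixels.
yy, xx = np.mgrid[:128, :128]
sigma0 = 6.0
blob = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * sigma0 ** 2))
sel, _ = select_scale(blob, sigmas=np.linspace(2, 12, 21))
print(sel)   # expected to be close to sigma0
```

The determinant-of-the-Hessian and Hessian-feature-strength measures discussed in the abstract follow the same pattern, only with different fixed-scale operators plugged into the search over scale.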
35

Multiscale Active Contour Methods in Computer Vision with Applications in Tomography

Alvino, Christopher Vincent 10 April 2005 (has links)
Most applications in computer vision suffer from two major difficulties. The first is that they are notoriously ridden with sub-optimal local minima. The second is that they typically require high computational cost to be solved robustly. The reason for these two drawbacks is that most problems in computer vision, even when well-defined, typically require finding a solution in a very large high-dimensional space. It is for these two reasons that multiscale methods are particularly well-suited to problems in computer vision. Multiscale methods, by way of looking at the coarse-scale nature of a problem before considering the fine-scale nature, often have the ability to avoid sub-optimal local minima and obtain a more globally optimal solution. In addition, multiscale methods typically enjoy reduced computational cost. This thesis applies novel multiscale active contour methods to several problems in computer vision, especially the simultaneous segmentation and reconstruction of tomography images. In addition, novel multiscale methods are applied to contour registration using minimal surfaces and to the computation of non-linear rotationally invariant optical flow. Finally, a methodology for fast robust image segmentation is presented that relies on a lower-dimensional image basis derived from an image scale space. The specific advantages of using multiscale methods in each of these problems are highlighted in the various simulations throughout the thesis, particularly their ability to avoid sub-optimal local minima and to solve the problems at a lower overall computational cost.
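The coarse-to-fine benefit described above can be illustrated on a much simpler problem than tomographic segmentation (a swapped-in toy example, not the thesis's active-contour solver): estimating a translation between two images by searching a small window at each level of an image pyramid, starting from the coarsest level, which keeps the search cheap and helps avoid poor local optima.

```python
# Illustrative coarse-to-fine sketch: multiresolution translation estimation.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom, shift as nd_shift

def pyramid(img, levels=4):
    out = [img]
    for _ in range(levels - 1):
        out.append(zoom(gaussian_filter(out[-1], 1.0), 0.5, order=1))
    return out[::-1]   # coarsest level first

def register_translation(fixed, moving, levels=4, radius=2):
    est = np.zeros(2)
    for f, m in zip(pyramid(fixed, levels), pyramid(moving, levels)):
        est *= 2   # propagate the estimate to the next finer level
        best = None
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = est + (dy, dx)
                err = np.sum((f - nd_shift(m, cand, order=1)) ** 2)
                if best is None or err < best[0]:
                    best = (err, cand)
        est = best[1]
    return est

a = np.random.rand(64, 64)
b = nd_shift(a, (3, -5), order=1)
print(register_translation(a, b))   # shift mapping b back onto a, close to (-3, 5)
```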
36

3D imaging and nonparametric function estimation methods for analysis of infant cranial shape and detection of twin zygosity

Vuollo, V. (Ville) 17 April 2018 (has links)
The use of 3D imaging of craniofacial soft tissue has increased in medical science, and imaging technology has developed greatly in recent years. 3D models are quite accurate and, with imaging devices based on stereophotogrammetry, capturing the data is a quick and easy operation for the subject. However, analyzing 3D models of the face or head can be challenging and there is a growing need for efficient quantitative methods. In this thesis, new mathematical methods and tools for measuring craniofacial structures are developed. The thesis is divided into three parts. In the first part, facial 3D data of Lithuanian twins are used for the determination of zygosity. Statistical pattern recognition methodology is used for classification and the results are compared with DNA testing. In the second part of the thesis, the distribution of surface normal vector directions of a 3D infant head model is used to analyze skull deformation. The levels of flatness and asymmetry are quantified by functionals of the kernel density estimate of the normal vector directions. Using 3D models from infants at the age of three months and clinical ratings made by experts, this novel method is compared with some previously suggested approaches. The method is also applied to clinical longitudinal research in which 3D images from three different time points are analyzed to find the course of positional cranial deformation and associated risk factors. The final part of the thesis introduces a novel statistical scale-space method, SphereSiZer, for exploring the structures of a probability density function defined on the unit sphere. The tools developed in the second part are used for the implementation of SphereSiZer. In SphereSiZer, the scale-dependent features of the density are visualized by projecting the statistically significant gradients onto a planar contour plot of the density function. The method is tested by analyzing samples of surface unit normal vector data of an infant head as well as data from simulated spherical densities. The results and examples of the study show that the proposed novel methods perform well. The methods can be extended and developed in further studies. Cranial and facial 3D models will offer many opportunities for the development of new and sophisticated analytical methods in the future. / 3D imaging of the soft tissue of the head and face has become more common in medicine, and the technology required for it has developed considerably in recent years. 3D models are quite accurate, and imaging with a device based on stereophotogrammetry is a quick and easy procedure for the person being imaged. Analysing 3D models of the face and head can nevertheless be challenging, and the need for efficient quantitative methods has grown. This dissertation develops new mathematical methods and tools for measuring craniofacial structures. The work is divided into three parts. The first part seeks to determine the zygosity of Lithuanian twins on the basis of facial 3D data. The classification makes use of statistical pattern recognition, and the results are compared with DNA test results. The second part analyses head deformations in infants by means of a distribution based on the directions of the surface normal vectors computed from 3D images of their heads. The degree of flatness and asymmetry is measured with functionals of the kernel estimate of the normal vector directions. The developed method is compared with some previously proposed approaches by measuring 3D models of three-month-old infants and examining clinical scores given by experts. The method is also applied to a clinical longitudinal study that examines the development of cranial deformations and the associated risk factors from 3D images taken at three different time points. The last part introduces SphereSiZer, a new statistical scale-space method for studying the structures of a density function on the unit sphere. The tools developed in the second part are applied in the implementation of SphereSiZer. In SphereSiZer, the features of the density function at different scales are visualized by projecting the statistically significant gradients onto a contour map of the density function. The method is applied to surface normal vector data of an infant head and to simulated samples drawn from spherical densities. Based on the results and examples, the new methods presented in the dissertation perform well. The methods can also be further developed and extended in follow-up studies. 3D models of the head and face will offer many opportunities for the development of new, high-quality analysis tools in later research.
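A minimal sketch of a kernel density estimate of unit normal directions on the sphere (an editorial illustration under stated assumptions, not the thesis implementation): a von Mises-Fisher kernel is used, and the largest density value serves as a crude flatness score, since a flat patch concentrates many normals around one direction.

```python
# Sketch: von Mises-Fisher kernel density estimate on the unit sphere, applied
# to surface normal directions; the peak density acts as a simple flatness score.
import numpy as np

def vmf_kde(normals, query, kappa=50.0):
    """normals: (N, 3) unit vectors (data); query: (M, 3) unit vectors.
    Returns the vMF kernel density estimate at the query directions."""
    c = kappa / (4 * np.pi * np.sinh(kappa))        # vMF normalizing constant on S^2
    cos = np.clip(query @ normals.T, -1.0, 1.0)     # (M, N) dot products
    return c * np.exp(kappa * cos).mean(axis=1)

rng = np.random.default_rng(0)
# Toy data: normals clustered around +z (a flat patch) plus diffuse noise.
flat = rng.normal([0, 0, 1], 0.05, size=(500, 3))
noise = rng.normal(size=(100, 3))
data = np.vstack([flat, noise])
data /= np.linalg.norm(data, axis=1, keepdims=True)
grid = rng.normal(size=(2000, 3))
grid /= np.linalg.norm(grid, axis=1, keepdims=True)
print("flatness score:", vmf_kde(data, grid).max())
```

The concentration parameter kappa plays the role of the (inverse) smoothing scale; sweeping it over a range of values is what a scale-space exploration such as SphereSiZer builds on.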
37

Discrete Scale-Space Theory and the Scale-Space Primal Sketch

Lindeberg, Tony January 1991 (has links)
This thesis, within the subfield of computer science known as computer vision, deals with the use of scale-space analysis in early low-level processing of visual information. The main contributions comprise the following five subjects:
- The formulation of a scale-space theory for discrete signals. Previously, the scale-space concept has been expressed for continuous signals only. We propose that the canonical way to construct a scale-space for discrete signals is by convolution with a kernel called the discrete analogue of the Gaussian kernel, or equivalently by solving a semi-discretized version of the diffusion equation. Both the one-dimensional and two-dimensional cases are covered. An extensive analysis of discrete smoothing kernels is carried out for one-dimensional signals, and the discrete scale-space properties of the most common discretizations of the continuous theory are analysed.
- A representation, called the scale-space primal sketch, which gives a formal description of the hierarchical relations between structures at different levels of scale. It is aimed at making information in the scale-space representation explicit. We give a theory for its construction and an algorithm for computing it.
- A theory for extracting significant image structures and determining the scales of these structures from this representation in a solely bottom-up, data-driven way.
- Examples demonstrating how such qualitative information extracted from the scale-space primal sketch can be used for guiding and simplifying other early visual processes. Applications are given to edge detection, histogram analysis and classification based on local features. Among other possible applications one can mention perceptual grouping, texture analysis, stereo matching, model matching and motion.
- A detailed theoretical analysis of the evolution properties of critical points and blobs in scale-space, comprising drift velocity estimates under scale-space smoothing, a classification of the possible types of generic events at bifurcation situations, and estimates of how the number of local extrema in a signal can be expected to decrease as a function of the scale parameter. For two-dimensional signals the generic bifurcation events are annihilations and creations of extremum-saddle point pairs. Interpreted in terms of blobs, these transitions correspond to annihilations, merges, splits and creations.
Experiments on different types of real imagery demonstrate that the proposed theory gives perceptually intuitive results.
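The discrete analogue of the Gaussian kernel mentioned above, T(n; t) = exp(-t) * I_n(t) with I_n a modified Bessel function of integer order, is simple to construct. The following sketch (illustrative, not the thesis code) smooths a signal with it and checks that the variance of a smoothed impulse grows linearly with the scale parameter t, as a diffusion-type scale-space should.

```python
# Sketch of the discrete analogue of the Gaussian kernel and the corresponding
# discrete scale-space smoothing (equivalently, the solution of a
# semi-discretized diffusion equation).
import numpy as np
from scipy.special import ive   # ive(n, t) = exp(-t) * I_n(t) for t > 0

def discrete_gaussian_smooth(signal, t, radius=None):
    if radius is None:
        radius = int(np.ceil(4 * np.sqrt(t))) + 1   # enough support for scale t
    n = np.arange(-radius, radius + 1)
    kernel = ive(np.abs(n), t)          # discrete Gaussian, sums to ~1
    kernel /= kernel.sum()              # renormalize the truncated kernel
    return np.convolve(signal, kernel, mode="same")

x = np.zeros(64); x[32] = 1.0           # unit impulse
for t in (1.0, 4.0, 16.0):
    y = discrete_gaussian_smooth(x, t)
    # The variance of the smoothed impulse grows linearly with t.
    print(t, np.sum((np.arange(64) - 32) ** 2 * y))
```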
38

Popis objektů v obraze / Object Description in Images

Dvořák, Pavel January 2011 (has links)
This thesis deals with the description of segments identified in an image. First, the main segmentation methods are described, since segmentation is the process that precedes object description. The next chapter is devoted to methods for describing the identified regions; it reviews algorithms used for characterizing different features, with parts devoted to colour, location, size, orientation, shape and topology, and ends with a section on moments. The following chapters focus on designing suitable algorithms for segment description and for creating XML files according to the MPEG-7 standard, and on their implementation in RapidMiner. The last chapter describes the results of the implementation.
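As an illustration of moment-based region description (a sketch written for this listing, not the thesis implementation and not MPEG-7 or RapidMiner code), the following computes the area, centroid and principal-axis orientation of a binary segment from its raw and central moments.

```python
# Illustrative sketch: basic region descriptors from image moments of a binary
# segment - area, centroid, and orientation from second-order central moments.
import numpy as np

def region_descriptors(mask):
    ys, xs = np.nonzero(mask)
    area = len(xs)                                   # number of pixels
    cx, cy = xs.mean(), ys.mean()                    # centroid
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal-axis orientation
    return {"area": area, "centroid": (cx, cy), "orientation_rad": theta}

# Toy example: an elongated blob tilted at roughly 45 degrees.
mask = np.zeros((64, 64), dtype=bool)
yy, xx = np.mgrid[:64, :64]
mask[(np.abs((xx - 32) + (yy - 32)) < 20) & (np.abs((xx - 32) - (yy - 32)) < 6)] = True
print(region_descriptors(mask))
```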
39

Multiscale object-specific analysis: an integrated hierarchical approach for landscape ecology

Hay, Geoffrey J. January 2002 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
40

Vyhledání význačných bodů v rastrovém obraze / Searching for Points of Interest in Raster Image

Kaněčka, Petr Unknown Date (has links)
This document deals with methods for detecting points of interest in an image, especially corner detectors. Many computer vision applications need these points as a necessary step in image processing. The document explains why finding these points is useful, presents some basic detection methods, and compares the properties of these methods at the end.
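One of the classic corner detectors the text alludes to is the Harris detector; here is a minimal sketch (an editorial illustration, not code from the thesis): the corner response det(M) - k * trace(M)^2 is computed from the Gaussian-smoothed structure tensor M of the image gradients.

```python
# Minimal Harris corner detector sketch.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, sigma=1.5, k=0.04):
    ix = sobel(image, axis=1, mode="reflect")
    iy = sobel(image, axis=0, mode="reflect")
    # Structure tensor entries, smoothed over a Gaussian window.
    sxx = gaussian_filter(ix * ix, sigma)
    syy = gaussian_filter(iy * iy, sigma)
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2     # large positive values indicate corners

# Toy example: a white square on a black background has four strong corners.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
r = harris_response(img)
print(np.unravel_index(np.argmax(r), r.shape))   # near one of the square's corners
```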
