21

Monocular and Binocular Visual Tracking

Salama, Gouda Ismail Mohamed 06 January 2000 (has links)
Visual tracking is one of the most important applications of computer vision. Several tracking systems have been developed which either focus mainly on the tracking of targets moving on a plane, or attempt to reduce the 3-dimensional tracking problem to the tracking of a set of characteristic points of the target. These approaches are seriously handicapped in complex visual situations, particularly those involving significant perspective, textures, repeating patterns, or occlusion. This dissertation describes a new approach to visual tracking for monocular and binocular image sequences, and for both passive and active cameras. The method combines Kalman-type prediction with steepest-descent search for correspondences, using 2-dimensional affine mappings between images. This approach differs significantly from many recent tracking systems, which emphasize the recovery of 3-dimensional motion and/or structure of objects in the scene. We argue that 2-dimensional area-based matching is sufficient in many situations of interest, and we present experimental results with real image sequences to illustrate the efficacy of this approach. Image matching between two images is a simple one-to-one mapping if there is no occlusion; in the presence of occlusion, incorrect matches are inevitable, and few approaches have been developed to address this issue. This dissertation considers the effect of occlusion on tracking a moving object for both monocular and binocular image sequences. The visual tracking system described here attempts to detect occlusion based on the residual error computed by the matching method. If the residual matching error exceeds a user-defined threshold, this indicates that the tracked object may be occluded by another object. When occlusion is detected, tracking continues with the predicted locations based on Kalman filtering, which serves as a predictor of the target position until the target reemerges from the occlusion. Although the method uses a constant-image-velocity Kalman filter, it has been shown to function reasonably well in non-constant-velocity situations. Experimental results show that tracking can be maintained during periods of substantial occlusion. The area-based approach to image matching often involves correlation-based comparisons between images, and this requires the specification of a size for the correlation windows. Accordingly, a new approach based on moment invariants was developed to select the window size adaptively. This approach is based on sudden increases or decreases in the first Maitra moment invariant, and a robust regression model is applied to smooth this invariant and make the method robust against noise. This dissertation also considers the effect of spatial quantization on several moment invariants. Of particular interest are the affine moment invariants, which have emerged in recent years as a useful tool for image reconstruction, image registration, and recognition of deformed objects. Traditional analysis assumes moments and moment invariants for images that are defined in the continuous domain. Quantization of the image plane is necessary because otherwise the image cannot be processed digitally. Image acquisition by a digital system imposes spatial and intensity quantization that, in turn, introduces errors into moment and invariant computations. This dissertation also derives expressions for quantization-induced error in several important cases.
Although it considers spatial quantization only, this represents an important extension of work by other researchers. A mathematical theory for visual tracking of a moving object is presented in this dissertation. The approach can track a moving object in an image sequence whether the camera is passive or actively controlled. The algorithm used here is computationally cheap and suitable for real-time implementation. We implemented the proposed method on an active vision system and carried out monocular and binocular tracking experiments on various kinds of objects in different environments. These experiments demonstrated very good performance on real images in fairly complicated situations. / Ph. D.
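The occlusion-handling strategy described above lends itself to a compact illustration. The sketch below shows a constant-velocity Kalman filter that coasts on its own prediction whenever the matcher's residual error exceeds a threshold; it is only a minimal reading of the abstract, and the matcher interface, noise levels, and threshold value are assumptions, not the dissertation's actual parameters.

```python
import numpy as np

# Minimal constant-velocity Kalman tracker with residual-based occlusion handling.
# Illustrative sketch only; the matcher, threshold and noise levels are assumptions.

class CVKalmanTracker:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])           # state: position and velocity
        self.P = np.eye(4) * 10.0                       # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float) # constant-velocity model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)  # only position is observed
        self.Q = np.eye(4) * q                          # process noise
        self.R = np.eye(2) * r                          # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                               # predicted (x, y)

    def update(self, z):
        y = z - self.H @ self.x                         # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def track(frames, matcher, tracker, occlusion_threshold=0.5):
    """matcher(frame, predicted_xy) -> (measured_xy, residual_error) is assumed."""
    trajectory = []
    for frame in frames:
        pred = tracker.predict()
        z, residual = matcher(frame, pred)
        if residual <= occlusion_threshold:
            tracker.update(np.asarray(z))               # normal tracking
        # else: likely occluded -> keep coasting on the Kalman prediction
        trajectory.append(tracker.x[:2].copy())
    return trajectory
```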
22

Algoritmos de casamento de imagens com filtragem adaptativa de outliers / Image matching algorithms with adaptive filtering of outliers.

Ramos, Jonathan da Silva 01 December 2016 (has links)
O registro de imagens tem um papel importante em várias aplicações, tais como reconstrução de objetos 3D, reconhecimento de padrões, imagens microscópicas, entre outras. Este registro é composto por três passos principais: (1) seleção de pontos de interesse; (2) extração de características dos pontos de interesse; (3) correspondência entre os pontos de interesse de uma imagem para a outra. Para os passos 1 e 2, algoritmos como SIFT e SURF têm apresentado resultados satisfatórios. Entretanto, para o passo 3 ocorre a presença de outliers, ou seja, pontos de interesse que foram incorretamente correspondidos. Uma única correspondência incorreta leva a um resultado final indesejável. Os algoritmos para remoção de outliers (consenso) possuem um alto custo computacional, que cresce à medida que a quantidade de outliers aumenta. Com o objetivo de reduzir o tempo de processamento necessário por esses algoritmos, o algoritmo FOMP (do inglês, Filtering out Outliers from Matched Points) foi proposto e desenvolvido neste trabalho para realizar a filtragem de outliers no conjunto de pontos inicialmente correspondidos. O método FOMP considera cada conjunto de pontos como um grafo completo, no qual os pesos são as distâncias entre os pontos. Por meio da soma de diferenças entre os pesos das arestas, o vértice que apresentar maior valor é removido. Para validar o método FOMP, foram realizados experimentos utilizando quatro bases de imagens. Cada base apresenta características intrínsecas: (a) diferenças de rotação ou zoom da câmera; (b) padrões repetitivos, os quais geram duplicidade nos vetores de características; (c) objetos deformados, tais como plásticos, papéis ou tecidos; (d) transformações afins (diferentes pontos de vista). Os experimentos realizados mostraram que o filtro FOMP remove mais de 65% dos outliers, enquanto mantém cerca de 98% dos inliers. A abordagem proposta mantém a precisão dos métodos de consenso, enquanto reduz o tempo de processamento pela metade para os métodos baseados em grafos. / Image matching plays a major role in many applications, such as pattern recognition and microscopic imaging. It encompasses three steps: 1) interest point selection; 2) feature extraction from each point; 3) feature point matching. For steps 1 and 2, traditional interest point detectors/extractors have worked well. However, for step 3 even a few incorrectly matched points (outliers) might lead to an undesirable result. State-of-the-art consensus algorithms present a high time cost as the number of outliers increases. Aiming at overcoming this problem, we present FOMP, a preprocessing approach that reduces the number of outliers in the initial set of matched points. FOMP filters out the vertices that present a higher difference among their edges in a complete graph representation of the points. To validate the proposed method, experiments were performed with four image databases: (a) variations of rotation or camera zoom; (b) repetitive patterns, which lead to duplicate feature vectors; (c) deformable objects, such as plastics, clothes or papers; (d) affine transformations (different viewpoints). The experimental results showed that FOMP removes more than 65% of the outliers, while keeping over 98% of the inliers. Moreover, the precision of traditional methods is kept, while reducing the processing time of graph-based approaches by half.
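The abstract describes FOMP as treating the matched points of each image as a complete graph whose edge weights are inter-point distances, and removing the vertex whose edge weights disagree most between the two graphs. The sketch below is one plausible reading of that description, not the authors' implementation; the distance normalization, stopping rule, and minimum point count are assumptions added for illustration.

```python
import numpy as np

# Sketch of an FOMP-style outlier filter based on the abstract's description:
# build a complete graph over the matched points of each image with inter-point
# distances as edge weights, then repeatedly drop the vertex whose edge weights
# disagree most between the two graphs.

def pairwise_distances(pts):
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def fomp_filter(pts_a, pts_b, rel_tol=0.5, min_points=8):
    """pts_a, pts_b: (N, 2) arrays of matched keypoint coordinates."""
    keep = np.arange(len(pts_a))
    while len(keep) > min_points:
        da = pairwise_distances(pts_a[keep])
        db = pairwise_distances(pts_b[keep])
        da /= da.mean()                              # tolerate a global scale change
        db /= db.mean()                              # (an assumption, not in the paper)
        # per-vertex disagreement: sum of edge-weight differences to all other vertices
        score = np.abs(da - db).sum(axis=1)
        worst = int(np.argmax(score))
        # stop once the worst vertex no longer stands out from the rest (heuristic)
        if score[worst] <= (1.0 + rel_tol) * np.median(score):
            break
        keep = np.delete(keep, worst)
    return keep                                      # indices of matches kept as inliers
```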
23

Binär matchning av bilder med hjälp av vektorer från den euklidiska avståndstransformen / Binary matching on images using the Euclidean Distance Transform

Hjelm Andersson, Patrick January 2004 (has links)
This thesis presents results from investigations of methods that use distance vectors when matching images. The distance vectors are available in a distance map made by the Euclidean Distance Transform. The investigated methods use the two characteristic features of the distance vector when matching images: length and direction. The length of the vector is used to calculate a value of how good a match is, and the direction of the vector is used to predict a transformation to get a better match. The results show that the number of calculation steps used during a search can be reduced compared to matching methods that only use the distance during the matching.
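A small sketch of the idea described above may help: the Euclidean Distance Transform provides, for every pixel, both the distance to the nearest edge pixel (used to score a match) and the vector toward it (used to suggest a better placement). The function names and the averaging scheme are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Distance-vector matching sketch: the EDT distance at each template edge point
# scores the match, and the vector toward the nearest image edge suggests how to
# shift the template for a better fit.

def edt_with_vectors(edge_mask):
    """edge_mask: boolean image, True on edge pixels."""
    # distance_transform_edt measures distance to the nearest zero, so invert the mask
    dist, nearest = distance_transform_edt(~edge_mask, return_indices=True)
    rows, cols = np.indices(edge_mask.shape)
    # vector from each pixel to its nearest edge pixel, stored as (dy, dx) planes
    vectors = np.stack([nearest[0] - rows, nearest[1] - cols])
    return dist, vectors

def match_at(dist, vectors, template_points, offset):
    """Score a template placement and suggest a refining translation."""
    pts = (template_points + np.asarray(offset)).astype(int)   # (N, 2) row/col coords
    r, c = pts[:, 0], pts[:, 1]
    score = dist[r, c].mean()                                  # lower is better
    shift = vectors[:, r, c].mean(axis=1)                      # average pull (dy, dx)
    return score, shift
```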
24

Binär matchning av bilder med hjälp av vektorer från den euklidiska avståndstransformen / Binary matching on images using the Euclidean Distance Transform

Hjelm Andersson, Patrick January 2004 (has links)
This thesis presents results from investigations of methods that use distance vectors when matching images. The distance vectors are available in a distance map made by the Euclidean Distance Transform. The investigated methods use the two characteristic features of the distance vector when matching images: length and direction. The length of the vector is used to calculate a value of how good a match is, and the direction of the vector is used to predict a transformation to get a better match. The results show that the number of calculation steps used during a search can be reduced compared to matching methods that only use the distance during the matching.
25

Alignement élastique d'images pour la reconnaissance d'objet

Duchenne, Olivier 29 November 2012 (has links) (PDF)
The objective of this thesis is to explore the use of graph matching in object recognition systems. In the continuity of the previously described articles, rather than using descriptors invariant to misalignment, this work directly tries to find explicit correspondences between prototypes and test images, in order to build a robust similarity measure and infer the class of the test images. In chapter 2, we will present a method that, given interest points in two images, tries to find correspondences between them. It extends previous graph matching approaches [Leordeanu and Hebert, 2005a] to handle interactions between more than two feature correspondences. This allows us to build a more discriminative and/or more invariant matching method. The main contributions of this chapter are: The introduction of a high-order objective function for hyper-graph matching (Section 2.3.1). The application of the tensor power iteration method to the high-order matching task, combined with a relaxation based on constraints on the row norms of assignment matrices, which is tighter than previous methods (Section 2.3.1). An l1-norm instead of the classical l2-norm relaxation, which provides solutions that are more interpretable but still allows an efficient power iteration algorithm (Section 2.3.5). The design of appropriate similarity measures that can be chosen either to improve the invariance of matching, or to improve the expressivity of the model (Section 2.3.6). The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data. As shown by our experiments (Section 2.5), our implementation is, overall, as fast as these methods in spite of the higher complexity of the model, with better accuracy on standard databases. In chapter 3, we build a graph-matching method for object categorization. The main contributions of this chapter are: Generalizing [Caputo and Jie, 2009; Wallraven et al., 2003], we propose in Section 3.3 to use the optimum value of the graph-matching problem associated with two images as a (non-positive-definite) kernel, suitable for SVM classification. We propose in Section 3.4 a novel extension of Ishikawa's method [Ishikawa, 2003] for optimizing MRFs which is orders of magnitude faster than competing algorithms (e.g., [Kim and Grauman, 2010; Kolmogorov and Zabih, 2004; Leordeanu and Hebert, 2005a]) for the grids with a few hundred nodes considered in this article. In turn, this allows us to combine our kernel with SVMs in image classification tasks. We demonstrate in Section 3.5 through experiments with standard benchmarks (Caltech 101, Caltech 256, and Scenes datasets) that our method matches and in some cases exceeds the state of the art for methods using a single type of feature. In chapter 4, we introduce our work on object detection that performs fast image alignment. The main contributions of this chapter are: We propose a novel image similarity measure that allows for arbitrary deformations of the image pattern within some given disparity range and can be evaluated very efficiently [Lemire, 2006], with a cost equal to a small constant times that of correlation in a sliding-window mode.
Our similarity measure relies on a hierarchical notion of parts based on simple rectangular image primitives and HOG cells [Dalal and Triggs, 2005a], and does not require manual part specification [Felzenszwalb and Huttenlocher, 2005b; Bourdev and Malik, 2009; Felzenszwalb et al., 2010] or automated discovery [Lazebnik et al., 2005; Kushal et al., 2007].
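For orientation, the second-order special case that chapter 2 builds on, spectral matching by power iteration in the style of [Leordeanu and Hebert, 2005a], can be sketched in a few lines. The thesis's actual contributions (the third-order affinity tensor, the row-norm relaxation, and the l1 variant) are not shown here; this is only the baseline idea, with illustrative names.

```python
import numpy as np

# Pairwise spectral graph matching: the principal eigenvector of the assignment
# affinity matrix is found by power iteration, then greedily discretized into a
# one-to-one set of correspondences.

def spectral_matching(affinity, n_src, n_dst, iters=100):
    """affinity: (n_src*n_dst, n_src*n_dst) nonnegative matrix whose entry (ia, jb)
    scores how compatible assignment i->a is with assignment j->b."""
    x = np.ones(n_src * n_dst) / np.sqrt(n_src * n_dst)
    for _ in range(iters):                          # power iteration
        x = affinity @ x
        x /= np.linalg.norm(x) + 1e-12
    scores = x.reshape(n_src, n_dst)
    # greedy one-to-one discretization of the relaxed solution
    matches = []
    s = scores.copy()
    for _ in range(min(n_src, n_dst)):
        i, a = np.unravel_index(np.argmax(s), s.shape)
        if s[i, a] <= 0:
            break
        matches.append((int(i), int(a)))
        s[i, :] = -np.inf                           # each source point used once
        s[:, a] = -np.inf                           # each target point used once
    return matches
```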
26

Využití optických a laserových dat k modelování lesních porostů / Utilization of optical and laser data for modeling forest areas

Jebavá, Lucie January 2018 (has links)
The thesis deals with the possible use of optical data for modeling forest areas, compared with the use of airborne laser scanning data. First, the two datasets are compared and the causes of their differences are explained. Then canopy height models are created and object-oriented classification is applied to separate vegetation stands. A methodical procedure is suggested for the detection and delineation of individual trees in the forest, and their heights are then derived. Further possibilities for improving tree detection and delineation are also summarized. The results show that optical data with a resolution of about 25 cm are suitable for determining the characteristics of forest stands down to the individual tree level. The outputs of this research can be used for forest inventory. Key words: aerial imagery, image matching, laser scanning, point cloud, forest inventory
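As an illustration of the kind of processing the abstract refers to, the sketch below derives a canopy height model from surface and terrain models and finds tree tops as local maxima. It is a generic workflow under assumed parameters (window size, minimum tree height), not the procedure developed in the thesis.

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Generic canopy-height-model workflow: CHM = DSM - DTM, then tree tops as
# local maxima above a height threshold. Window size and threshold are assumptions.

def canopy_height_model(dsm, dtm):
    """dsm, dtm: 2D arrays of surface and terrain elevations on the same grid."""
    chm = dsm - dtm
    return np.clip(chm, 0.0, None)           # negative heights are treated as noise

def detect_tree_tops(chm, window=9, min_height=3.0):
    """Return (row, col) indices of local maxima taller than min_height metres."""
    local_max = maximum_filter(chm, size=window) == chm
    tops = local_max & (chm > min_height)     # ignore low vegetation and bare ground
    return np.argwhere(tops)
```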
27

Analysis of 3D color matches for the creation and consumption of video content / Appariement d'images par appariement de couleurs dans un espace 3D pour la création et la consommation de contenus vidéo

Sheikh Faridul, Hasan 06 January 2014 (has links)
L'objectif de cette thèse est de proposer une solution au problème de la constance des couleurs entre les images d'une même scène acquises selon un même point de vue ou selon différents points de vue. Ce problème constitue un défi majeur en vision par ordinateur car d'un point de vue à l'autre, on peut être confronté à des variations des conditions d'éclairage (spectre de l'éclairage, intensité de l'éclairage) et des conditions de prise de vue (point de vue, type de caméra, paramètres d'acquisition tels que focus, exposition, balance des blancs, etc.). Ces variations induisent alors des différences d'apparence couleur entre les images acquises qui touchent soit sur l'ensemble de la scène observée soit sur une partie de celle-ci. Dans cette thèse, nous proposons une solution à ce problème qui permet de modéliser puis de compenser, de corriger, ces variations de couleur à partir d'une méthode basée sur quatre étapes : (1) calcul des correspondances géométriques à partir de points d'intérêt (SIFT et MSER) ; (2) calculs des correspondances couleurs à partir d'une approche locale; (3) modélisation de ces correspondances par une méthode de type RANSAC; (4) compensation des différences de couleur par une méthode polynomiale à partir de chacun des canaux couleur, puis par une méthode d'approximation linéaire conjuguée à une méthode d'estimation de l'illuminant de type CAT afin de tenir compte des intercorrélations entre canaux couleur et des changements couleur dus à l'illuminant. Cette solution est comparée aux autres approches de l'état de l'art. Afin d'évaluer quantitativement et qualitativement la pertinence, la performance et la robustesse de cette solution, nous proposons deux jeux d'images spécialement conçus à cet effet. Les résultats de différentes expérimentations que nous avons menées prouvent que la solution que nous proposons est plus performante que toutes les autres solutions proposées jusqu'alors. / The objective of this thesis is to propose a solution to the problem of color consistency between images originating from the same scene, irrespective of acquisition conditions. Therefore, we present a new color mapping framework that is able to compensate color differences and achieve color consistency between views of the same scene. Our proposed framework works in two phases. In the first phase, we propose a new method that can robustly collect color correspondences from the neighborhood of sparse feature correspondences, despite the low accuracy of feature correspondences. In the second phase, from these color correspondences, we introduce a new, two-step, robust estimation of the color mapping model: first, nonlinear channel-wise estimation; second, linear cross-channel estimation. For experimental assessment, we propose two new image datasets: one with ground truth for quantitative assessment; another, without ground truth, for qualitative assessment. We have demonstrated a series of experiments in order to investigate the robustness of our proposed framework as well as its comparison with the state of the art. We have also provided a brief overview, sample results, and future perspectives of various applications of color mapping. In the experimental results, we have demonstrated that, unlike many methods of the state of the art, our proposed color mapping is robust to changes of: illumination spectrum, illumination intensity, imaging devices (sensor, optics), imaging device settings (exposure, white balance), and viewing conditions (viewing angle, viewing distance).
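The two-phase mapping described in the English abstract (nonlinear channel-wise estimation followed by linear cross-channel estimation) can be sketched as follows. This is a simplified illustration: it uses plain least squares on given color correspondences, whereas the thesis relies on robust, RANSAC-type estimation and a CAT-based illuminant step; the polynomial degree, function names, and array shapes are assumptions.

```python
import numpy as np

# Two-stage color mapping sketch: per-channel polynomial fit, then a linear
# cross-channel (3x4 affine) correction, both estimated by plain least squares.

def fit_channelwise_poly(src, dst, degree=2):
    """src, dst: (N, 3) arrays of corresponding RGB values in [0, 1]."""
    return [np.polyfit(src[:, c], dst[:, c], degree) for c in range(3)]

def apply_channelwise_poly(coeffs, rgb):
    return np.stack([np.polyval(coeffs[c], rgb[:, c]) for c in range(3)], axis=1)

def fit_cross_channel(src_mapped, dst):
    """Least-squares 3-channel mixing matrix with offset, shape (4, 3)."""
    X = np.hstack([src_mapped, np.ones((len(src_mapped), 1))])   # (N, 4)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A

def color_map(coeffs, A, rgb):
    """Apply both stages to an (N, 3) array of colors."""
    stage1 = apply_channelwise_poly(coeffs, rgb)
    return np.hstack([stage1, np.ones((len(stage1), 1))]) @ A
```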
28

Algoritmos de casamento de imagens com filtragem adaptativa de outliers / Image matching algorithms with adaptive filtering of outliers.

Jonathan da Silva Ramos 01 December 2016 (has links)
O registro de imagens tem um papel importante em várias aplicações, tais como reconstrução de objetos 3D, reconhecimento de padrões, imagens microscópicas, entre outras. Este registro é composto por três passos principais: (1) seleção de pontos de interesse; (2) extração de características dos pontos de interesse; (3) correspondência entre os pontos de interesse de uma imagem para a outra. Para os passos 1 e 2, algoritmos como SIFT e SURF têm apresentado resultados satisfatórios. Entretanto, para o passo 3 ocorre a presença de outliers, ou seja, pontos de interesse que foram incorretamente correspondidos. Uma única correspondência incorreta leva a um resultado final indesejável. Os algoritmos para remoção de outliers (consenso) possuem um alto custo computacional, que cresce à medida que a quantidade de outliers aumenta. Com o objetivo de reduzir o tempo de processamento necessário por esses algoritmos, o algoritmo FOMP (do inglês, Filtering out Outliers from Matched Points) foi proposto e desenvolvido neste trabalho para realizar a filtragem de outliers no conjunto de pontos inicialmente correspondidos. O método FOMP considera cada conjunto de pontos como um grafo completo, no qual os pesos são as distâncias entre os pontos. Por meio da soma de diferenças entre os pesos das arestas, o vértice que apresentar maior valor é removido. Para validar o método FOMP, foram realizados experimentos utilizando quatro bases de imagens. Cada base apresenta características intrínsecas: (a) diferenças de rotação ou zoom da câmera; (b) padrões repetitivos, os quais geram duplicidade nos vetores de características; (c) objetos deformados, tais como plásticos, papéis ou tecidos; (d) transformações afins (diferentes pontos de vista). Os experimentos realizados mostraram que o filtro FOMP remove mais de 65% dos outliers, enquanto mantém cerca de 98% dos inliers. A abordagem proposta mantém a precisão dos métodos de consenso, enquanto reduz o tempo de processamento pela metade para os métodos baseados em grafos. / Image matching plays a major role in many applications, such as pattern recognition and microscopic imaging. It encompasses three steps: 1) interest point selection; 2) feature extraction from each point; 3) feature point matching. For steps 1 and 2, traditional interest point detectors/extractors have worked well. However, for step 3 even a few incorrectly matched points (outliers) might lead to an undesirable result. State-of-the-art consensus algorithms present a high time cost as the number of outliers increases. Aiming at overcoming this problem, we present FOMP, a preprocessing approach that reduces the number of outliers in the initial set of matched points. FOMP filters out the vertices that present a higher difference among their edges in a complete graph representation of the points. To validate the proposed method, experiments were performed with four image databases: (a) variations of rotation or camera zoom; (b) repetitive patterns, which lead to duplicate feature vectors; (c) deformable objects, such as plastics, clothes or papers; (d) affine transformations (different viewpoints). The experimental results showed that FOMP removes more than 65% of the outliers, while keeping over 98% of the inliers. Moreover, the precision of traditional methods is kept, while reducing the processing time of graph-based approaches by half.
29

Jämförelse mellan 60 % och 80 % övertäckning vid matchning av flygbilder : För framställning av ytmodell / Comparison of 60 % and 80 % overlap in aerial image matching : For production of a surface model

Rudolfsson, Anton January 2017 (has links)
No description available.
30

Geo-localization Refinement of Optical Satellite Images by Embedding Synthetic Aperture Radar Data in Novel Deep Learning Frameworks

Merkle, Nina Marie 06 December 2018 (has links)
Every year, the number of applications relying on information extracted from high-resolution satellite imagery increases. In particular, the combined use of different data sources is rising steadily, for example to create high-resolution maps, to detect changes over time or to conduct image classification. In order to correctly fuse information from multiple data sources, the utilized images have to be precisely geometrically registered and have to exhibit a high absolute geo-localization accuracy. Due to the image acquisition process, optical satellite images commonly have an absolute geo-localization accuracy on the order of meters or tens of meters only. On the other hand, images captured by the high-resolution synthetic aperture radar satellite TerraSAR-X can achieve an absolute geo-localization accuracy within a few decimeters and therefore represent a reliable source for improving the absolute geo-localization accuracy of optical data. The main objective of this thesis is to address the challenge of image matching between high-resolution optical and synthetic aperture radar (SAR) satellite imagery in order to improve the absolute geo-localization accuracy of the optical images. The different imaging properties of optical and SAR data pose a substantial challenge for precise and accurate image matching, in particular for the handcrafted feature extraction stage common to traditional optical and SAR image matching methods. Therefore, a concept is required which is carefully tailored to the characteristics of optical and SAR imagery and is able to learn the identification and extraction of relevant features. Inspired by recent breakthroughs in the training of neural networks through deep learning techniques and the subsequent developments of automatic feature extraction and matching methods for single-sensor images, two novel optical and SAR image matching methods are developed. Both methods pursue the goal of generating accurate and precise tie points by matching optical and SAR image patches. The foundation of these frameworks is a semi-automatic matching area selection method creating an optimal initialization for the matching approaches by limiting the geometric differences of optical and SAR image pairs. The idea of the first approach is to eliminate the radiometric differences between the images through an image-to-image translation with the help of generative adversarial networks and to realize the subsequent image matching through traditional algorithms. The second approach is an end-to-end method in which a Siamese neural network learns to automatically create tie points between image pairs through targeted training. The geo-localization accuracy improvement of optical images is ultimately achieved by adjusting the corresponding optical sensor model parameters through the generated set of tie points. The quality of the proposed methods is verified using an independent set of optical and SAR image pairs spread over Europe. The focus is set on a quantitative and qualitative evaluation of the two tie point generation methods and their ability to generate reliable and accurate tie points. The results prove the potential of the developed concepts, but also reveal weaknesses such as the limited amount of training and test data, acquired by only one combination of optical and SAR sensor systems. Overall, the tie points generated by both deep learning-based concepts enable an absolute geo-localization improvement of optical images, outperforming state-of-the-art methods.
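As a rough illustration of the second, end-to-end approach, the sketch below defines a small Siamese network that scores whether an optical patch and a SAR patch correspond. The architecture, the unshared per-modality branches, the patch size, and the suggested loss are assumptions made for illustration and are not the network developed in the thesis.

```python
import torch
import torch.nn as nn

# Minimal Siamese patch-matching sketch: one encoder per modality, then a small
# head that outputs a correspondence logit for an optical/SAR patch pair.

class PatchEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 128) descriptor
        )

    def forward(self, x):
        return self.net(x)

class SiameseMatcher(nn.Module):
    """Two branches (weights not shared across modalities) and a decision head."""
    def __init__(self):
        super().__init__()
        self.opt_branch = PatchEncoder()             # optical patches
        self.sar_branch = PatchEncoder()             # SAR patches
        self.head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, opt_patch, sar_patch):
        f = torch.cat([self.opt_branch(opt_patch), self.sar_branch(sar_patch)], dim=1)
        return self.head(f)                          # logit: do the patches correspond?

# Training would use e.g. nn.BCEWithLogitsLoss() on positive/negative patch pairs;
# at test time, the best-scoring SAR offset for each optical patch yields a tie point.
model = SiameseMatcher()
logit = model(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```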
