11

Τμηματοποίηση εικόνων υφής με χρήση πολυφασματικής ανάλυσης και ελάττωσης διαστάσεων / Texture image segmentation using multispectral analysis and dimensionality reduction

Θεοδωρακόπουλος, Ηλίας 16 June 2010 (has links)
Texture segmentation is the process of partitioning an image into multiple segments (regions) according to their texture, with many applications in computer vision, image retrieval, robotics, satellite image analysis, etc. The objective of this thesis is to investigate the ability of non-linear dimensionality reduction algorithms, and especially of the Laplacian Eigenmaps (LE) algorithm, to produce an efficient representation of data derived from multi-spectral image analysis with Gabor filters, for solving the texture segmentation problem. For this purpose, we introduce a new supervised texture segmentation method, which exploits a low-dimensional representation of the feature vectors and well-known clustering algorithms, such as Fuzzy C-means and K-means, to produce the final segmentation. The effectiveness of the method is compared to that of similar methods proposed in the literature, which use the initial high-dimensional representation of the feature vectors. Experiments were performed on the Brodatz texture database. During the evaluation stage, the Rand index was used as a similarity measure between each produced segmentation and the corresponding ground-truth segmentation.
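A minimal sketch of the pipeline this abstract outlines (per-pixel Gabor filter-bank features, a non-linear low-dimensional embedding, then clustering), assuming scikit-image and scikit-learn are available; scikit-learn's SpectralEmbedding is its Laplacian Eigenmaps implementation, and the filter-bank parameters, subsample size and test image are illustrative choices rather than the thesis settings:

import numpy as np
from skimage import data
from skimage.filters import gabor
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

# Small Gabor filter bank (3 frequencies x 4 orientations); values are illustrative.
frequencies = [0.1, 0.2, 0.3]
thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

image = data.brick().astype(float)  # any grayscale texture image would do

# Per-pixel feature vector: magnitude of every Gabor response.
responses = []
for f in frequencies:
    for theta in thetas:
        real, imag = gabor(image, frequency=f, theta=theta)
        responses.append(np.hypot(real, imag))
features = np.stack(responses, axis=-1).reshape(-1, len(responses))

# Subsample pixels: Laplacian Eigenmaps builds a neighbourhood graph, which is
# costly for every pixel of a full-size image.
rng = np.random.default_rng(0)
idx = rng.choice(features.shape[0], size=2000, replace=False)

# Low-dimensional embedding (Laplacian Eigenmaps), then K-means on the embedding.
embedding = SpectralEmbedding(n_components=3, n_neighbors=10).fit_transform(features[idx])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))  # cluster sizes for the sampled pixels

For the evaluation step the abstract mentions, scikit-learn's rand_score (or adjusted_rand_score) applied to the predicted and ground-truth labels would give the Rand-index-style agreement measure.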
12

Ανάκτηση εικόνας βάσει υφής με χρήση Eye tracker / A texture based image retrieval technique using Eye tracker

Καραδήμας, Ηλίας 11 January 2011 (has links)
The rapid growth in the number of images, combined with the inability of Content-Based Image Retrieval (CBIR) systems to extract semantic features, has led to the introduction of the human factor into the experimental procedure. A very common and successful way of exploiting the human visual system is by recording eye movements. In the retrieval system proposed in this thesis, the fixation points that arise while viewing the database images are recorded. From these points, texture features are extracted with two methods, Gabor filters and the Discrete Cosine Transform (DCT), producing multidimensional vectors. These vectors are compared pairwise using the non-parametric WW test, creating a distance matrix. When a query image is introduced into the system, its texture features are compared to those of the database, adding an extra dimension to the distance matrix. The relation among all the images (query image included) is depicted in a three-dimensional map using multidimensional scaling (the MDS algorithm). The results obtained from the Gabor filters show higher robustness, making it feasible to extend the system to a larger image database.
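The final step of this pipeline, mapping a precomputed distance matrix to a 3-D layout, is compact enough to sketch; the random symmetric matrix below merely stands in for the WW-test distances the abstract describes, and scikit-learn is an assumed dependency:

import numpy as np
from sklearn.manifold import MDS

# Placeholder: a symmetric pairwise distance matrix between N images; in the thesis
# this would come from WW-test comparisons of the Gabor or DCT texture features.
rng = np.random.default_rng(0)
n_images = 20
d = rng.random((n_images, n_images))
distances = (d + d.T) / 2.0
np.fill_diagonal(distances, 0.0)

# Embed all images (the query included as one extra row/column) into 3-D coordinates.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(distances)
print(coords.shape)  # (20, 3): one point in the 3-D map per image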
13

Segmentation of Clouds in Satellite Images / Klassificering av Moln i Satellitbilder

Gasslander, Maja January 2016 (has links)
The usage of 3D modelling is increasing fast, both for civilian and military areas, such as navigation, targeting and urban planning. When creating a 3D model from satellite images, clouds can be problematic. Thus, automatic detection of clouds in the images is of great use. This master thesis was carried out at Vricon, who produces 3D models of the earth from satellite images. This thesis aimed to investigate if Support Vector Machines could classify pixels into cloud or non-cloud, with a combination of texture and color as features. To solve the stated goal, the task was divided into several subproblems, where the first part was to extract features from the images. Then the images were preprocessed before being fed to the classifier. After that, the classifier was trained, and finally evaluated. The two methods that gave the best results in this thesis had approximately 95 % correctly classified pixels. This result is better than that of the existing cloud segmentation method at Vricon, for the tested terrain and cloud types.
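A toy sketch of the classification stage, assuming scikit-learn; the synthetic per-pixel features and labels only stand in for the texture-and-colour features and cloud masks used in the thesis, and the SVM hyper-parameters are illustrative:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: per-pixel feature vectors (colour channels plus a few texture
# measures) and binary labels (1 = cloud, 0 = non-cloud); purely synthetic here.
rng = np.random.default_rng(0)
X = rng.random((5000, 6))                   # 6 illustrative features per pixel
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)   # synthetic labels, for demonstration only

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM as the per-pixel classifier; hyper-parameters are illustrative.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print("fraction of correctly classified pixels:", accuracy_score(y_test, clf.predict(X_test)))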
14

Uma abordagem multi-escala para a geração de mosaicos / A multi-scale approach for mosaic generation

João Roberto de Godoy Sampaio 25 April 2007 (has links)
A mosaic is a set of pictures of a given area, technically and artistically cut and "glued" together, giving the impression that the entire set resembles a single picture. For aerial photography, the use of mosaics solves the problem of imaging an area of interest whose dimension is much larger than that covered by the majority of the cameras available. This work focuses on the automatic creation of mosaics and aims to compute the real position of a set of images acquired at low altitude (lower scale) in relation to a base map (larger scale), by correlating images at different scales. Multi-scale analysis techniques, in particular Gabor filters, constitute an approach to this problem. The proposed methodology applies a bank of Gabor filters over a reference image in such a way that an automatic mosaic-generation process can be carried out for the remaining set of images. Experiments have shown the effectiveness of the proposed technique, especially for images with clear textural orientation, such as aerial photographs of eucalyptus plantations.
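A toy, single-scale sketch of the underlying idea (not the thesis method): describe image patches by a Gabor orientation-energy signature and place a new image where that signature best matches the reference; scikit-image is assumed, the stock test image and patch size are arbitrary, and a brute-force grid search stands in for the multi-scale correlation described above:

import numpy as np
from skimage import data
from skimage.filters import gabor

def gabor_signature(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4), frequency=0.2):
    """Orientation-energy signature: mean Gabor magnitude per orientation."""
    return np.array([np.hypot(*gabor(patch, frequency=frequency, theta=t)).mean()
                     for t in thetas])

reference = data.camera().astype(float)   # stands in for the base map
patch = reference[180:244, 300:364]       # stands in for one low-altitude image

# Brute-force search on a coarse grid: place the patch where the Gabor signatures
# are most similar (smallest Euclidean distance between signatures).
target = gabor_signature(patch)
best, best_pos = np.inf, None
for r in range(0, reference.shape[0] - 64, 32):
    for c in range(0, reference.shape[1] - 64, 32):
        dist = np.linalg.norm(gabor_signature(reference[r:r + 64, c:c + 64]) - target)
        if dist < best:
            best, best_pos = dist, (r, c)
print("best match near", best_pos)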
15

Tracking of railroads for autonomous guidance of UAVs : using Vanishing Point detection

Clerc, Anthony January 2018 (has links)
UAVs have gained in popularity and the number of applications has soared over the past years, ranging from leisure to commercial activities. This thesis specifically discusses railroad applications, a domain rarely explored. Two different aspects are analysed. While developing a new application or migrating a ground-based system to a UAV platform, the different challenges encountered are often unknown. Therefore, this thesis highlights the most important ones to take into consideration during the development process. From a more technical aspect, the implementation of autonomous guidance of UAVs over railroads using vanishing point extraction is studied. Two different algorithms are presented and compared: the first one uses a line extraction method, whereas the second uses joint activities of Gabor filters. The results demonstrate that the applied methodologies provide good results and that a significant difference exists between the two algorithms in terms of computation time. A second implementation, tackling the detection of railway topologies to enable use on multiple railroad configurations, is discussed. A first technique is presented using exclusively vanishing points for the detection; however, the results for complex images are not satisfactory. Therefore, a second method is studied, using line characteristics on top of the previous algorithm. This second implementation has proven to give good results.
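A hedged sketch of the line-extraction variant of vanishing-point detection (the Gabor joint-activity method is not reproduced here), assuming OpenCV, illustrative Canny/Hough parameters and a hypothetical input file name; the vanishing point is taken as the least-squares intersection of the detected line segments:

import numpy as np
import cv2

def vanishing_point(gray):
    """Estimate a vanishing point as the least-squares intersection of detected lines."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None
    # A segment (x1, y1)-(x2, y2) lies on the line n . p = c with unit normal n.
    A, b = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        n = np.array([y2 - y1, x1 - x2], dtype=float)
        n /= np.linalg.norm(n)
        A.append(n)
        b.append(n @ np.array([x1, y1], dtype=float))
    # The point minimizing the squared distances to all detected lines.
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p  # (x, y) in image coordinates

gray = cv2.imread("railway.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
if gray is not None:
    print(vanishing_point(gray))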
16

Efficient FPGA Architectures for Separable Filters and Logarithmic Multipliers and Automation of Fish Feature Extraction Using Gabor Filters

Joginipelly, Arjun Kumar 13 August 2014 (has links)
Convolution and multiplication operations in the filtering process can be optimized by minimizing the resource utilization using Field Programmable Gate Arrays (FPGAs) and separable filter kernels. An FPGA architecture for separable convolution is proposed to achieve a reduction of on-chip resource utilization and external memory bandwidth for a given processing rate of the convolution unit. Multiplication in the integer number system can be optimized in terms of resources, operation time and power consumption by converting to the logarithmic domain. To achieve this, a method altering the filter weights is proposed and implemented for error reduction. The results obtained show significant error reduction when compared to existing methods, thereby optimizing the multiplication in terms of the above-mentioned metrics. Underwater video and still images are used by many programs within National Oceanic and Atmospheric Administration (NOAA) Fisheries with the objective of identifying, classifying and quantifying living marine resources. They use underwater cameras to obtain video recordings for manual analysis. This process of manual analysis is labour intensive, time consuming and error prone. An efficient solution to this problem is proposed which uses Gabor filters for feature extraction. The proposed method is implemented to identify two species of fish, namely Epinephelus morio and Ocyurus chrysurus. The results show a high rate of detection with a minimal rate of false alarms.
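A short illustration of why separable kernels help: a k x k kernel that factors into an outer product can be applied as a row pass followed by a column pass, cutting the multiplications per pixel from k*k to 2*k, which is the saving the FPGA architecture exploits; NumPy/SciPy are used here purely to check the equivalence, not to model the hardware:

import numpy as np
from scipy.signal import convolve2d

# A separable kernel is an outer product of two 1-D kernels.
col = np.array([[1.0], [2.0], [1.0]])      # vertical 1-D kernel (3x1)
row = np.array([[1.0, 0.0, -1.0]])         # horizontal 1-D kernel (1x3)
kernel_2d = col @ row                      # the equivalent 3x3 kernel

rng = np.random.default_rng(0)
image = rng.random((64, 64))

direct = convolve2d(image, kernel_2d, mode="same")
separable = convolve2d(convolve2d(image, row, mode="same"), col, mode="same")

print(np.allclose(direct, separable))      # True: identical output, fewer multiplications

The logarithmic-multiplier part of the thesis rests on a similar trade-off: representing operands by approximate binary logarithms turns multiplication into addition, and the proposed alteration of the filter weights compensates for the approximation error.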
17

"Recuperação de imagens por conteúdo através de análise multiresolução por Wavelets" / "Content based image retrieval through multiresolution wavelet analysis

Castañon, Cesar Armando Beltran 28 February 2003 (has links)
Content-based image retrieval (CBIR) refers to the ability to retrieve images on the basis of image content. Given a query image, the goal of a CBIR system is to search the database and return the "n" images most similar (closest) to the query image according to a given criterion. Our research addresses the generation of feature vectors of a CBIR system for medical image databases. A feature vector is a succinct numeric representation of an image, or part of it, describing its most representative aspects; it is an "n"-dimensional vector organizing such values. This new image representation can be stored in a database and allows fast image retrieval. An alternative for image characterization in a CBIR system is the domain transform. The principal advantage of a transform is its effective characterization of local image properties. In the past few years, research in applied mathematics and signal processing has developed practical "wavelet" methods for the multiscale representation and analysis of signals. These new tools differ from the traditional Fourier techniques in the way they localize information in the time-frequency plane; in particular, they are capable of trading one type of resolution for the other, which makes them especially suitable for the analysis of non-stationary signals. The "wavelet" transform is a set of basis functions that represents signals in different frequency bands, each one with a resolution matching its scale. Wavelets have been successfully applied to image compression, enhancement, analysis, classification, characterization and retrieval. One privileged area of application where these properties have been found to be relevant is medical imaging. In this work we describe an approach to CBIR for medical image databases focused on feature extraction based on multiresolution "wavelet" decomposition, using the Daubechies and Gabor filters. Fundamental to our approach is how the images are characterized, so that the retrieval procedure can bring similar images within the domain of interest, using a metric indexing structure, the "Slim-tree". Thus, the semantic capability of the cbPACS (Content-Based Picture Archiving and Communication System) is increased; the system is currently under joint development between the Databases and Images Group of ICMC-USP and the Science Center for Images and Medical Physics of the Clinics Hospital of Ribeirão Preto-USP.
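A small sketch, under stated assumptions, of a wavelet-based feature vector: PyWavelets supplies the Daubechies decomposition, the per-subband energy descriptor is one common choice rather than necessarily the exact features of this work, the Gabor variant and Slim-tree indexing are omitted, and the stock test image merely stands in for a medical image:

import numpy as np
import pywt
from skimage import data

def wavelet_feature_vector(image, wavelet="db4", levels=3):
    """Energy of each subband of a multilevel 2-D Daubechies wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    feats = [np.mean(np.square(coeffs[0]))]           # approximation subband
    for detail in coeffs[1:]:                         # (horizontal, vertical, diagonal)
        feats.extend(np.mean(np.square(d)) for d in detail)
    return np.array(feats)

image = data.camera().astype(float)   # stands in for a medical image
fv = wavelet_feature_vector(image)
print(fv.shape)   # 1 + 3*levels energies: a compact signature for similarity search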
18

[en] SCENE RECONSTRUCTION USING SHAPE FROM TEXTURE / [pt] RECONSTRUÇÃO DO ESPAÇO TRIDIMENSIONAL A PARTIR DA DEFORMAÇÃO DE TEXTURA DE IMAGENS

DIOGO MENEZES DUARTE 11 September 2006 (has links)
[en] The current work presents a study of methods for 3D object shape reconstruction based solely on texture information. These methods, called Shape from Texture, measure texture deformation along the object surface, obtaining the orientation of the surface normal at each point of the image. Having the orientation at each point (a needle map), it is possible to construct the object's 3D model. Three methods are studied in this dissertation: one uses Gabor filters and second-order moments as a texture measure, and the other two estimate the affine transform between equally sized image patches. The affine estimation problem receives special emphasis in the present work since it is an essential step in most Shape from Texture algorithms. The methods were validated in separate steps, evaluating: the affine transform estimation; the decomposition of the affine matrix into slant and tilt angles; and the 3D model reconstruction using the needle map. Both synthetic and real images were used in the experiments. The results clearly show the applicability, difficulties and restrictions of the investigated methods.
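Since affine estimation is singled out as the key step, here is a minimal algebraic sketch of least-squares affine fitting; it assumes point correspondences are already available, whereas the thesis estimates the transform directly between texture patches, so this is a simplification for illustration only:

import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points onto dst points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # rows of [x, y, 1]
    # Solve A @ X ~= dst for X (3x2); the 2x3 affine matrix is its transpose.
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T

# Synthetic check: recover a known affine map from noisy point correspondences.
rng = np.random.default_rng(0)
true_M = np.array([[1.1, 0.2, 5.0],
                   [-0.1, 0.9, -3.0]])
src = rng.random((50, 2)) * 100
dst = src @ true_M[:, :2].T + true_M[:, 2] + rng.normal(0, 0.1, (50, 2))
print(np.round(estimate_affine(src, dst), 2))   # close to true_M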
19

Recognition Of Human Face Expressions

Ener, Emrah 01 September 2006 (has links) (PDF)
In this study a fully automatic and scale-invariant feature extractor, which does not require manual initialization or special equipment, is proposed. Face location and size are extracted using skin segmentation and ellipse fitting. The extracted face region is scaled to a predefined size; upper and lower facial templates are then used for feature extraction. Template localization and template parameter calculations are carried out using Principal Component Analysis. Changes in facial feature coordinates between the analyzed image and a neutral-expression image are used for expression classification. The performances of different classifiers are evaluated. The performance of the proposed feature extractor is also tested on sample video sequences. Facial features are extracted in the first frame and a KLT tracker is used for tracking the extracted features. Lost features are detected using face geometry rules and relocated using the feature extractor. As an alternative to the feature-based technique, an available holistic method which analyses the face without partitioning is implemented. Face images are filtered using Gabor filters tuned to different scales and orientations. The filtered images are combined to form Gabor jets. The dimensionality of the Gabor jets is decreased using Principal Component Analysis. The performances of different classifiers on low-dimensional Gabor jets are compared. Feature-based and holistic classifier performances are compared using the JAFFE and AF facial expression databases.
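A rough sketch of the holistic branch described above: Gabor responses over several scales and orientations are concatenated into a high-dimensional jet and then reduced with PCA; scikit-image and scikit-learn are assumed, and the random crops of a stock image are placeholders for face images, not face data:

import numpy as np
from skimage import data
from skimage.filters import gabor
from sklearn.decomposition import PCA

def gabor_jet(image, frequencies=(0.1, 0.2, 0.3),
              thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenate Gabor magnitude responses over scales and orientations."""
    parts = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            parts.append(np.hypot(real, imag).ravel())
    return np.concatenate(parts)

# Placeholder "faces": random crops of one stock image stand in for a face dataset.
rng = np.random.default_rng(0)
base = data.camera().astype(float)
faces = [base[r:r + 64, c:c + 64] for r, c in rng.integers(0, 400, size=(20, 2))]

jets = np.array([gabor_jet(f) for f in faces])     # high-dimensional Gabor jets
low = PCA(n_components=10).fit_transform(jets)     # reduced input for a classifier
print(jets.shape, "->", low.shape)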
20

Human Activity Recognition By Gait Analysis

Kepenekci, Burcu 01 February 2011 (has links) (PDF)
This thesis analyzes the human action recognition problem. Human actions are modeled as a time-evolving temporal texture. Gabor filters, which have proved to be a robust 2D texture representation tool by detecting spatial points with high variation, are extended to the 3D domain to capture motion texture features. A well-known filtering algorithm and a recent unsupervised clustering algorithm, Genetic Chromodynamics, are combined to select salient spatio-temporal features of the temporal texture and to segment the activity sequence into temporal texture primitives. Each activity sequence is represented as a composition of temporal texture primitives with its salient spatio-temporal features, which are also the symbols of our codebook. To overcome temporal variation between different performances of the same action, a Profile Hidden Markov Model is applied with Viterbi Path Counting (ensemble training). Not only the parameters and structure but also the codebook is learned during training.
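A toy sketch of extending Gabor filtering to a spatio-temporal volume, as the abstract describes; the 3-D kernel construction and all parameter values are illustrative assumptions rather than the thesis formulation, the video volume is random placeholder data, and the codebook and Profile-HMM stages are omitted:

import numpy as np
from scipy.ndimage import convolve

def gabor_3d_kernel(size=9, spatial_freq=0.25, temporal_freq=0.25, theta=0.0, sigma=2.0):
    """3-D Gabor kernel: a Gaussian envelope times a cosine carrier over (t, y, x)."""
    ax = np.arange(size) - size // 2
    t, y, x = np.meshgrid(ax, ax, ax, indexing="ij")
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2 + t ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * (spatial_freq * xr + temporal_freq * t))
    return envelope * carrier

# Placeholder video volume (frames x height x width); a real input would be an action clip.
rng = np.random.default_rng(0)
video = rng.random((30, 64, 64))

response = convolve(video, gabor_3d_kernel(), mode="nearest")
# Large |response| marks strong spatio-temporal variation: candidate salient points
# from which temporal texture primitives would be built.
print(response.shape, float(np.abs(response).max()))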
