  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Performance optimization in an image retrieval system to aid diagnosis of pneumonia in children

Silva, Keila Sousa 23 September 2013 (has links)
This study aims to optimize the runtime performance of a system developed to aid the computer-assisted diagnosis of childhood pneumonia. The system, called Pneumocad (Macedo, 2012), identifies chest radiographs consistent with the disease using computational texture pattern recognition: wavelet-transform decomposition, features extracted from the decompositions, and classification applied to the radiographs. To improve runtime performance when inserting new radiographs and retrieving similar ones, we propose deploying a clustering architecture over the radiographs already stored in the Pneumocad database. In parallel, the functionality responsible for defining how similar one radiograph is to another was moved from the Java source code into views in the database. The experiments were performed on databases containing 183, 2,568, and 10,200 radiographs, comparing the original Pneumocad of Macedo (2012), the optimized Pneumocad with views and without clustering, and the optimized Pneumocad with views and clustering. The experiments and results show that the proposed optimization contributed to the evolution of Pneumocad and enhanced this tool for supporting the diagnosis of childhood pneumonia.
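The clustering optimization described above can be sketched in a few lines. This is an illustrative toy, not Pneumocad's actual implementation: the feature vectors, cluster layout, and function names are assumptions (the real system keeps wavelet-derived features and similarity logic inside database views).

```python
import math

# Toy sketch of cluster-based retrieval: radiograph feature vectors
# (standing in for wavelet-derived texture features) are grouped into
# clusters, and a query is compared only against the members of its
# nearest cluster instead of scanning the whole database.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_centroid(vector, centroids):
    """Index of the centroid closest to `vector`."""
    return min(range(len(centroids)), key=lambda i: euclidean(vector, centroids[i]))

def retrieve_similar(query, centroids, clusters, k=2):
    """Rank only the cluster the query falls into -- the optimization."""
    members = clusters[nearest_centroid(query, centroids)]
    return sorted(members, key=lambda m: euclidean(query, m["features"]))[:k]

# Hypothetical database: two clusters of 2-D feature vectors.
centroids = [(0.1, 0.1), (0.9, 0.9)]
clusters = {
    0: [{"id": "xr-001", "features": (0.12, 0.08)},
        {"id": "xr-002", "features": (0.15, 0.11)}],
    1: [{"id": "xr-100", "features": (0.88, 0.93)}],
}

hits = retrieve_similar((0.1, 0.1), centroids, clusters)
print([h["id"] for h in hits])  # → ['xr-001', 'xr-002']; cluster 1 is never scanned
```

Moving the similarity ranking itself into database views, as the abstract describes, would replace `retrieve_similar` with a view over the stored feature columns; the cluster pre-filter is what shrinks the candidate set as the database grows.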

An image descriptor based on extreme partitioning for search in large and heterogeneous databases

Vidal, Márcio Luiz Assis 25 October 2013 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this thesis we propose a new image descriptor that addresses the problem of image search in large and heterogeneous databases. The approach uses the idea of extreme partitioning to obtain visual properties of images, which are then converted into a textual description. Once the textual description is generated, traditional text-based information retrieval techniques can be applied. The key point of the proposed work is scalability, since text-based search techniques can handle databases with millions of documents. We carried out experiments to confirm the viability of our proposal. The results show that our technique reaches higher precision levels than other content-based image retrieval techniques on a database with more than 100,000 images.
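A minimal sketch of the partition-to-text idea, assuming a recursive quadrant split and a coarse intensity quantizer; the token format, bin count, and depth are illustrative choices, not the descriptor actually proposed in the thesis.

```python
# Each partition of the image is summarized by a textual token that encodes
# its position (the quadrant path) and its quantized mean intensity; the
# resulting token list can be indexed by an ordinary text search engine.

def mean(block):
    flat = [px for row in block for px in row]
    return sum(flat) / len(flat)

def quadrants(block):
    h, w = len(block) // 2, len(block[0]) // 2
    return [
        [row[:w] for row in block[:h]], [row[w:] for row in block[:h]],
        [row[:w] for row in block[h:]], [row[w:] for row in block[h:]],
    ]

def to_terms(block, path="r", depth=2):
    """Turn a 2-D intensity grid into position-aware textual terms."""
    bucket = int(mean(block) // 64)          # quantize 0-255 into 4 bins
    terms = [f"{path}_b{bucket}"]
    if depth > 0 and len(block) > 1 and len(block[0]) > 1:
        for i, q in enumerate(quadrants(block)):
            terms += to_terms(q, f"{path}{i}", depth - 1)
    return terms

image = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [128, 128, 64, 64],
         [128, 128, 64, 64]]
print(to_terms(image, depth=1))  # → ['r_b1', 'r0_b0', 'r1_b3', 'r2_b2', 'r3_b1']
```

Because the output is plain text, an inverted index built for documents can answer image queries unchanged, which is where the scalability claim comes from.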

Near Sets: Theory and Applications

Henry, Christopher James 13 October 2010 (has links)
The focus of this research is on a tolerance space-based approach to image analysis and correspondence. The problem considered in this thesis is one of extracting perceptually relevant information from groups of objects based on their descriptions. Object descriptions are represented by feature vectors containing probe function values in a manner similar to feature extraction in pattern classification theory. The motivation behind this work is the synthesizing of human perception of nearness for improvement of image processing systems. In these systems, the desired output is similar to the output of a human performing the same task. Thus, it is important to have systems that accurately model human perception. Near set theory provides a framework for measuring the similarity of objects based on features that describe them in much the same way that humans perceive the similarity of objects. In this thesis, near set theory is presented and advanced, and work is presented toward a near set approach to performing content-based image retrieval. Furthermore, results are given based on these new techniques and future work is presented. The contributions of this thesis are: the introduction of a nearness measure to determine the degree that near sets resemble each other; a systematic approach to finding tolerance classes, together with proofs demonstrating that the proposed approach will find all tolerance classes on a set of objects; an approach to applying near set theory to images; the application of near set theory to the problem of content-based image retrieval; demonstration that near set theory is well suited to solving problems in which the outcome is similar to that of human perception; two other near set measures, one based on Hausdorff distance, the other based on Hamming distance.
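The nearness measure discussed above can be illustrated with a toy tolerance relation on one-dimensional probe-function values. The score below (the fraction of tolerant cross pairs) is a simplification for illustration, not the thesis's actual measure, and the probe values are invented.

```python
from itertools import product

# Two object descriptions are "near" when their probe-function values
# differ by at most eps (a tolerance relation). Sets of objects are then
# compared by how many cross pairs satisfy the relation.

def tolerant(x, y, eps=0.1):
    """Tolerance relation on 1-D probe-function values."""
    return abs(x - y) <= eps

def nearness(set_a, set_b, eps=0.1):
    """Fraction of cross pairs (a, b) that satisfy the tolerance relation."""
    pairs = list(product(set_a, set_b))
    return sum(tolerant(a, b, eps) for a, b in pairs) / len(pairs)

A = [0.10, 0.15, 0.20]     # probe values from one image's objects
B = [0.12, 0.18, 0.90]     # the outlier 0.90 lowers the nearness score
print(nearness(A, B))      # ≈ 0.667 (6 of 9 cross pairs are tolerant)
```

A degree-of-resemblance score like this is what lets near-set methods rank images by perceptual similarity rather than make a single yes/no decision.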

Fuzzy Tolerance Neighborhood Approach to Image Similarity in Content-based Image Retrieval

Meghdadi, Amir Hossein 22 June 2012 (has links)
The main contribution of this thesis is to define similarity measures between two images, with the main focus on content-based image retrieval (CBIR). Each image is considered a set of visual elements that can be described with a set of visual descriptions (features). The similarity between images is then defined as the nearness between sets of elements based on a tolerance relation and a fuzzy tolerance relation. A tolerance relation is used to describe the approximate nature of visual perception. A fuzzy tolerance relation is adopted to eliminate the need for a sharp threshold and hence to model gradual changes in the perception of similarity. Three real-valued similarity measures as well as a fuzzy-valued similarity measure are proposed. All of the methods are then used in two CBIR experiments and the results are compared with classical distance measures (namely Kantorovich, Hausdorff, and Mahalanobis) and with other published research. An important advantage of the proposed methods is shown to be their effectiveness in an unsupervised setting with no prior information. Eighteen different features (based on color, texture, and edge) are used in all the experiments. A feature selection algorithm is also used to train the system in choosing a suboptimal set of visual features.
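The contrast between a crisp and a fuzzy tolerance relation can be sketched in a few lines. The linear ramp below is an illustrative membership function chosen for simplicity, not necessarily the one used in the thesis.

```python
# A crisp tolerance relation makes a hard near/not-near decision at eps;
# a fuzzy tolerance relation lets membership decay gradually with distance,
# modeling the gradual change in perceived similarity described above.

def crisp_tolerance(d, eps):
    """Hard cutoff: fully near up to eps, not near beyond it."""
    return 1.0 if d <= eps else 0.0

def fuzzy_tolerance(d, eps):
    """Membership in [0, 1] that falls off linearly, reaching 0 at 2*eps."""
    return max(0.0, 1.0 - d / (2 * eps))

for d in (0.05, 0.15, 0.25):
    print(d, crisp_tolerance(d, 0.1), round(fuzzy_tolerance(d, 0.1), 2))
# crisp membership jumps from 1 to 0 at eps; fuzzy decays: 0.75, 0.25, 0.0
```

The practical payoff is that two feature distances just inside and just outside eps no longer produce opposite similarity judgments.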

A hybrid method for fine-grained content-based image retrieval

Pighetti, Romaric 28 November 2016 (has links)
Given the ever-growing amount of visual content available on the Internet, the need for systems able to search through this content has grown. Content-based image retrieval systems have been developed to address this need, but with the growing size of the databases, new challenges arise. In this thesis, the fine-grained classification problem is studied in particular: separating images that are visually similar but represent different concepts, and grouping images that are visually different but represent the same concept. It is first shown that existing techniques, and in particular support vector machines (SVMs), which are among the best image classification techniques, have difficulty solving this problem: they often explore the search space too little. Evolutionary algorithms are then considered for their balance between exploration and exploitation, and their ability to identify interesting regions of the search space in reasonable time, but their performance is not sufficient either. Finally, a hybrid system combining an evolutionary algorithm and a support vector machine is proposed. The system uses the evolutionary algorithm to iteratively feed the support vector machine with training samples. Experiments conducted on Caltech-256, a standard database containing around 30,000 images spread over 256 categories, show very encouraging results.
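The hybrid loop can be sketched structurally. Everything here is a stand-in: a tiny perceptron replaces the SVM, a mutate-and-select step replaces the full evolutionary algorithm, and the data and labeling oracle are invented for illustration. Only the loop shape (evolve candidates, keep the hardest, retrain) mirrors the approach described above.

```python
import random

# Structural sketch of the hybrid: each generation, the evolutionary step
# proposes candidate samples, the one the current classifier is least
# confident about (smallest margin) is labeled and added to the training
# set, and the classifier is retrained on the grown set.

random.seed(0)

def train_perceptron(samples, epochs=20):
    """Tiny linear classifier standing in for the SVM."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

def margin(point, w, b):
    """Unsigned distance proxy: small margin = uncertain sample."""
    return abs(w[0] * point[0] + w[1] * point[1] + b)

def mutate(point):
    """Evolutionary step stand-in: random perturbation of a known sample."""
    return tuple(v + random.uniform(-0.2, 0.2) for v in point)

def label(point):
    """Ground-truth oracle for this toy task (invented)."""
    return 1 if point[0] + point[1] > 1 else -1

training = [((0.2, 0.2), -1), ((0.9, 0.9), 1)]
for generation in range(5):
    w, b = train_perceptron(training)
    pool = [mutate(p) for p, _ in training for _ in range(4)]
    hardest = min(pool, key=lambda p: margin(p, w, b))
    training.append((hardest, label(hardest)))

w, b = train_perceptron(training)
print(len(training), "training samples after 5 generations")
```

The design point is that the evolutionary search concentrates labeling effort near the decision boundary, where fine-grained classes are hardest to separate.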

Taxonomy Based Image Retrieval: Taxonomy Based Image Retrieval Using Data from Multiple Sources

Larsson, Jimmy January 2016 (has links)
With a multitude of images available on the Internet, how do we find what we are looking for? This project tries to determine how much the precision and recall of search queries are improved by applying a word taxonomy to traditional Text-Based Image Search and Content-Based Image Search. By applying a word taxonomy to different data sources, a strong keyword filter and a keyword extender were implemented and tested. The results show that, depending on the implementation, either the precision or the recall can be increased. By using a similar approach in real-life implementations, it is possible to push images with higher precision to the front of the results while keeping a high recall value, thus increasing the perceived relevance of image search.
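The two taxonomy uses mentioned above (a strong keyword filter and a keyword extender) can be sketched with a toy taxonomy; the vocabulary and tree structure are invented for illustration.

```python
# A tiny word taxonomy: each term maps to its narrower child terms.
TAXONOMY = {
    "animal": ["dog", "cat"],
    "dog": ["poodle", "beagle"],
    "cat": [],
    "poodle": [], "beagle": [],
}

def descendants(term):
    """All narrower terms below `term`, depth-first."""
    out = []
    for child in TAXONOMY.get(term, []):
        out += [child] + descendants(child)
    return out

def extend_query(keywords):
    """Keyword extender: boosts recall by adding narrower terms."""
    extended = set(keywords)
    for kw in keywords:
        extended.update(descendants(kw))
    return sorted(extended)

def filter_tags(tags):
    """Keyword filter: boosts precision by dropping unknown tags."""
    return [t for t in tags if t in TAXONOMY]

print(extend_query(["dog"]))                 # → ['beagle', 'dog', 'poodle']
print(filter_tags(["dog", "blurry", "cat"]))  # → ['dog', 'cat']
```

This mirrors the precision/recall trade-off in the results: the extender widens what a query matches, while the filter discards noisy tags that would dilute precision.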

Development of methods for content-based image retrieval in representations of fuzzily bounded objects

Καρτσακάλης, Κωνσταντίνος 11 March 2014 (has links)
Image data acquired through the use of bio-medical scanners are by nature fuzzy, owing to a series of factors including limitations in spatial, temporal, and parametric resolution, as well as the physical limitations of the device. When the object of interest in such an image displays intensity patterns clearly distinct from those of the other objects present, a hard, binary segmentation that clearly defines the borders between objects is feasible. Frequently, however, factors such as inhomogeneity of the depicted materials, blurring, noise, or background variation introduced by the imaging device complicate this process: intensity values in such an image appear in a fuzzy, gradual, "non-binary" manner. An innovative trend in the field is to exploit the fuzzy composition of the objects in such an image, so that fuzziness becomes a characteristic feature of each object rather than an undesirable trait: drawing on fuzzy set theory, such approaches segment an image in a gradual, non-binary manner, avoiding a single sharp boundary between depicted objects. Such approaches capture the fuzziness of the blurry image in mathematical terms, turning it into a powerful analysis tool in the hands of an expert. On the other hand, the scale of fuzziness observed in such images often leads experts to different or even contradictory segmentations, even from the same human hand. This in turn results in databases that store multiple segmentations, binary and fuzzy, for each image.
Can we, given a segmentation of an image, retrieve other similar images whose segmentations were produced by experts, without at any step downgrading the fuzzy nature of the depicted objects? How is retrieval performed in a database that stores multiple such segmentations for each image? Is the frequency with which an expert would include or exclude a pixel from such a fuzzy object a criterion of similarity between images? Finally, how well can we treat fuzziness in a probabilistic manner, providing a valuable tool for bridging the gap between automatic segmentation algorithms and segmentations produced by field experts? In this thesis we address these questions by thoroughly studying the retrieval of such images. We consider the case in which a database holds more than one segmentation per image, both crisp ones derived from experts' analyses and fuzzy ones generated by automatic segmentation algorithms. We seek to unify the retrieval process for both cases by exploiting fuzziness, approximating both the frequency with which an expert would delineate the fuzzy object in a particular way and the intrinsic features of a fuzzy, algorithm-generated object. We propose a suitable retrieval mechanism that handles the transition from the domain of indecision and fuzziness to a probabilistic representation, while preserving all constraints imposed on the data by their initial analysis. Finally, we evaluate the retrieval process by applying the new method to an existing data set and draw conclusions about its effectiveness.
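One way to read "the frequency with which an expert would include a pixel" probabilistically is to average several binary expert masks per pixel. A minimal sketch with invented 3x3 masks; the thesis's actual retrieval mechanism is more elaborate than this.

```python
# Several binary expert segmentations of the same image are averaged per
# pixel, giving the fraction of experts that include each pixel in the
# fuzzy object -- a probabilistic membership map.

def frequency_map(segmentations):
    """Per-pixel fraction of segmentations that include the pixel."""
    n = len(segmentations)
    rows, cols = len(segmentations[0]), len(segmentations[0][0])
    return [[sum(s[r][c] for s in segmentations) / n for c in range(cols)]
            for r in range(rows)]

expert_masks = [                      # three contradictory expert opinions
    [[0, 1, 1], [0, 1, 1], [0, 0, 1]],
    [[0, 1, 1], [0, 1, 0], [0, 0, 0]],
    [[1, 1, 1], [0, 1, 1], [0, 0, 1]],
]
freq = frequency_map(expert_masks)
print(freq[0])  # → [0.333..., 1.0, 1.0]: the top-center and top-right pixels are unanimous
```

A map like this lets crisp expert masks and fuzzy algorithm outputs live in one representation, so a single retrieval mechanism can compare both.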

Efficient Image Retrieval with Statistical Color Descriptors

Viet Tran, Linh January 2003 (has links)
Color has been widely used in content-based image retrieval (CBIR) applications. In such applications the color properties of an image are usually characterized by the probability distribution of the colors in the image. A distance measure is then used to measure the (dis-)similarity between images based on the descriptions of their color distributions in order to quickly find relevant images. The development and investigation of statistical methods for robust representations of such distributions, the construction of distance measures between them and their applications in efficient retrieval, browsing, and structuring of very large image databases are the main contributions of the thesis. In particular we have addressed the following problems in CBIR. Firstly, different non-parametric density estimators are used to describe color information for CBIR applications. Kernel-based methods using nonorthogonal bases together with a Gram-Schmidt procedure and the application of the Fourier transform are introduced and compared to previously used histogram-based methods. Our experiments show that efficient use of kernel density estimators improves the retrieval performance of CBIR. The practical problem of how to choose an optimal smoothing parameter for such density estimators as well as the selection of the histogram bin-width for CBIR applications are also discussed. Distance measures between color distributions are then described in a differential geometry-based framework. This allows the incorporation of geometrical features of the underlying color space into the distance measure between the probability distributions. The general framework is illustrated with two examples: Normal distributions and linear representations of distributions. The linear representation of color distributions is then used to derive new compact descriptors for color-based image retrieval. 
These descriptors are based on the combination of two ideas: incorporating information from the structure of the color space together with information from images, and applying projection methods in the space of color distributions and the space of differences between neighboring color distributions. In our experiments we used several image databases containing more than 1,300,000 images. The experiments show that the method developed in this thesis is very fast and that the retrieval performance achieved compares favorably with existing methods. A CBIR system has been developed and is currently available at http://www.media.itn.liu.se/cse. We also describe color-invariant descriptors that can be used to retrieve images of objects independently of geometrical factors and the illumination conditions under which the images were taken. Both statistics- and physics-based methods are proposed and examined. We investigated the interaction between light and material using different physical models and applied the theory of transformation groups to derive geometric color invariants. Using the proposed framework, we are able to construct all independent invariants for a given physical model. The dichromatic reflection model and the Kubelka-Munk model are used as examples for the framework. The proposed color-invariant descriptors are then applied to CBIR, color image segmentation, and color correction applications. In the last chapter of the thesis we describe an industrial application where different color correction methods are used to optimize the layout of a newspaper page. / A search engine based on the methods described in this thesis can be found at http://pub.ep.liu.se/cse/db/?. Note that the question mark must be included in the address.
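The kernel-density idea from the abstract can be illustrated in one dimension: each sampled color value spreads a Gaussian kernel, yielding a smooth density that replaces a hard-binned histogram, and two images are compared by a distance between their densities. The bandwidth, grid, and sample values below are illustrative assumptions, not the thesis's tuned parameters.

```python
import math

# 1-D Gaussian kernel density estimate over a color axis (e.g. hue in
# [0, 1]), plus an L1 distance between two densities on a shared grid.

def kde(samples, grid, bandwidth=0.05):
    """Gaussian kernel density estimate evaluated on `grid`."""
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    return [sum(math.exp(-0.5 * ((g - s) / bandwidth) ** 2) for s in samples) / norm
            for g in grid]

def l1_distance(p, q, step):
    """Riemann-sum approximation of the L1 distance between densities."""
    return sum(abs(a - b) for a, b in zip(p, q)) * step

grid = [i / 100 for i in range(101)]            # color axis sampled in [0, 1]
reddish = [0.01, 0.02, 0.05, 0.04, 0.03]        # illustrative hue samples
bluish  = [0.60, 0.62, 0.65, 0.63, 0.61]

d_same = l1_distance(kde(reddish, grid), kde(reddish, grid), 0.01)
d_diff = l1_distance(kde(reddish, grid), kde(bluish, grid), 0.01)
print(d_same, d_diff)  # identical densities give 0; distinct colors give a large distance
```

Unlike a histogram, the smoothing parameter here plays the role of the bin width, which is exactly the selection problem the abstract says the thesis addresses.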
