11

Fusing integrated visual vocabularies-based bag of visual words and weighted colour moments on spatial pyramid layout for natural scene image classification

Alqasrawi, Yousef T. N., Neagu, Daniel, Cowling, Peter I. January 2013
The bag of visual words (BOW) model is an efficient image representation technique for image categorization and annotation tasks. Building good visual vocabularies from automatically extracted image feature vectors produces discriminative visual words, which can improve the accuracy of image categorization. Most approaches that use the BOW model for categorizing images ignore useful information that can be obtained from image classes when building visual vocabularies. Moreover, most BOW models use intensity features extracted from local regions and disregard colour information, an important characteristic of any natural scene image. In this paper, we show that integrating visual vocabularies generated from each image category improves the BOW image representation and the accuracy of natural scene image classification. We use a keypoint density-based weighting method to combine the BOW representation with image colour information on a spatial pyramid layout. In addition, we show that visual vocabularies generated from the training images of one scene dataset can plausibly represent another scene dataset in the same domain, which helps reduce the time and effort needed to build new visual vocabularies. The proposed approach is evaluated over three well-known scene classification datasets with 6, 8 and 15 scene categories, respectively, using 10-fold cross-validation. The experimental results, using support vector machines with a histogram intersection kernel, show that the proposed approach outperforms baseline methods such as Gist features, rgbSIFT features and different configurations of the BOW model.
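The pipeline this abstract describes — per-category vocabularies merged into one integrated vocabulary, BOW histograms over that vocabulary, and a histogram intersection kernel for the SVM — can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the function names and the brute-force nearest-word assignment are illustrative, not the authors' exact implementation, and the per-class vocabularies are assumed to come from an external clustering step (e.g. k-means on local descriptors).

```python
import numpy as np

def integrated_vocabulary(per_class_vocabs):
    # Stack the visual words learned separately for each image category
    # into one integrated vocabulary (the paper's central idea, sketched
    # here as simple concatenation of the per-class codebooks).
    return np.vstack(per_class_vocabs)

def bow_histogram(descriptors, vocabulary):
    # Assign each local descriptor to its nearest visual word (Euclidean
    # distance) and build an L1-normalized bag-of-visual-words histogram.
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def histogram_intersection(h1, h2):
    # The kernel used with the SVM in the abstract: sum of bin-wise minima.
    return np.minimum(h1, h2).sum()
```

For two L1-normalized histograms the kernel value lies in [0, 1], reaching 1 only when the histograms are identical, which is what makes it a natural similarity for BOW representations.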
12

Contributions to generic and affective visual concept recognition / Contribution à la reconnaissance de concepts visuels génériques et émotionnels

Liu, Ningning. 22 November 2013
This Ph.D. thesis is dedicated to visual concept recognition (VCR). Owing to many practical difficulties, VCR is still considered one of the most challenging problems in computer vision and pattern recognition. In this context, we propose several contributions, particularly in building multimodal approaches that efficiently combine visual and textual information. First, we investigated the efficiency of different types of low-level visual features for VCR, including color, texture and shape. Specifically, we believe that each concept requires different features to characterize it effectively for recognition. We therefore evaluated various visual representations: not only global features such as color, shape and texture, but also state-of-the-art local descriptors such as SIFT, Color SIFT, HOG, DAISY, LBP and Color LBP. To help bridge the semantic gap between low-level visual features and high-level semantic concepts, particularly those related to emotions and feelings, we propose mid-level visual features based on the visual harmony and dynamism expressed in images, using Itten's color theory and psychological interpretations. Moreover, we employ a spatial pyramid strategy to capture local and spatial information when building the harmony and dynamism features, and we propose a new representation of HSV color histograms that uses a visual attention model to identify regions of interest in images. Second, we propose a novel textual feature designed for VCR. Most photos shared online (e.g., on Flickr or Facebook) come with textual descriptions in the form of tags or captions. These descriptions are a rich source of semantic information about the visual data, and it is therefore worth considering them in a VCR or multimedia information retrieval system. We propose Histograms of Textual Concepts (HTC) to capture the semantic relatedness of concepts. The general idea behind HTC is to represent a text document as a histogram of textual concepts over a vocabulary (or dictionary), where each bin accumulates the contribution of every word in the document toward the underlying concept, according to a predefined semantic similarity measure. Several variants of HTC have been proposed and have proved very effective for VCR. Inspired by cepstral speech analysis, we also developed Cepstral HTC to capture both term-frequency information (as in TF-IDF) and the semantic relatedness of concepts in sparse image tags, overcoming HTC's shortcoming of ignoring term frequency. Third, we propose Selective Weighted Later Fusion (SWLF), a fusion scheme that efficiently combines different sources of information for VCR by selecting the best features and weighting their scores for each concept to be recognized. SWLF proves particularly efficient for fusing visual and textual modalities in comparison with standard fusion schemes. While late fusion at the score level is reputed to be a simple and effective way to fuse features of different natures in machine-learning problems, SWLF builds on two simple insights: first, the score delivered by a feature type should be weighted by its intrinsic quality for the classification problem at hand; second, in a multi-label scenario where several visual concepts may be assigned to an image, different concepts may be best recognized by different features. In addition to SWLF, we also propose a novel combination approach based on Dempster-Shafer evidence theory, whose interesting properties allow fusing ambiguous sources of information for visual affective recognition. [...]
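The HTC construction just described — one histogram bin per dictionary concept, each accumulating every document word's contribution under a semantic similarity measure — can be sketched in a few lines. This is a toy sketch: `toy_sim` is a hypothetical character-overlap stand-in for the thesis's predefined semantic similarity measure (e.g. a WordNet-based distance), and the L2 normalization is an illustrative choice.

```python
import numpy as np

def htc(doc_words, dictionary, similarity):
    # Histograms of Textual Concepts: one bin per concept in the dictionary;
    # each bin accumulates the contribution of every word in the document
    # toward that concept, as given by the semantic similarity measure.
    hist = np.array([sum(similarity(w, c) for w in doc_words)
                     for c in dictionary], dtype=float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def toy_sim(word, concept):
    # Hypothetical Jaccard similarity on character sets, standing in for a
    # real semantic measure (the thesis assumes a predefined one).
    a, b = set(word), set(concept)
    return len(a & b) / max(len(a | b), 1)
```

Note that, unlike a TF-IDF vector, a word absent from the dictionary still contributes mass to every related concept — this is exactly the semantic smoothing that makes HTC suited to sparse image tags.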
13

Classificadores e aprendizado em processamento de imagens e visão computacional / Classifiers and machine learning techniques for image processing and computer vision

Rocha, Anderson de Rezende. 03 March 2009
Advisor: Siome Klein Goldenstein / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação / Abstract: In this work, we propose the use of classifiers and machine learning techniques to extract useful information from data sets (e.g., images) in order to solve important problems in Image Processing and Computer Vision. We are particularly interested in: two-class and multi-class image categorization, hidden-message detection, discrimination between natural and forged images, authentication, and multi-classification. To start with, we present a comparative and critical survey of the state of the art in digital image forensics and hidden-message detection. Our objective is to show the potential of the existing techniques and, more importantly, to point out their limitations. In this study, we show that most problems in this area come down to two common Machine Learning questions: which features to select and which classification techniques to use. We also discuss the legal and ethical aspects of image forensic analysis, such as the use of digital photographs by criminals. We then introduce a technique for image forensic analysis, tested in the context of hidden-message detection and of general image classification into categories such as indoors, outdoors, computer-generated, and works of art. From this multi-class classification problem, some important questions arise: how can a multi-class problem be solved so as to combine, for instance, image classification features based on color, texture, shape and silhouette, without worrying too much about the pre-processing and normalization of the combined feature vector? How can several different classifiers be exploited, each one specialized and best configured for a specific set of features or of classes in confusion? To address these questions, we present a feature and classifier fusion technique for the multi-class scenario based on combinations of binary classifiers. We validate our approach in a real application for the automatic classification of fruits and vegetables. Finally, we face one more interesting problem: how can powerful binary classifiers be used in the multi-class context more efficiently and effectively? We thus introduce a technique for combining binary classifiers (called base classifiers) to solve problems in the general multi-class setting. / Doctorate / Computer Engineering / Doctor in Computer Science
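As an illustration of that last point — building a multi-class decision out of binary base classifiers — here is a minimal one-vs-one voting sketch. The names and the plain-vote rule are illustrative assumptions; the thesis develops a more elaborate combination of base classifiers than this baseline.

```python
from itertools import combinations

def ovo_predict(binary_predict, classes, x):
    # One-vs-one combination: consult one binary base classifier per pair
    # of classes; each returns the class it prefers for input x, and the
    # class collecting the most pairwise votes wins.
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[binary_predict(a, b, x)] += 1
    return max(votes, key=votes.get)
```

For k classes this consults k(k-1)/2 base classifiers; ties between classes can be broken with the base classifiers' scores rather than raw votes.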
