  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A scalable search engine for the Personal Cloud / Un moteur de recherche scalable pour le Personal Cloud

Lallali, Saliha 28 January 2016 (has links)
We present a new embedded search engine designed for smart objects. Such devices are generally equipped with extremely little RAM and a large Flash (NAND) storage capacity. To tackle these conflicting hardware constraints, conventional search engines privilege either insertion scalability or query scalability, but cannot meet both requirements at the same time. Moreover, very few solutions support document deletions and updates in this context. We introduce three design principles, namely Write-Once Partitioning, Linear Pipelining and Background Linear Merging, and show how they can be combined to produce an embedded search engine that reconciles a high insert/delete/update rate with query scalability. We implemented our search engine on a development board with a hardware configuration representative of smart objects and conducted extensive experiments using two representative datasets. The experimental results demonstrate the scalability of the approach and its superiority over state-of-the-art methods.
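The three design principles can be illustrated with a toy sketch: documents accumulate in a small RAM buffer, flushed buffers become immutable partitions, queries scan partitions in order, and partitions are periodically merged. This is a minimal illustration under assumed names (`PartitionedIndex`, `flush_threshold`), not the thesis implementation:

```python
from collections import defaultdict

class PartitionedIndex:
    """Toy write-once partitioned inverted index.

    New documents go to an in-RAM buffer; when it fills, the buffer is
    flushed as an immutable (write-once) partition. Queries scan the buffer
    plus every partition sequentially (linear pipelining), and partitions
    are periodically folded together (background linear merging).
    """

    def __init__(self, flush_threshold=2):
        self.flush_threshold = flush_threshold
        self.buffer = defaultdict(set)   # term -> doc ids, held in RAM
        self.buffered_docs = 0
        self.partitions = []             # immutable term -> doc-ids maps

    def add(self, doc_id, terms):
        for t in terms:
            self.buffer[t].add(doc_id)
        self.buffered_docs += 1
        if self.buffered_docs >= self.flush_threshold:
            self._flush()

    def _flush(self):
        # Write-once: a flushed partition is never modified afterwards.
        self.partitions.append(dict(self.buffer))
        self.buffer = defaultdict(set)
        self.buffered_docs = 0

    def merge_partitions(self):
        # Background linear merging: fold all partitions into one.
        merged = defaultdict(set)
        for part in self.partitions:
            for t, docs in part.items():
                merged[t] |= docs
        self.partitions = [dict(merged)]

    def search(self, term):
        # Linear pipelining: scan buffer, then every partition in order.
        result = set(self.buffer.get(term, ()))
        for part in self.partitions:
            result |= part.get(term, set())
        return result
```

A real implementation would, as the abstract notes, also have to handle deletions and updates across the immutable partitions; the sketch only shows the insert/query path.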
12

Indexing and Search Algorithms for Web shops / Indexering och sökalgoritmer för webshoppar

Reimers, Axel, Gustafsson, Isak January 2016 (has links)
Web shops today need to be more and more responsive, and one part of this responsiveness is fast product search. One way to speed up searches is to search against an index instead of directly against a database. Network Expertise Sweden AB (Net Exp) wants to explore different methods of implementing an index in their future web shop, built on the open-source web shop platform SmartStore.NET. Since SmartStore.NET performs all of its searches directly against its database, it will not scale well and will put more wear on the database. The aim was therefore to find solutions that offload the database by using an index instead. A prototype that retrieved products from a database and made them searchable through an index was developed, evaluated and implemented. The prototype indexed the data with an inverted-index algorithm, and the data was made searchable with a search algorithm that mixed boolean queries with normal queries.
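The prototype's two ingredients, an inverted index over product text and a search that mixes boolean AND queries with plain term queries, can be sketched as follows. The product data and the `AND` query syntax are invented for illustration; the actual prototype's design is not described at this level of detail:

```python
from collections import defaultdict

def build_inverted_index(products):
    """Map each lowercased token to the set of product ids containing it."""
    index = defaultdict(set)
    for pid, name in products.items():
        for token in name.lower().split():
            index[token].add(pid)
    return index

def search(index, query):
    """Mix boolean and normal queries: terms joined by 'AND' must all
    match (set intersection); bare terms are unioned (any match)."""
    if " and " in query.lower():
        terms = [t.strip() for t in query.lower().split(" and ")]
        result = set(index.get(terms[0], set()))
        for t in terms[1:]:
            result &= index.get(t, set())
        return result
    return set().union(*(index.get(t, set()) for t in query.lower().split()))
```

Searching the index is then a matter of dictionary lookups and set operations, which is what lets it offload the database for read traffic.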
13

Anotação e classificação automática de entidades nomeadas em notícias esportivas em Português Brasileiro / Automatic named entity recognition and classification for brazilian portuguese sport news

Zaccara, Rodrigo Constantin Ctenas 11 July 2012 (has links)
The main goal of this research is to develop a platform for automatic named entity annotation and classification for news written in Brazilian Portuguese. To narrow the scope of training and analysis, sport news about the 2011 Campeonato Paulista from the UOL (Universo Online) portal was used. The first artefact developed for this platform was the WebCorpus tool, whose main purpose is to ease the process of adding meta-information to words through a rich web interface designed to make the work fast and simple. Using it, the named entities in the news are annotated and classified manually. The database was fed by the content acquisition and extraction (crawler) tool, also developed for this platform. The second artefact developed was the UOLCP2011 (UOL Campeonato Paulista 2011) corpus. This corpus was manually annotated and classified with the WebCorpus tool using seven entity types: person, place, organization, team, championship, stadium and fans. To develop the automatic named entity annotation and classification engine, three different approaches were used: maximum entropy, inverted indices, and techniques merging the two. Each approach involved three steps: algorithm development, training using machine learning techniques, and analysis of the best results.
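Of the three approaches, the inverted-index side can be illustrated with a small gazetteer-style sketch: known entity surface forms are indexed by type, and n-grams of the text are looked up against that index. The entity lists below are invented examples, and the maximum-entropy component of the thesis is not shown:

```python
from collections import defaultdict

# Invented gazetteer for illustration; the thesis uses seven entity types.
GAZETTEER = {
    "team": ["corinthians", "santos", "palmeiras"],
    "stadium": ["pacaembu", "morumbi"],
    "championship": ["campeonato paulista"],
}

def build_entity_index(gazetteer):
    """Inverted index from lowercased surface form to its entity types."""
    index = defaultdict(set)
    for etype, names in gazetteer.items():
        for name in names:
            index[name].add(etype)
    return index

def classify(index, text, max_ngram=2):
    """Return (span, type) pairs for every indexed n-gram in the text."""
    tokens = text.lower().split()
    found = []
    for n in range(max_ngram, 0, -1):        # longest spans first
        for i in range(len(tokens) - n + 1):
            span = " ".join(tokens[i:i + n])
            for etype in index.get(span, ()):
                found.append((span, etype))
    return found
```

A lookup-based classifier like this cannot handle unseen entities or ambiguous spans, which is presumably why the thesis also evaluates maximum entropy and a merge of the two techniques.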
14

Recuperação de imagens digitais com base na distribuição de características de baixo nível em partições do domínio utilizando índice invertido

Proença, Patrícia Aparecida 29 March 2010 (has links)
Fundação de Amparo à Pesquisa do Estado de Minas Gerais / The main goal of an image retrieval system is to obtain images from a collection that meet a need of the user. To achieve this, image retrieval systems generally compute the similarity between the user's need, represented by a query, and representations of the images in the collection. This goal is difficult to reach due to the subjectivity of the concept of similarity between images, since the same image can be interpreted in different ways by different people. To address this problem, content-based image retrieval systems exploit the low-level characteristics of color, shape and texture when computing the similarity between images. One problem with this approach is that most systems compute similarity by comparing the query image against every image in the collection, making processing costly and slow. By indexing low-level characteristics of partitions of digital images mapped to an inverted index, this work seeks improvements in query processing performance and gains in precision over the set of images retrieved from large databases. We use an approach based on an inverted index, adapted here to partitioned images: the concept of a term from textual retrieval, the main element of indexing, is used as a characteristic of image partitions for indexing. Experiments show gains in precision using two collections of digital images. / Master's degree in Computer Science
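The idea of treating partition features as textual "terms" can be sketched as follows: each partition's low-level feature is quantized into a discrete label, and an inverted index maps labels to image ids, so a query only touches images sharing at least one label instead of scanning the whole collection. The 2x2 partition grid, mean-intensity feature and 4-level quantization are assumptions for illustration, not the thesis's actual descriptors:

```python
from collections import Counter, defaultdict

def partition_terms(pixels, bins=4):
    """pixels: mean intensities in [0, 256) for a fixed partition grid.
    Each partition yields one discrete term: position + quantized level."""
    terms = []
    for pos, value in enumerate(pixels):
        level = value * bins // 256          # quantize the intensity
        terms.append(f"p{pos}_q{level}")     # e.g. "p0_q3"
    return terms

def build_index(images):
    """Inverted index: partition term -> set of image ids."""
    index = defaultdict(set)
    for img_id, pixels in images.items():
        for term in partition_terms(pixels):
            index[term].add(img_id)
    return index

def query(index, pixels):
    """Rank images by how many partition terms they share with the query;
    only images reachable through the index are ever scored."""
    votes = Counter()
    for term in partition_terms(pixels):
        for img_id in index.get(term, ()):
            votes[img_id] += 1
    return [img_id for img_id, _ in votes.most_common()]
```

The efficiency gain over whole-collection comparison comes from the lookup: images sharing no term with the query are never visited.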
15

Recherche multi-descripteurs dans les fonds photographiques numérisés / Multi-descriptor retrieval in digitalized photographs collections

Bhowmik, Neelanjan 07 November 2017 (has links)
Content-Based Image Retrieval (CBIR) is a discipline of computer science that aims at automatically structuring image collections according to visual criteria. The offered functionalities include efficient access to images in a large image database and identification of their content through object detection and recognition tools. They impact a large range of fields that manipulate this kind of data, such as multimedia, culture, security, health, scientific research, etc. Indexing an image from its visual content first requires producing a visual summary of that content for a given use, which will be the index of the image in the collection. The literature on image descriptors is now very rich: several families of descriptors exist, and within each family many approaches coexist. Since many descriptors do not describe the same information and do not have the same invariance properties, it can be relevant to combine some of them to better describe the image content. This combination can be implemented in different ways, according to the descriptors involved and the intended application. In this thesis, we focus on the family of local descriptors, with application to query-by-example image and object retrieval in a collection of images. Their good properties make them very popular for retrieval, recognition and categorization of objects and scenes. Two directions of research are investigated.
Feature combination applied to query-by-example image retrieval: the core of the thesis rests on a model for combining low-level, generic descriptors in order to obtain a descriptor that is richer and adapted to a given use case while remaining generic enough to index different types of visual content. Since the considered application is query-by-example, another major difficulty is the complexity of the proposal, which must deliver reduced retrieval times even on large datasets. To meet these goals, we propose an approach based on the fusion of inverted indices, which represents the content better while being associated with an efficient access method.
Complementarity of the descriptors: we focus on evaluating the complementarity of existing local descriptors by proposing statistical criteria for analysing their spatial distribution in the image. This work highlights a synergy between some of these techniques when they are judged sufficiently complementary. The spatial criteria are employed within a regression-based prediction model, which has the advantage of selecting suitable feature combinations not only globally for a dataset but, most importantly, for each image. The approach is evaluated within the fusion-of-inverted-indices search engine, where it shows its relevance and also highlights that the optimal combination of features may vary from one image to another.
Additionally, we exploit the two previous proposals to address the problem of cross-domain image retrieval, where images are matched across different domains, including multi-source and multi-date contents. Two applications of cross-domain matching are explored. First, cross-domain image retrieval is applied to the digitized cultural photographic collections of a museum, where it demonstrates its effectiveness for the exploration and promotion of these contents at different levels, from their archiving up to their exhibition in or ex situ. Second, we explore cross-domain image-based localization, where the pose of a landmark is estimated by retrieving geo-referenced images visually similar to the query image.
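Fusion of inverted indices can be sketched as a late-fusion scheme: each descriptor family gets its own inverted index of visual words, each index votes for the images sharing words with the query, and the per-index scores are combined. The visual words, the two toy indices and the weighted-sum combination below are invented for illustration and are not the thesis's actual fusion model:

```python
from collections import Counter

def score(index, query_words):
    """Per-image count of query visual words matched by one index."""
    votes = Counter()
    for word in query_words:
        for img_id in index.get(word, ()):
            votes[img_id] += 1
    return votes

def fuse(indices_and_queries, weights=None):
    """Late fusion: weighted sum of the scores produced by each
    (inverted index, query words) pair, highest first."""
    weights = weights or [1.0] * len(indices_and_queries)
    fused = Counter()
    for (index, query_words), w in zip(indices_and_queries, weights):
        for img_id, s in score(index, query_words).items():
            fused[img_id] += w * s
    return [img_id for img_id, _ in fused.most_common()]
```

The per-image descriptor selection described in the abstract would correspond, in this sketch, to choosing the weights (or the set of indices) per query image rather than once per dataset.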
17

[en] APPROXIMATE NEAREST NEIGHBOR SEARCH FOR THE KULLBACK-LEIBLER DIVERGENCE / [pt] BUSCA APROXIMADA DE VIZINHOS MAIS PRÓXIMOS PARA DIVERGÊNCIA DE KULLBACK-LEIBLER

19 March 2018 (has links)
In a number of applications, data points can be represented as probability distributions. For instance, documents can be represented as topic models, images can be represented as histograms, and music can also be represented as a probability distribution. In this work, we address the Approximate Nearest Neighbor problem where the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures such as the Bregman Ball Tree by posing the KL divergence as an inner product embedding. On the practical side, we investigated the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing. Experiments performed on 6 real-world datasets showed that the Inverted Index performs better than LSH and the Bregman Ball Tree in terms of queries per second and precision.
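A rough sketch of the inverted-index idea for distributions: index each distribution under the dimensions where it carries non-negligible mass, use the index to shortlist candidates sharing support with the query, then rank the shortlist by exact KL divergence. The `threshold` parameter and the candidate-then-rank structure are assumptions for illustration, not the thesis's exact scheme:

```python
import math
from collections import defaultdict

def kl_divergence(p, q, eps=1e-10):
    """KL(p || q) for dense discrete distributions of equal length;
    q entries are clamped by eps to avoid division by zero."""
    return sum(pi * math.log(pi / max(qi, eps)) for pi, qi in zip(p, q) if pi > 0)

def build_index(distributions, threshold=0.01):
    """Inverted index: dimension -> distributions with mass above threshold."""
    index = defaultdict(set)
    for dist_id, p in distributions.items():
        for dim, mass in enumerate(p):
            if mass > threshold:
                index[dim].add(dist_id)
    return index

def nearest(query, distributions, index, threshold=0.01):
    """Shortlist via the inverted index, then rank candidates by exact KL."""
    candidates = set()
    for dim, mass in enumerate(query):
        if mass > threshold:
            candidates |= index.get(dim, set())
    return min(candidates, key=lambda d: kl_divergence(query, distributions[d]))
```

For sparse distributions such as topic models, most distributions share support with a query on only a few dimensions, which is what makes the shortlist much smaller than the full collection.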
