11 |
Learning compact representations for large scale image search / Apprentissage de représentations compactes pour la recherche d'images à grande échelle
Jain, Himalaya, 04 June 2018
Cette thèse aborde le problème de la recherche d'images à grande échelle. Pour aborder la recherche d'images à grande échelle, il est nécessaire de coder les images avec des représentations compactes qui peuvent être efficacement utilisées pour comparer des images de manière significative. L'obtention d'une telle représentation compacte peut se faire soit en comprimant des représentations efficaces de grande dimension, soit en apprenant des représentations compactes de bout en bout. Le travail de cette thèse explore et avance dans ces deux directions. Dans notre première contribution, nous étendons les approches de quantification vectorielle structurée telles que la quantification de produit en proposant une représentation par somme pondérée de codewords. Nous testons et vérifions les avantages de notre approche pour la recherche approximative du plus proche voisin sur des caractéristiques d'image locales et globales, ce qui est un moyen important d'aborder la recherche d'images à grande échelle. L'apprentissage de représentations compactes pour la recherche d'images a récemment attiré beaucoup d'attention, avec diverses approches de hachage profond proposées. Dans de telles approches, des réseaux de neurones convolutifs profonds apprennent à coder des images en codes binaires compacts. Dans cette thèse, nous proposons une approche d'apprentissage supervisé profond pour la représentation binaire structurée, qui rappelle les approches de quantification vectorielle structurée telles que PQ. Notre approche bénéficie de la recherche asymétrique par rapport aux approches de hachage profond et apporte une nette amélioration de la précision de la recherche au même débit binaire. L'index inversé est une autre partie importante d'un système de recherche à grande échelle, en dehors de la représentation compacte. À cette fin, nous étendons nos idées d'apprentissage supervisé de représentations compactes à la construction d'index inversés. Dans ce travail, nous abordons l'indexation inversée par apprentissage profond supervisé et tentons d'unifier l'apprentissage de l'index inversé et de la représentation compacte. Nous évaluons minutieusement toutes les méthodes proposées sur divers ensembles de données accessibles au public. Nos méthodes surpassent l'état de l'art ou sont compétitives avec celui-ci. / This thesis addresses the problem of large-scale image search. Tackling image search at large scale requires encoding images with compact representations that can be efficiently employed to compare images meaningfully. Such a compact representation can be obtained either by compressing effective high-dimensional representations or by learning compact representations in an end-to-end manner. The work in this thesis explores and advances both of these directions. In our first contribution, we extend structured vector quantization approaches such as Product Quantization by proposing a weighted codeword-sum representation. We test and verify the benefits of our approach for approximate nearest neighbor search on local and global image features, an important building block of large-scale image search. Learning compact representations for image search has recently received a lot of attention, with various deep-hashing-based approaches being proposed. In such approaches, deep convolutional neural networks are trained to encode images into compact binary codes. In this thesis we propose a deep supervised learning approach for structured binary representation which is reminiscent of structured vector quantization approaches such as PQ. Unlike deep hashing approaches, our approach benefits from asymmetric search, which gives a clear improvement in search accuracy at the same bit rate. Apart from the compact representation, the inverted index is another important part of a large-scale search system. To this end, we extend our ideas on supervised compact representation learning to the building of inverted indexes. In this work we approach inverted indexing with supervised deep learning and attempt to unify the learning of the inverted index and the compact representation. We thoroughly evaluate all the proposed methods on various publicly available datasets. Our methods either outperform or are competitive with the state of the art.
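To make the quantization and asymmetric-search vocabulary above concrete, here is a minimal NumPy sketch of plain Product Quantization with asymmetric distance computation (ADC). It is a textbook baseline, not the weighted codeword-sum method proposed in the thesis, and the subvector/codebook sizes are arbitrary assumptions:

```python
import numpy as np

def train_pq(X, n_subvectors=4, n_codewords=64, n_iter=10, seed=0):
    """Train one small k-means codebook per subvector (a minimal PQ trainer)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1] // n_subvectors
    codebooks = []
    for m in range(n_subvectors):
        sub = X[:, m * d:(m + 1) * d]
        centers = sub[rng.choice(len(sub), n_codewords, replace=False)]
        for _ in range(n_iter):  # naive Lloyd iterations, enough for a sketch
            assign = ((sub[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
            for k in range(n_codewords):
                pts = sub[assign == k]
                if len(pts):
                    centers[k] = pts.mean(0)
        codebooks.append(centers)
    return codebooks

def encode(X, codebooks):
    """Compress each vector to one codeword index per subvector."""
    d = codebooks[0].shape[1]
    cols = []
    for m, c in enumerate(codebooks):
        sub = X[:, m * d:(m + 1) * d]
        cols.append(((sub[:, None, :] - c[None]) ** 2).sum(-1).argmin(1))
    return np.stack(cols, axis=1)

def adc_search(query, codes, codebooks):
    """Asymmetric distance computation: the query stays uncompressed and is
    compared to quantized database vectors via per-subvector lookup tables."""
    d = codebooks[0].shape[1]
    tables = [((query[m * d:(m + 1) * d] - c) ** 2).sum(-1)
              for m, c in enumerate(codebooks)]
    dists = sum(t[codes[:, m]] for m, t in enumerate(tables))
    return int(dists.argmin())

# Toy usage: a query should usually retrieve itself despite the compression.
X = np.random.default_rng(1).normal(size=(2000, 32))
books = train_pq(X)
print(adc_search(X[123], encode(X, books), books))  # typically 123
```

The asymmetry is what the abstract's accuracy claim rests on: only the database side is quantized, so no precision is lost on the query.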
|
12 |
Fast Algorithms for Nearest Neighbour Search
Kibriya, Ashraf Masood, January 2007
The nearest neighbour problem is of practical significance in a number of fields. Often we are interested in finding an object near to a given query object. The problem is old, and a large number of solutions have been proposed for it in the literature. However, even the most popular of the proposed techniques have not been compared against each other. Moreover, many techniques, including the old and popular ones, can be implemented in a number of ways, and often the different implementations of a technique have not been thoroughly compared either. This research presents a detailed investigation of different implementations of two popular nearest neighbour search data structures, KD-Trees and Metric Trees, and compares the different implementations of each of the two structures against each other. The best implementations of these structures are then compared against each other and against two further techniques, the Annulus Method and Cover Trees. The Annulus Method is an old technique that was rediscovered during the research for this thesis. Cover Trees are one of the most novel and promising data structures for nearest neighbour search proposed in the literature.
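For readers new to the structures being compared, the following Python sketch shows a bare-bones KD-Tree build and nearest-neighbour query. It is the textbook formulation, with none of the implementation variants the thesis investigates:

```python
import numpy as np

class KDNode:
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    """Split on the median along one coordinate axis per level."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def nearest(node, query, best=None, best_d=np.inf):
    """Depth-first descent, pruning the far subtree whenever the splitting
    plane is further away than the best squared distance found so far."""
    if node is None:
        return best, best_d
    d = ((node.point - query) ** 2).sum()
    if d < best_d:
        best, best_d = node.point, d
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best, best_d = nearest(near, query, best, best_d)
    if diff * diff < best_d:          # the far side may still hold a closer point
        best, best_d = nearest(far, query, best, best_d)
    return best, best_d

# Toy usage: agrees with brute force on random data.
pts = np.random.default_rng(0).random((1000, 3))
q = np.random.default_rng(1).random(3)
p, d2 = nearest(build(pts), q)
assert np.isclose(d2, ((pts - q) ** 2).sum(1).min())
```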
|
13 |
Soluções aproximadas para algoritmos escaláveis de mineração de dados em domínios de dados complexos usando GPGPU / On approximate solutions to scalable data mining algorithms for complex data problems using GPGPU
Mamani, Alexander Victor Ocsa, 22 September 2011
A crescente disponibilidade de dados em diferentes domínios tem motivado o desenvolvimento de técnicas para descoberta de conhecimento em grandes volumes de dados complexos. Trabalhos recentes mostram que a busca em dados complexos é um campo de pesquisa importante, já que muitas tarefas de mineração de dados, como classificação, detecção de agrupamentos e descoberta de motifs, dependem de algoritmos de busca ao vizinho mais próximo. Para resolver o problema da busca dos vizinhos mais próximos em domínios complexos, muitas abordagens determinísticas têm sido propostas com o objetivo de reduzir os efeitos da maldição da alta dimensionalidade. Por outro lado, algoritmos probabilísticos têm sido pouco explorados. Técnicas recentes relaxam a precisão dos resultados a fim de reduzir o custo computacional da busca. Além disso, em problemas de grande escala, uma solução aproximada com uma análise teórica sólida mostra-se mais adequada que uma solução exata com um modelo teórico fraco. Entretanto, apesar de muitas soluções exatas e aproximadas de busca e mineração terem sido propostas, o modelo de programação em CPU impõe restrições de desempenho para esses tipos de solução. Uma abordagem para melhorar o tempo de execução de técnicas de recuperação e mineração de dados em várias ordens de magnitude é empregar arquiteturas emergentes de programação paralela, como a arquitetura CUDA. Neste contexto, este trabalho apresenta uma proposta para buscas kNN de alto desempenho baseada numa técnica de hashing e implementações paralelas em CUDA. A técnica proposta é baseada no esquema LSH, ou seja, usam-se projeções em subespaços. O LSH é uma solução aproximada e tem a vantagem de permitir consultas de custo sublinear para dados em altas dimensões. Usando implementações massivamente paralelas, melhoram-se tarefas de mineração de dados. Especificamente, foram desenvolvidas soluções de alto desempenho para algoritmos de descoberta de motifs baseados em implementações paralelas de consultas kNN. As implementações massivamente paralelas em CUDA permitem executar estudos experimentais sobre grandes conjuntos de dados reais e sintéticos. A avaliação de desempenho realizada neste trabalho usando uma GPU GeForce GTX 470 resultou em um aumento de desempenho de até 7 vezes, em média, sobre o estado da arte em buscas por similaridade e descoberta de motifs. / The increasing availability of data in diverse domains has created a necessity to develop techniques and methods to discover knowledge from huge volumes of complex data, motivating many research works in the databases, data mining and information retrieval communities. Recent studies have suggested that searching in complex data is an important research field, since many data mining tasks such as classification, clustering and motif discovery depend on nearest neighbor search algorithms. Thus, many deterministic approaches have been proposed to solve the nearest neighbor search problem in complex domains, aiming to reduce the effects of the well-known curse of dimensionality. On the other hand, probabilistic algorithms have been only slightly explored. Recent techniques reduce the computational cost by relaxing the quality of the query results. Moreover, in large-scale problems, an approximate solution with a solid theoretical analysis is often more appropriate than an exact solution with a weak theoretical model. However, even though several exact and approximate solutions have been proposed, single-CPU architectures impose limits on the performance of such solutions. One approach to improving the runtime of data mining and information retrieval techniques by orders of magnitude is to employ emerging many-core architectures such as CUDA-enabled GPUs. In this work we present a massively parallel kNN query algorithm based on hashing and a CUDA implementation. Our method, based on the LSH scheme, is an approximate method which queries high-dimensional datasets in sub-linear computational time. Using the massively parallel implementation, we improve data mining tasks; specifically, we create solutions for (soft) real-time time-series motif discovery. Experimental studies on large real and synthetic datasets were carried out thanks to the highly parallel CUDA implementation. Our performance evaluation on a GeForce GTX 470 GPU resulted in average runtime speedups of up to 7x over the state of the art in similarity search and motif discovery.
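As an illustration of the LSH scheme discussed above, here is a serial Python stand-in for the thesis's CUDA kernels, using a simple sign-random-projection variant; the table and bit counts are made-up parameters:

```python
import numpy as np
from collections import defaultdict

class LSHIndex:
    """Multi-table sign-random-projection LSH with exact re-ranking."""
    def __init__(self, dim, n_tables=8, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_tables, n_bits, dim))
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def _keys(self, x):
        # One bit per random hyperplane: which side of the plane x lies on.
        return [tuple(row) for row in (self.planes @ x > 0)]

    def add(self, i, x):
        for table, key in zip(self.tables, self._keys(x)):
            table[key].append(i)

    def query(self, x, data, k=5):
        # Candidates = union of colliding buckets, then exact re-ranking.
        cand = sorted({i for table, key in zip(self.tables, self._keys(x))
                       for i in table[key]})
        if not cand:
            return []
        d = ((data[cand] - x) ** 2).sum(1)
        return [cand[j] for j in d.argsort()[:k]]

# Toy usage: the query point should come back as its own nearest neighbour.
data = np.random.default_rng(1).normal(size=(5000, 64))
index = LSHIndex(dim=64)
for i, x in enumerate(data):
    index.add(i, x)
print(index.query(data[42], data, k=3))  # 42 should head the list
```

The sub-linear behaviour comes from probing only the colliding buckets instead of scanning the whole dataset; the GPU versions in the thesis parallelize the hashing and re-ranking steps across thousands of threads.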
|
14 |
Machine learning techniques for content-based information retrieval / Méthodes d’apprentissage automatique pour la recherche par le contenu de l’information
Chafik, Sanaa, 22 December 2017
Avec l’évolution des technologies numériques et la prolifération d'Internet, la quantité d’information numérique a considérablement augmenté. La recherche par similarité (ou recherche des plus proches voisins) est une problématique que plusieurs communautés de recherche ont tenté de résoudre. Les systèmes de recherche par le contenu de l’information constituent l’une des solutions prometteuses à ce problème. Ces systèmes sont composés essentiellement de trois unités fondamentales : une unité de représentation des données pour l’extraction des primitives, une unité d’indexation multidimensionnelle pour la structuration de l’espace des primitives, et une unité de recherche des plus proches voisins pour la recherche des informations similaires. L’information (image, texte, audio, vidéo) peut être représentée par un vecteur multidimensionnel décrivant le contenu global des données d’entrée. La deuxième unité consiste à structurer l’espace des primitives dans une structure d’index, où la troisième unité (la recherche par similarité) est effectuée. Dans nos travaux de recherche, nous proposons trois systèmes de recherche par le contenu des plus proches voisins. Les trois approches sont non supervisées, et donc adaptées aux données étiquetées et non étiquetées. Elles sont basées sur le concept du hachage pour une recherche multidimensionnelle efficace des plus proches voisins. Contrairement aux approches de hachage existantes, qui sont binaires, les approches proposées fournissent des structures d’index avec un hachage réel. Bien que les approches de hachage binaire fournissent un bon compromis qualité-temps de calcul, leurs performances en termes de qualité (précision) se dégradent en raison de la perte d’information lors du processus de binarisation. À l'opposé, les approches de hachage réel fournissent une bonne qualité de recherche avec une meilleure approximation de l’espace d’origine, mais induisent en général un surcoût en temps de calcul. Ce dernier problème est abordé dans la troisième contribution. Les approches proposées se classent en deux catégories, superficielle et profonde. Dans la première catégorie, on propose deux techniques de hachage superficiel, intitulées Symmetries of the Cube Locality Sensitive Hashing (SC-LSH) et Cluster-based Data Oriented Hashing (CDOH), fondées respectivement sur le hachage aléatoire et l’apprentissage statistique superficiel. SC-LSH propose une solution au problème de l’espace mémoire rencontré par la plupart des approches de hachage aléatoire, en considérant un hachage semi-aléatoire réduisant partiellement l’effet aléatoire, et donc l’espace mémoire, de ces dernières, tout en préservant leur efficacité pour la structuration des espaces hétérogènes. La seconde technique, CDOH, propose d’éliminer l’effet aléatoire en combinant des techniques d’apprentissage non supervisé avec le concept de hachage. CDOH fournit de meilleures performances en temps de calcul, en espace mémoire et en qualité de recherche. La troisième contribution est une approche de hachage basée sur les réseaux de neurones profonds appelée Unsupervised Deep Neuron-per-Neuron Hashing (UDN2H). UDN2H propose une indexation individuelle de la sortie de chaque neurone de la couche centrale d’un modèle non supervisé, en l'occurrence un auto-encodeur profond capturant la structure individuelle de haut niveau de la sortie de chaque neurone. Nos trois approches, SC-LSH, CDOH et UDN2H, ont été proposées séquentiellement au cours de cette thèse, avec un niveau croissant de complexité des modèles développés et de qualité de recherche obtenue sur de grandes bases de données. / The amount of media data is growing rapidly with the fast growth of the Internet and media resources. Performing an efficient similarity (nearest neighbor) search in such large collections of data is a very challenging problem that the scientific community has been attempting to tackle. One of the most promising solutions to this fundamental problem is Content-Based Media Retrieval (CBMR) systems, i.e. search systems that perform the retrieval task in large media databases based on the content of the data. CBMR systems consist essentially of three major units: a Data Representation unit for feature representation learning, a Multidimensional Indexing unit for structuring the resulting feature space, and a Nearest Neighbor Search unit to perform efficient search. Media data (image, text, audio, video, etc.) can be represented by meaningful numeric information (a multidimensional vector), called a feature descriptor, describing the overall content of the input data. The task of the second unit is to structure the resulting feature descriptor space into an index structure, in which the third unit, effective nearest neighbor search, is performed. In this work, we address the problem of nearest neighbor search by proposing three Content-Based Media Retrieval approaches. Our three approaches are unsupervised, and can thus adapt to both labeled and unlabeled real-world datasets. They are based on a hashing indexing scheme to perform effective high-dimensional nearest neighbor search. Unlike most existing hashing approaches, which favor indexing in Hamming space, our proposed methods provide index structures adapted to a real-space mapping. Although Hamming-based hashing methods achieve a good accuracy-speed tradeoff, their accuracy drops owing to information loss during the binarization process. By contrast, real-space hashing approaches provide a more accurate approximation of the mapped real space, as they avoid hard binary approximations. Our proposed approaches can be classified into shallow and deep approaches. In the former category, we propose two shallow hashing-based approaches, namely "Symmetries of the Cube Locality Sensitive Hashing" (SC-LSH) and "Cluster-based Data Oriented Hashing" (CDOH), based respectively on randomized-hashing and shallow learning-to-hash schemes. The SC-LSH method provides a solution to the storage problem faced by most randomized hashing approaches. It consists of a semi-random scheme that partially reduces the randomness effect of randomized hashing approaches, and thus the memory storage problem, while maintaining their efficiency in structuring heterogeneous spaces. The CDOH approach eliminates the randomness effect by combining machine learning techniques with the hashing concept, and outperforms the randomized hashing approaches in terms of computation time, memory usage and search accuracy. The third approach is a deep-learning-based hashing scheme, named "Unsupervised Deep Neuron-per-Neuron Hashing" (UDN2H). The UDN2H approach indexes individually the output of each neuron of the top layer of a deep unsupervised model, namely a Deep Autoencoder, with the aim of capturing the high-level individual structure of each neuron output. Our three approaches, SC-LSH, CDOH and UDN2H, were proposed sequentially as the thesis progressed, with an increasing level of complexity in terms of the developed models and of the effectiveness and performance obtained on large real-world datasets.
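The accuracy loss from binarization that motivates the real-valued hashing above can be seen in a toy experiment: hash synthetic data with one shared random projection, keep either the real-valued projections or just their sign bits, and measure how well each ranking recovers the true nearest neighbours. The data sizes and code lengths below are arbitrary assumptions, and SC-LSH/CDOH themselves are considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(2000, 128))
query = rng.normal(size=128)
W = rng.normal(size=(128, 32))          # one shared random projection

real_codes = data @ W                   # real-valued codes (kept by real-space hashing)
bin_codes = real_codes > 0              # Hamming-space codes (after binarization)
true_nn = set(((data - query) ** 2).sum(1).argsort()[:10])

rank_real = ((real_codes - query @ W) ** 2).sum(1).argsort()
rank_bin = (bin_codes != (query @ W > 0)).sum(1).argsort()

# Recall@100 of the true top-10 under each code type; the real-valued
# codes typically preserve more neighbourhood structure than the sign bits.
for name, rank in [("real-valued", rank_real), ("binary", rank_bin)]:
    print(name, len(true_nn & set(rank[:100])) / 10)
```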
|
15 |
[en] APPROXIMATE NEAREST NEIGHBOR SEARCH FOR THE KULLBACK-LEIBLER DIVERGENCE / [pt] BUSCA APROXIMADA DE VIZINHOS MAIS PRÓXIMOS PARA DIVERGÊNCIA DE KULLBACK-LEIBLER
19 March 2018
[pt] Em uma série de aplicações, os pontos de dados podem ser representados como distribuições de probabilidade. Por exemplo, documentos podem ser representados como modelos de tópicos, imagens podem ser representadas como histogramas e também música pode ser representada como uma distribuição de probabilidade. Neste trabalho, abordamos o problema do Vizinho Mais Próximo Aproximado onde os pontos são distribuições de probabilidade e a função de distância é a divergência de Kullback-Leibler (KL). Mostramos como acelerar, em teoria, estruturas de dados existentes, como a Bregman Ball Tree, colocando a divergência KL como um produto interno. No lado prático, investigamos o uso de duas técnicas de indexação muito populares: Índice Invertido e Locality Sensitive Hashing. Os experimentos realizados em 6 conjuntos de dados do mundo real mostraram que o Índice Invertido é melhor do que LSH e Bregman Ball Tree em termos de consultas por segundo e precisão. / [en] In a number of applications, data points can be represented as probability distributions. For instance, documents can be represented as topic models, images can be represented as histograms, and music, too, can be represented as a probability distribution. In this work, we address the Approximate Nearest Neighbor problem where the points are probability distributions and the distance function is the Kullback-Leibler (KL) divergence. We show how to accelerate existing data structures such as the Bregman Ball Tree by posing the KL divergence as an inner-product embedding. On the practical side, we investigate the use of two very popular indexing techniques: the Inverted Index and Locality Sensitive Hashing. Experiments performed on 6 real-world datasets showed that the Inverted Index performs better than LSH and the Bregman Ball Tree in terms of queries per second and precision.
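The inner-product reduction mentioned in the abstract can be checked in a few lines: since KL(q||p) = Σᵢ qᵢ log qᵢ − ⟨q, log p⟩ and the first term does not depend on p, the KL-nearest database distribution is exactly the maximizer of ⟨q, log p⟩, which is what lets inner-product machinery such as an inverted index be reused. A minimal sketch on synthetic Dirichlet data (strictly positive, so no smoothing for zero probabilities is needed):

```python
import numpy as np

def kl(q, p):
    """KL(q || p) for strictly positive distributions."""
    return (q * (np.log(q) - np.log(p))).sum()

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(50), size=1000)   # database of distributions
q = rng.dirichlet(np.ones(50))              # query distribution

# Nearest neighbour under KL == maximum inner product against log P.
best = (np.log(P) @ q).argmax()
assert best == min(range(len(P)), key=lambda i: kl(q, P[i]))
print("KL-nearest distribution:", best)
```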
|
16 |
Spatially Resolved Hydration Statistical Mechanics at Biomolecular Surfaces from Atomistic Simulations
Heinz, Leonard, 13 December 2021
No description available.
|
17 |
[en] AUTOMATED SYNTHESIS OF OPTIMAL DECISION TREES FOR SMALL COMBINATORIAL OPTIMIZATION PROBLEMS / [pt] SÍNTESE AUTOMATIZADA DE ÁRVORES DE DECISÃO ÓTIMAS PARA PEQUENOS PROBLEMAS DE OTIMIZAÇÃO COMBINATÓRIA
CLEBER OLIVEIRA DAMASCENO, 24 August 2021
[pt] A análise de complexidade clássica para problemas NP-difíceis é geralmente orientada para cenários de pior caso, considerando apenas o comportamento assintótico. No entanto, existem algoritmos práticos com execução em um tempo razoável para muitos problemas clássicos. Além disso, há evidências que apontam para algoritmos polinomiais no modelo de árvore de decisão linear para resolver esses problemas, embora não muito explorados. Neste trabalho, exploramos esses resultados teóricos anteriores. Mostramos que a solução ótima para problemas combinatórios 0-1 pode ser encontrada reduzindo esses problemas para uma Busca por Vizinho Mais Próximo sobre o conjunto de vértices de Voronoi correspondentes. Utilizamos os hiperplanos que delimitam essas regiões para gerar sistematicamente uma árvore de decisão que repetidamente divide o espaço até que possa separar todas as soluções, garantindo uma resposta ótima. Fazemos experimentos para testar os limites de tamanho para os quais podemos construir essas árvores para os casos do 0-1 knapsack, weighted minimum cut e symmetric traveling salesman. Conseguimos encontrar as árvores desses problemas com tamanhos até 10, 5 e 6, respectivamente. Obtemos também as relações de adjacência completas para os esqueletos dos politopos do knapsack e do traveling salesman até os tamanhos 10 e 7. Nossa abordagem supera consistentemente o método de enumeração e os métodos baseline para o weighted minimum cut e symmetric traveling salesman, fornecendo soluções ótimas em microssegundos. / [en] Classical complexity analysis for NP-hard problems is usually oriented to worst-case scenarios, considering only the asymptotic behavior. However, there are practical algorithms running in a reasonable time for many classic problems. Furthermore, there is evidence pointing towards polynomial algorithms in the linear decision tree model to solve these problems, although not explored much. In this work, we explore previous theoretical results. We show that the optimal solution for 0-1 combinatorial problems can be found by reducing these problems into a Nearest Neighbor Search over the set of corresponding Voronoi vertices. We use the hyperplanes delimiting these regions to systematically generate a decision tree that repeatedly splits the space until it can separate all solutions, guaranteeing an optimal answer. We run experiments to test the size limits for which we can build these trees for the cases of the 0-1 knapsack, weighted minimum cut, and symmetric traveling salesman. We manage to find the trees of these problems with sizes up to 10, 5, and 6, respectively. We also obtain the complete adjacency relations for the skeletons of the knapsack and traveling salesman polytopes up to size 10 and 7. Our approach consistently outperforms the enumeration method and the baseline methods for the weighted minimum cut and symmetric traveling salesman, providing optimal solutions within microseconds.
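The reduction from 0-1 optimization to nearest-neighbour search can be illustrated with the standard equal-norm lifting on a toy knapsack. Note the caveat: this sketch enumerates the feasible vertices by brute force and uses the generic MIPS-to-NN lifting, whereas the thesis instead precomputes a decision tree from the hyperplanes bounding the Voronoi regions of those vertices:

```python
import numpy as np
from itertools import product

# Toy 0-1 knapsack: maximise c.x subject to w.x <= W.
c = np.array([6.0, 5.0, 4.0, 3.0])
w = np.array([4.0, 3.0, 2.0, 1.0])
W = 6.0

# Feasible 0-1 vertices (brute force is fine at this size).
verts = np.array([x for x in product([0.0, 1.0], repeat=len(c))
                  if np.dot(x, w) <= W])

# Equal-norm lifting: pad every vertex so all lifted vertices share one
# norm; then the Euclidean nearest neighbour of (c, 0) is the maximiser
# of <c, x>, i.e. the optimal solution.
m = (verts ** 2).sum(1).max()
lifted = np.hstack([verts, np.sqrt(m - (verts ** 2).sum(1, keepdims=True))])
q = np.append(c, 0.0)

best = ((lifted - q) ** 2).sum(1).argmin()
assert np.dot(verts[best], c) == max(np.dot(v, c) for v in verts)
print("optimal selection:", verts[best])   # expect items 1, 2 and 3
```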
|
18 |
Result Diversification on Spatial, Multidimensional, Opinion, and Bibliographic Data
Kucuktunc, Onur, 01 October 2013
No description available.
|
19 |
Détection robuste de jonctions et points d'intérêt dans les images et indexation rapide de caractéristiques dans un espace de grande dimension / Robust junction detection for line-drawing images and time-efficient feature indexing in feature vector space
Pham, The Anh, 27 November 2013
Les caractéristiques locales sont essentielles dans de nombreux domaines de l’analyse d’images, comme la détection et la reconnaissance d’objets, la recherche d’images, etc. Ces dernières années, plusieurs détecteurs dits locaux ont été proposés pour extraire de telles caractéristiques. Ces détecteurs locaux fonctionnent généralement bien pour certaines applications, mais pas pour toutes. Prenons, par exemple, une application de recherche dans une vaste base d’images. Dans ce cas, un détecteur à base de caractéristiques binaires pourrait être préféré à un autre exploitant des valeurs réelles. En effet, la précision des résultats de recherche pourrait être moins bonne tout en restant raisonnable, mais probablement avec un temps de réponse beaucoup plus court. En général, les détecteurs locaux sont utilisés en combinaison avec une méthode d’indexation. Celle-ci devient nécessaire dès que les ensembles de points traités sont composés de milliards de points, chaque point étant représenté par un vecteur de caractéristiques de grande dimension. / Local features are of central importance in many areas of image analysis and understanding, including image registration, object detection and recognition, image retrieval, etc. Over the years, many local detectors have been proposed to extract such features. A given local detector usually works well for some applications but not all. Taking image retrieval in a large database as an example, an efficient method for detecting binary features may be preferable to real-valued feature detection methods. The reason is easily seen: the retrieval precision only needs to be reasonable, while the response time must be as fast as possible. Generally, local features are used in combination with an indexing scheme. This becomes essential when the dataset is composed of billions of data points, each of which lies in a high-dimensional feature vector space.
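As a point of reference for the indexing problem described above, here is the brute-force Hamming matcher for binary descriptors that any indexing scheme must beat; the descriptor length and dataset size are arbitrary, and this is not the indexing method proposed in the thesis:

```python
import numpy as np

def hamming_match(queries, database):
    """Brute-force matcher for binary descriptors packed as uint8 rows."""
    # XOR exposes the differing bits; a per-byte popcount table sums them.
    popcount = np.array([bin(v).count("1") for v in range(256)], dtype=np.uint8)
    dists = popcount[queries[:, None, :] ^ database[None, :, :]].sum(-1)
    return dists.argmin(1), dists.min(1)

# Toy usage: noisy copies of database entries should match their originals.
rng = np.random.default_rng(0)
db = rng.integers(0, 256, size=(1000, 32), dtype=np.uint8)   # 256-bit codes
noise = rng.integers(0, 2, size=(5, 32), dtype=np.uint8)     # flips <=1 bit/byte
idx, d = hamming_match(db[:5] ^ noise, db)
print(idx, d)   # expect indices 0..4 with small distances
```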
|