211

Iterated Stretching, Extensional Rheology and Formation of Beads-on-a-String Structures in Polymer Solutions

Oliveira, Monica S. N., Yeh, Roger, McKinley, Gareth H. 01 December 2005 (has links)
The transient extensional rheology and the dynamics of elastocapillary thinning in aqueous solutions of polyethylene oxide (PEO) are studied with high-speed digital video microscopy. At long times, the evolution of the thread radius deviates from self-similar exponential decay, and competition between elastic, capillary and inertial forces leads to the formation of a periodic array of beads connected by axially-uniform ligaments. This configuration is unstable, and successive instabilities propagate from the necks connecting the beads and ligaments. This iterated process results in multiple generations of beads developing along the string, in general agreement with the predictions of Chang et al. [Phys Fluids, 11, 1717 (1999)], although the experiments yield a different recursion relation between the successive generations of beads. At long times, finite extensibility truncates the iterated instability, and slow axial translation of the bead arrays along the interconnecting threads leads to progressive coalescence before the ultimate rupture of the fluid column. Despite these dynamical complexities, it is still possible to measure the steady growth in the transient extensional viscosity by monitoring the slow capillary-driven thinning in the cylindrical ligaments between beads. / Accepted for publication in JNNFM, December 2005. / NASA and the Portuguese Science Foundation
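As context for the final step above, the apparent extensional viscosity is conventionally extracted from the measured thinning rate of the cylindrical ligament. A minimal sketch of that calculation follows (the array names, surface-tension input and the simple force balance, which neglects finite-filament corrections, are our assumptions, not the authors' code), together with the log-linear fit that yields the relaxation time in the elastocapillary regime:

```python
import numpy as np

def apparent_extensional_viscosity(t, R_mid, sigma):
    """Apparent extensional viscosity from capillary-driven thinning.

    t     : time samples (s)
    R_mid : midpoint radius of the thinning ligament (m)
    sigma : surface tension of the solution (N/m)

    Uses eta_app = -sigma / (dD_mid/dt) with D_mid = 2*R_mid, the
    standard relation for a slender cylindrical fluid thread.
    """
    D_mid = 2.0 * R_mid
    dDdt = np.gradient(D_mid, t)      # numerical time derivative
    return -sigma / dDdt

def relaxation_time(t, R_mid):
    """In the elastocapillary regime R(t) ~ R0 * exp(-t / (3*lambda)),
    so a log-linear fit of the radius decay gives the relaxation time."""
    slope, _ = np.polyfit(t, np.log(R_mid), 1)
    return -1.0 / (3.0 * slope)
```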
212

Detecção, gerenciamento e consulta a réplicas e a versões de documentos XML / Detection, management and querying of replicas and versions of XML documents

Saccol, Deise de Brum January 2008 (has links)
O objetivo geral desta tese é a detecção, o gerenciamento e a consulta às réplicas e às versões de documentos XML. Denota-se por réplica uma cópia idêntica de um objeto do mundo real, enquanto versão é uma representação diferente, mas muito similar, deste objeto. Trabalhos prévios focam em gerenciamento e consulta a versões conhecidas, e não no problema da detecção de que dois ou mais objetos, aparentemente distintos, são variações (versões) do mesmo objeto. No entanto, o problema da detecção é crítico e pode ser observado em diversos cenários, tais como detecção de plágio, ranking de páginas Web, identificação de clones de software e busca em sistemas peer-to-peer (P2P). Nesta tese assume-se que podem existir diversas réplicas de um documento XML. Documentos XML também podem ser modificados ao longo do tempo, ocasionando o surgimento de versões. A detecção de réplicas é relativamente simples e pode ser feita através do uso de funções hash. Já a detecção de versões engloba conceitos de similaridade, a qual pode ser medida por várias métricas, tais como similaridade de conteúdo, de estrutura, de assunto, etc. Além da análise da similaridade entre os arquivos também se faz necessária a definição de um mecanismo de detecção de versões. O mecanismo deve possibilitar o gerenciamento e a posterior consulta às réplicas e às versões detectadas. Para que o objetivo da tese fosse alcançado foram definidos um conjunto de funções de similaridade para arquivos XML e o mecanismo de detecção de réplicas e de versões. Também foi especificado um framework onde tal mecanismo pode ser inserido e os seus respectivos componentes, que possibilitam o gerenciamento e a consulta às réplicas e às versões detectadas. Foi realizado um conjunto de experimentos que validam o mecanismo proposto juntamente com a implementação de protótipos que demonstram a eficácia dos componentes do framework. Como diferencial desta tese, o problema de detecção de versões é tratado como um problema de classificação, para o qual o uso de limiares não é necessário. Esta abordagem é alcançada pelo uso da técnica baseada em classificadores Naïve Bayesianos. Resultados demonstram a boa qualidade obtida com o mecanismo proposto na tese. / The overall goals of this thesis are the detection, management and querying of replicas and versions of XML documents. We denote by replica an identical copy of a real-world object, and by version a different but very similar representation of this object. Previous works focus on version management and querying rather than version detection. However, the version detection problem is critical in many scenarios, such as plagiarism detection, Web page ranking, software clone identification, and peer-to-peer (P2P) searching. In this thesis, we assume the existence of several replicas of an XML document. XML documents can be modified over time, causing the creation of versions. Replica detection is relatively simple and can be achieved by using hash functions. Version detection, in turn, relies on similarity concepts, which can be assessed by metrics such as content similarity, structure similarity, and subject similarity. Besides the similarity analysis among files, it is also necessary to define a version detection mechanism. The mechanism should allow the management and the querying of the detected replicas and versions.
In order to achieve the goals of the thesis, we defined a set of similarity functions for XML files and a replica and version detection mechanism, and we specified a framework in which this mechanism can be embedded, together with the components that allow managing and querying the detected replicas and versions. We performed a set of experiments to evaluate the proposed mechanism and implemented tool prototypes that demonstrate the accuracy of some framework components. As the main distinguishing point, this thesis treats the version detection problem as a classification problem, for which the use of thresholds is not necessary. This approach is achieved by using Naïve Bayesian classifiers.
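The two-step idea of this abstract — hash functions for replicas, threshold-free classification over similarity features for versions — can be illustrated with a short sketch; the file handling, the three similarity features and the use of scikit-learn's GaussianNB are illustrative assumptions, not the thesis implementation:

```python
import hashlib
from sklearn.naive_bayes import GaussianNB

def is_replica(path_a, path_b):
    """Replica detection: identical byte content implies identical hash."""
    digest = lambda p: hashlib.sha256(open(p, 'rb').read()).hexdigest()
    return digest(path_a) == digest(path_b)

# Version detection as classification: each candidate pair is described
# by similarity features (content, structure, subject, ...) and a Naive
# Bayes classifier decides "version" vs "unrelated" -- no hand-tuned
# thresholds needed. The training values below are made up.
pairs  = [[0.91, 0.85, 0.70],   # [content, structure, subject] similarity
          [0.15, 0.30, 0.10],
          [0.80, 0.95, 0.60],
          [0.05, 0.20, 0.25]]
labels = [1, 0, 1, 0]           # 1 = versions of the same document

clf = GaussianNB().fit(pairs, labels)
print(clf.predict([[0.88, 0.90, 0.55]]))   # -> [1], classified as a version
```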
213

A study of the CDGSH protein family: biophysical and bioinformatic analysis of the [2Fe-2S] cluster protein mitoNEET

Bak, Daniel 18 March 2016 (has links)
Iron-sulfur clusters, an important class of redox-active cofactors, are ligated by protein-based Cys ligands in a variety of nuclearities. Traditionally, these clusters serve as one-electron transfer units, though many are capable of catalytic activity and sensing functions. Recently, a greater number of iron-sulfur clusters with non-Cys ligation have been identified, wherein one or more of the Cys ligands are replaced by an alternative amino acid residue such as His or Asp. In most cases the role of this ligand substitution is unknown. Some hypotheses are that non-Cys ligation may modify the reduction potential, allow for proton-coupled electron transfer, or modulate cluster stability. The human mitoNEET protein contains a 1-His, 3-Cys ligated [2Fe-2S] cluster, identified by the presence of a CDGSH peptide motif. MitoNEET is a binding target for the type-II diabetes drug pioglitazone and is implicated in controlling mitochondrial iron levels. How exactly mitoNEET functions in the cell is unknown, as is the role its uniquely ligated FeS cluster may play. This thesis uses mitoNEET as a model for the study of non-Cys ligated FeS clusters and their biological function. Protein film voltammetry was used to examine the pH-dependent electrochemical properties of the mitoNEET cluster, indicating that multiple, as-yet-unidentified protonation events control the redox potential and that drug binding impacts cluster reduction and protonation. Additionally, the effect of reduction and protonation on the stability of the cluster and of the protein structure was examined through absorbance and circular dichroism measurements, suggesting an important role for cluster lability in protein function. The CDGSH-motif family of [2Fe-2S] cluster-binding proteins was examined using protein similarity networks. This technique highlights the evolutionary relationships among these proteins, and has led to further work examining the DUF1271-domain-containing proteins E. coli YjdI and A. vinosum Alvin0680 (a CDGSH-DUF1271 fusion). This work furthers scientific knowledge of non-Cys ligated Fe-S clusters by improving our understanding of how the mitoNEET His ligand contributes to proton-coupled electron transfer and cluster instability, and of how the broader class of CDGSH-motif proteins is organized.
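As generic background for the pH-dependent electrochemistry discussed above (textbook electrochemistry, not a result of this thesis): for a redox couple that binds m protons per n electrons, the Nernstian expectation is a midpoint-potential shift of about -59·m/n mV per pH unit at 25 °C. A toy calculation:

```python
R, F, T = 8.314, 96485.0, 298.15   # gas constant, Faraday constant, 25 C

def nernstian_slope(m_protons, n_electrons):
    """Expected dE_m/dpH in volts for m protons coupled to n electrons."""
    return -2.303 * R * T / F * m_protons / n_electrons

print(nernstian_slope(1, 1) * 1000)   # ~ -59 mV per pH unit
```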
215

Visual Analytics Tool for the Global Change Assessment Model

January 2015 (has links)
abstract: The Global Change Assessment Model (GCAM) is an integrated assessment tool for exploring consequences of and responses to global change. However, the current iteration of GCAM relies on NetCDF file outputs, which need to be exported for visualization and analysis purposes. Such a requirement limits the uptake of this modeling platform by analysts who may wish to explore future scenarios. This work has focused on a web-based geovisual analytics interface for GCAM. Challenges of this work include enabling both domain experts and model experts to explore the model functionally. Furthermore, scenario analysis has been widely applied in climate science to understand the impact of climate change on the future human environment. The inter-comparison of scenario analyses remains a major challenge in both the climate science and visualization communities. In close collaboration with the Global Change Assessment Model team, I developed the first visual analytics interface for GCAM with a series of interactive functions to help users understand the simulated impact of climate change on sectors of the global economy, and at the same time allow them to explore inter-comparison of scenario analyses with GCAM models. This tool implements a hierarchical clustering approach to allow inter-comparison and similarity analysis among multiple scenarios over space, time, and multiple attributes through a set of coordinated multiple views. After working with this tool, the scientists from the GCAM team agree that the geovisual analytics tool can facilitate scenario exploration and support gaining scientific insight from scenario comparison. To demonstrate my work, I present two case studies: one explores the potential impact that China's south-north water transportation project in the Yangtze River basin will have on projected water demands; the other demonstrates how the impact of spatial variations and scales on the similarity analysis of climate scenarios varies at world, continental, and country scales. / Dissertation/Thesis / Masters Thesis Computer Science 2015
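The hierarchical-clustering idea behind the scenario inter-comparison can be sketched in a few lines; the scenario vectors below are made-up stand-ins for GCAM outputs (e.g., regional water demand over time), not actual model data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# One row per scenario, flattened over space/time/attributes.
scenarios = np.array([
    [1.0, 1.2, 1.5, 1.9],   # reference pathway
    [1.0, 1.1, 1.4, 1.8],   # close to reference
    [1.0, 1.6, 2.4, 3.5],   # high-demand pathway
])

# Agglomerative clustering on pairwise Euclidean scenario distances.
Z = linkage(scenarios, method='average', metric='euclidean')
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)   # e.g. [1 1 2]: the first two scenarios grouped as similar
```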
216

Multi-Variate Time Series Similarity Measures and Their Robustness Against Temporal Asynchrony

January 2015 (has links)
abstract: The amount of time series data generated is increasing due to the integration of sensor technologies with everyday applications, such as gesture recognition, energy optimization, health care, and video surveillance. The use of multiple sensors simultaneously, to capture different aspects of real-world attributes, has also led to an increase in dimensionality from uni-variate to multi-variate time series. This has facilitated richer data representation but has also necessitated algorithms that determine the similarity between two multi-variate time series for search and analysis. Various algorithms have been extended from the uni-variate to the multi-variate case, such as multi-variate versions of Euclidean distance, edit distance, and dynamic time warping. However, how these algorithms account for asynchrony in time series has not been studied. Human gestures, for example, exhibit asynchrony in their patterns, as different subjects perform the same gesture with varying movements and at different speeds. In this thesis, we propose several algorithms (some of which also leverage metadata describing the relationships among the variates). In particular, we present several techniques that leverage the contextual relationships among the variates when measuring multi-variate time series similarities. Based on the way correlation is leveraged, various weighting mechanisms have been proposed that determine the importance of a dimension for discriminating between the time series, since giving the same weight to each dimension can lead to misclassification. We next study the robustness of the considered techniques against different temporal asynchronies, including shifts and stretching. Exhaustive experiments were carried out on datasets with multiple types and amounts of temporal asynchrony. It has been observed that the accuracy of algorithms that rely on the data to discover variate relationships can be low in the presence of temporal asynchrony, whereas algorithms that rely on external metadata tend to be more robust against asynchronous distortions. Specifically, algorithms using external metadata have better classification accuracy and cluster separation than existing state-of-the-art work, such as EROS, PCA, and naive dynamic time warping. / Dissertation/Thesis / Masters Thesis Computer Science 2015
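One of the ideas above — weighting the variates by externally supplied importance when comparing two multi-variate series — can be sketched as a weighted dynamic time warping; the weights and toy series below are invented, and metadata-derived weights would replace them in practice:

```python
import numpy as np

def weighted_dtw(A, B, w):
    """DTW distance between multi-variate series A (n x d) and B (m x d),
    with per-variate weights w (d,) expressing variate importance."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # weighted Euclidean ground distance between frames
            cost = np.sqrt(np.sum(w * (A[i - 1] - B[j - 1]) ** 2))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy 2-variate gesture-like series; the second is temporally stretched.
A = np.array([[0, 0], [1, 2], [2, 4], [3, 6]], dtype=float)
B = np.array([[0, 0], [0.5, 1], [1, 2], [2, 4], [3, 6]], dtype=float)
print(weighted_dtw(A, B, w=np.array([0.7, 0.3])))   # warping absorbs the stretch
```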
217

CodeReco - A Semantic Java Method Recommender

January 2017 (has links)
abstract: The increasing volume and complexity of software systems and the growing demand for programming skills call for efficient information retrieval techniques for source code documents. Programming-related information seeking is often challenging for users facing constraints in knowledge and experience. Source code documents contain multi-faceted semi-structured text, with different levels of semantic information like syntax, blueprints, interfaces, flow graphs, dependencies and design patterns. Matching user queries optimally across these levels is a major challenge for information retrieval systems. Code recommendations can help information seeking and retrieval by pro-actively sampling similar examples based on the user's context. These recommendations can be beneficial for improving learning via examples or improving code quality by sampling best practices or alternative implementations. In this thesis, an attempt is made to help programming-related information seeking processes via pro-active code recommendations, and information retrieval processes by extracting structural-semantic information from source code. I present CodeReco, a system that recommends semantically similar Java method samples. Conventional code recommendations found in integrated development environments are primarily driven by syntactical compliance and auto-completion, whereas CodeReco is driven by similarities in the use of language and structure-semantics. Methods are transformed to a vector space model and a novel metric of similarity is designed. Features in this vector space are categorized as belonging to the types signature, structure, concept and language for user personalization. Offline tests show that CodeReco recommendations cover broader programming concepts and have higher conceptual similarity with their samples. A user study was conducted in which users rated Java method recommendations that helped them complete two programming problems. 61.5% of users were positive that real-time method recommendations are helpful, and 50% reported that this would reduce time spent in web searches. The empirical utility of CodeReco's similarity metric on those problems was compared with a purely language-based similarity metric (baseline). The baseline received higher ratings from novices, arguably due to the lack of structure-semantics in their samples while seeking recommendations. / Dissertation/Thesis / Masters Thesis Computer Science 2017
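To make the categorized vector-space idea concrete, here is a hedged sketch of a per-category similarity; the category names match the abstract, but the feature values, dimensionality and equal weights are invented, not CodeReco's actual metric:

```python
import numpy as np

CATEGORIES = ('signature', 'structure', 'concept', 'language')

def categorized_similarity(vec_a, vec_b, weights):
    """Weighted average of per-category cosine similarities.

    vec_a, vec_b : dicts mapping category -> feature np.ndarray
    weights      : dict mapping category -> user-personalized weight
    """
    total = 0.0
    for cat in CATEGORIES:
        a, b = vec_a[cat], vec_b[cat]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        total += weights[cat] * (float(a @ b) / denom if denom else 0.0)
    return total / sum(weights.values())

query     = {c: np.random.rand(8) for c in CATEGORIES}   # query method features
candidate = {c: np.random.rand(8) for c in CATEGORIES}   # candidate method
print(categorized_similarity(query, candidate, dict.fromkeys(CATEGORIES, 1.0)))
```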
218

Comparison and Prediction of Temporal Hotspot Maps

Arnesson, Andreas, Lewenhagen, Kenneth January 2018 (has links)
Context. To aid law enforcement agencies when coordinating and planning their efforts to prevent crime, there is a need to investigate the methods used in such areas. With the help of crime analysis methods, law enforcement agencies are more efficient and pro-active in their work. One analysis method is temporal hotspot maps. A temporal hotspot map is often represented as a matrix with a certain resolution, such as hours and days, if the aim is to show occurrences per hour in correlation with weekday. This thesis includes a software prototype that allows for the comparison, visualization and prediction of temporal data. Objectives. This thesis explores whether multiprocessing can be utilized to improve execution time for two temporal analysis methods, Aoristic and Getis-Ord*. Furthermore, it investigates to what extent two temporal hotspot maps can be compared and visualized. Additionally, it was investigated whether a naive method could be used to predict temporal hotspot maps accurately. Lastly, this thesis explores how different software packaging methods compare on certain aspects defined in this thesis. Methods. An experiment was performed to answer whether multiprocessing could improve the execution time of Getis-Ord* or Aoristic. To explore how hotspot maps can be compared, a case study was carried out. Another experiment was used to answer whether a naive forecasting method can be used to predict temporal hotspot maps. Lastly, a theoretical analysis was executed to extract how different packaging methods work in relation to the defined aspects. Results. For both Getis-Ord* and Aoristic, the sequential implementations achieved the shortest execution time. The Jaccard measure calculated the similarity most accurately. The naive forecasting method created proved inadequate, and a more advanced method is preferred. Forecasting Swedish burglaries from the three previous months produced a mean of only 12.1% overlap between hotspots. The Python package method accumulated the highest score among the investigated packaging methods. Conclusions. The results showed that multiprocessing in Python is not beneficial for Aoristic and Getis-Ord* due to the high level of overhead. Further, the naive forecasting method did not prove practically useful in predicting temporal hotspot maps.
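The Jaccard comparison of two temporal hotspot maps, and a naive "most frequently hot cells of the previous months" forecast of the kind evaluated above, reduce to a few lines; the 24-hour-by-7-weekday matrix layout and the top-k hotspot definition are assumptions for illustration, not the thesis prototype:

```python
import numpy as np

def jaccard(hot_a, hot_b):
    """Jaccard similarity of two boolean hotspot matrices (hours x weekdays)."""
    inter = np.logical_and(hot_a, hot_b).sum()
    union = np.logical_or(hot_a, hot_b).sum()
    return inter / union if union else 1.0

def naive_forecast(previous_months, top_k=10):
    """Predict the next period's hot cells as those most often hot before."""
    counts = previous_months.sum(axis=0)            # (24, 7) hot-cell counts
    threshold = np.sort(counts, axis=None)[-top_k]  # k-th largest count
    return counts >= threshold

months = np.random.rand(3, 24, 7) > 0.8   # three months of synthetic hotspots
actual = np.random.rand(24, 7) > 0.8
print(jaccard(naive_forecast(months), actual))   # overlap with "reality"
```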
219

Estudos e desenvolvimento de métodos baseados em harmônicos esféricos para análise de similaridade estrutural entre ligantes / Study and development of spherical harmonics based methods for similarity ligand analysis

Fernando Ribeiro Caires 19 October 2016 (has links)
Descritores moleculares são essenciais em muitas aplicações de física e química computacional, como na análise de similaridade entre ligantes baseada em sua estrutura. Harmônicos esféricos têm sido utilizados como descritores da superfície molecular por serem uma forma compacta de descrição geométrica e por possuírem um descritor invariante por rotação. Assim, este trabalho propõe um método de análise de similaridade estrutural entre ligantes no qual se modela a superfície de uma molécula através de uma expansão em harmônicos esféricos realizada pelo programa LIRA. Os coeficientes encontrados são utilizados para percorrer o banco de dados DUD-E, com descritores previamente calculados, utilizando Distância Euclidiana e diversos valores de corte para selecionar compostos mais semelhantes. O potencial do método é avaliado usando o Ultrafast Shape Recognition (USR) como método padrão, pelo fato de ser uma excelente e rápida métrica para análise da similaridade de ligantes. Foram selecionadas 50 moléculas de diferentes tamanhos e composição de forma a representar todos os grupos moleculares presentes na DUD-E. Em seguida, cada molécula foi submetida à busca de similares variando-se valores de corte para o LIRA em que o conjunto de moléculas selecionadas foi comparado com as selecionadas pelo USR através de um processo de classificação binária e criação e interpretação de curvas ROC. Além do benchmarking, foi realizada a análise das componentes principais para determinar quais descritores são os mais importantes e carregam as melhores informações utilizadas na descrição da superfície da molécula. A partir das componentes principais, foi realizado um estudo do uso de funções peso, associando mais importância aos descritores adequados, e a redução da dimensionalidade do banco de dados, seleção de um novo conjunto de autovetores que formam as bases do espaço vetorial e uma nova descrição das moléculas para o novo espaço, no qual cada variação foi avaliada através de um novo benchmarking. O LIRA se mostrou tão rápido quanto o USR e apresentou grande potencial de seleção de moléculas similares, para a maioria das moléculas testadas, pois as curvas ROC apresentaram pontos acima da linha do aleatório. Tanto a redução da dimensionalidade quanto o uso de funções de ponderação agregaram valor à métrica deixando-a mais veloz, no caso da redução da quantidade de descritores, e seletiva, em ambos os casos. Dessa forma, o método proposto se mostrou eficiente em mensurar a similaridade entre ligantes de forma seletiva e rápida utilizando somente informações a respeito da superfície molecular. / Molecular descriptors are essential for many applications in computational chemistry and physics, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure due to their properties of orthonormality and rotational invariance. Here we propose a ligand similarity analysis method in which the molecule's surface is modeled by an expansion in spherical harmonics, called LIRA, whose coefficients are used to search the DUD-E database, with all descriptors previously calculated, using Euclidean distance and different cutoff values to select similar compounds. The method's potential is evaluated against Ultrafast Shape Recognition (USR) in a benchmark, since USR is an excellent and fast metric for ligand similarity analysis. Fifty molecules were selected, varying in chemical composition and size, to represent all molecular groups of DUD-E.
After that, each one was submitted to a search with different cutoff values for LIRA, and the selected subset was compared with the one selected by USR through binary classification and ROC curve analysis. Beyond the benchmark, a principal component analysis was performed to identify which coefficients are the most valuable for shape description. Using the principal components, two further studies were carried out: weight functions were applied to the descriptors, giving more importance to those that carry more information; and dimensionality reduction was performed, in which a subset of eigenvectors is selected to form a new basis of the vector space and the molecules are re-described in that space. Each variation was evaluated in a new benchmark. LIRA proved to be as fast as USR and showed great potential for selecting similar molecules for the majority of the molecules tested, as the ROC curves had points above the random line. Both dimensionality reduction and the weight functions added value to the metric, making it faster (in the case of the reduced number of descriptors) and more selective in both cases. In summary, the proposed method proved to be an efficient tool for measuring similarity between ligands in a selective and fast manner, using only information about the molecular surface.
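The search step described above — Euclidean distance between precomputed descriptors, filtered by a cutoff — is straightforward to sketch. The per-degree energies used here are the standard rotation-invariant construction for spherical-harmonic coefficients; the database shape and cutoff value are assumptions, not LIRA's actual parameters:

```python
import numpy as np

def invariant_descriptor(coeffs):
    """Rotation-invariant signature from spherical-harmonic coefficients.

    coeffs: dict mapping degree l -> complex array of the 2l+1
    coefficients; the energy per degree is unchanged by rotation."""
    return np.array([np.sqrt(np.sum(np.abs(c) ** 2))
                     for _, c in sorted(coeffs.items())])

def search(query_desc, database, cutoff):
    """Indices of database descriptors within a Euclidean cutoff, nearest first."""
    dists = np.linalg.norm(database - query_desc, axis=1)
    order = np.argsort(dists)
    return [int(i) for i in order if dists[i] <= cutoff]

db = np.random.rand(1000, 16)          # stand-in for precomputed descriptors
hits = search(db[0], db, cutoff=0.5)   # compounds most similar to the query
```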
220

Detecção de cantos em formas binárias planares e aplicação em recuperação de formas / Corner Detection in Planar Binary Shapes and its Application in Shape Retrieval

Iális Cavalcante de Paula Júnior 25 June 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Sistemas de recuperação de imagens baseada em conteúdo (do termo em inglês, Content-Based Image Retrieval - CBIR) que operam em bases com grande volume de dados constituem um problema relevante e desafiador em diferentes áreas do conhecimento, a saber, medicina, biologia, computação, catalogação em geral, etc. A indexação das imagens nestas bases pode ser realizada através de conteúdo visual como cor, textura e forma, sendo esta última característica a tradução visual dos objetos em uma cena. Tarefas automatizadas em inspeção industrial, registro de marca, biometria e descrição de imagens utilizam atributos da forma, como os cantos, na geração de descritores para representação, análise e reconhecimento da mesma, possibilitando ainda que estes descritores se adequem ao uso em sistemas de recuperação. Esta tese aborda o problema da extração de características de formas planares binárias a partir de cantos, na proposta de um detector multiescala de cantos e sua aplicação em um sistema CBIR. O método de detecção de cantos proposto combina uma função de angulação do contorno da forma, a sua decomposição não decimada por transformada wavelet Chapéu Mexicano e a correlação espacial entre as escalas do sinal de angulação decomposto. A partir dos resultados de detecção de cantos, foi realizado um experimento com o sistema CBIR proposto, em que informações locais e globais extraídas dos cantos detectados da forma foram combinadas à técnica Deformação Espacial Dinâmica (do termo em inglês, Dynamic Space Warping), para fins de análise de similaridade de formas com tamanhos distintos. Ainda com este experimento foi traçada uma estratégia de busca e ajuste dos parâmetros multiescala de detectores de cantos, segundo a maximização de uma função de custo. Na avaliação de desempenho da metodologia proposta, e de outras técnicas de detecção de cantos, foram empregadas as medidas Precisão e Revocação. Estas medidas atestaram o bom desempenho da metodologia proposta na detecção de cantos verdadeiros das formas, em uma base pública de imagens cujas verdades terrestres estão disponíveis. Para a avaliação do experimento de recuperação de imagens, utilizamos a taxa Bull's eye em três bases públicas. Os valores alcançados desta taxa mostraram que o experimento proposto foi bem-sucedido na descrição e recuperação das formas, dentre os demais métodos avaliados. / Content-based image retrieval (CBIR) applied to large scale datasets is a relevant and challenging problem present in medicine, biology, computer science, general cataloging etc. Image indexing can be done using visual information such as colors, textures and shapes (the visual translation of objects in a scene). Automated tasks in industrial inspection, trademark registration, biostatistics and image description use shape attributes, e.g. corners, to generate descriptors for representation, analysis and recognition, allowing those descriptors to be used in image retrieval systems. This thesis explores the problem of extracting information from binary planar shapes from corners, by proposing a multiscale corner detector and its use in a CBIR system. The proposed corner detection method combines an angulation function of the shape contour, its non-decimated decomposition using the Mexican hat wavelet and the spatial correlation among scales of the decomposed angulation signal. Using the information provided by our corner detection algorithm, we made experiments with the proposed CBIR.
Local and global information extracted from the corners detected on shapes was used in a Dynamic Space Warping technique in order to analyze the similarity among shapes of different sizes. We also devised a strategy for searching and refining the multiscale parameters of the corner detector by maximizing an objective function. For performance evaluation of the proposed methodology and other techniques, we employed the Precision and Recall measures. These measures proved the good performance of our method in detecting true corners on shapes from a public image dataset with ground-truth information. To assess the image retrieval experiments, we used the Bull's eye score on three public databases. Our experiments showed that our method performed well when compared to the existing approaches in the literature.
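A hedged sketch of the multiscale scheme this abstract describes — a Mexican hat (Ricker) transform of the contour angulation signal, inter-scale products to reinforce corners that persist across scales, then peak picking — follows; the scales, kernel support and threshold are invented for illustration, not the thesis parameters:

```python
import numpy as np

def ricker(points, a):
    """Mexican hat (Ricker) wavelet, `points` samples wide, scale a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def corner_response(angulation, scales=(2, 4, 8)):
    """Product of |wavelet response| across scales: maxima that persist
    over scales (true corners) are reinforced while scale-isolated noise
    is suppressed -- the spatial-correlation idea of the abstract."""
    response = np.ones_like(angulation, dtype=float)
    for a in scales:
        kernel = ricker(int(10 * a), a)
        response *= np.abs(np.convolve(angulation, kernel, mode='same'))
    return response

def detect_corners(angulation, threshold):
    r = corner_response(angulation)
    # local maxima above threshold, with wrap-around for a closed contour
    peaks = (r > threshold) & (r >= np.roll(r, 1)) & (r >= np.roll(r, -1))
    return np.flatnonzero(peaks)
```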
