81

Group-Theoretical Structure in Multispectral Color and Image Databases

Bui, Thanh Hai January 2005 (has links)
Many applications lead to signals with nonnegative function values, so understanding the structure of spaces of nonnegative signals is of interest in many different areas, and constructing effective representation spaces with suitable metrics and natural transformations is an important research topic. In this thesis, we present our investigations of the structure of spaces of nonnegative signals and illustrate the results with applications in multispectral color science and content-based image retrieval. The infinite-dimensional Hilbert space of nonnegative signals is conical and convex, and both properties are preserved under linear projections onto lower-dimensional spaces. The conical nature of these coordinate vector spaces suggests the use of hyperbolic geometry; the special case of three-dimensional hyperbolic geometry leads to the application of the SU(1,1) or SO(2,1) groups. We introduce a new framework to investigate nonnegative signals: we use PCA-based coordinates and apply group-theoretical tools to investigate sequences of signal coordinate vectors. We describe these sequences with one-parameter subgroups of SU(1,1) and show how to compute the one-parameter subgroup of SU(1,1) from a given set of nonnegative signals. In our experiments we investigate the following signal sequences: (i) blackbody radiation spectra; (ii) sequences of daylight/twilight spectra measured in Norrköping, Sweden and in Granada, Spain; (iii) spectra generated by the SMARTS2 simulation program; and (iv) sequences of image histograms. The results show that important properties of these sequences can be modeled in this framework, and we illustrate its usefulness with examples in which we derive illumination invariants and introduce an efficient visualization implementation. Content-Based Image Retrieval (CBIR) is another topic of the thesis. In such retrieval systems, images are first characterized by descriptor vectors.
Retrieval is then based on these content-based descriptors. Selecting content-based descriptors and defining suitable metrics are the core of any CBIR system. We introduce new descriptors derived using group-theoretical tools: we exploit the symmetry structure of the space of image patches and use group-theoretical methods to derive low-level image filters in a very general framework. The derived filters are simple and can be used for multispectral images and for images defined on different sampling grids. These group-theoretical filters are then used to derive content-based descriptors, which are used in a real implementation of a CBIR system.
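As a rough illustration of the PCA-based coordinates described above (a sketch only, not the thesis's actual pipeline, and ignoring the hyperbolic/SU(1,1) machinery), the following code projects a sequence of blackbody spectra onto their first principal directions and checks the conical property that the leading coordinate of a nonnegative signal never changes sign:

```python
import numpy as np

# Planck blackbody spectra (arbitrary units) over the visible range.
h, c, k = 6.626e-34, 3.0e8, 1.381e-23
wl = np.linspace(400e-9, 700e-9, 64)           # wavelengths in meters
temps = np.linspace(3000, 9000, 25)            # temperatures in Kelvin

def planck(T):
    x = h * c / (wl * k * T)
    return (2 * h * c**2 / wl**5) / np.expm1(x)

A = np.stack([planck(T) / planck(T).max() for T in temps])  # 25 x 64, nonnegative

# PCA-style coordinates: project each spectrum onto the first three
# right singular vectors (principal directions of the signal set).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
coords = A @ Vt[:3].T                          # 25 x 3 coordinate vectors

# For nonnegative signals the leading coordinate keeps one sign for the
# whole sequence, which reflects the conical structure the thesis exploits.
lead = coords[:, 0]
print((lead > 0).all() or (lead < 0).all())    # True
```

The one-parameter-subgroup fitting itself is not shown; this only makes the conical coordinate structure concrete.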
82

A Common Representation Format for Multimedia Documents

Jeong, Ki Tai 12 1900 (has links)
Multimedia documents are composed of multiple file-format combinations, such as image and text; image and sound; or image, text, and sound. The type of multimedia document determines the form of analysis for knowledge architecture design and retrieval methods. Over the last few decades, theories of text analysis have been proposed and applied effectively. In recent years, theories of image and sound analysis have been proposed to work with text retrieval systems and have progressed quickly, due in part to rapid growth in computer processing speed. Retrieval of multimedia documents was formerly divided into the categories of image and text, and image and sound. While the standard retrieval process begins from text only, methods are developing that allow retrieval to be accomplished simultaneously using text and image. Although image processing for feature extraction and text processing for term extraction are well understood, there are no prior methods that can combine these two features into a single data structure. This dissertation introduces a common representation format for multimedia documents (CRFMD) composed of both images and text. For image and text analysis, two techniques are used: the Lorenz Information Measurement and the Word Code. A new process named Jeong's Transform is demonstrated for the extraction of text and image features, combining the two previous measurements to form a single data structure. Finally, this single data structure is analyzed using multidimensional scaling, which allows multimedia objects to be represented on a two-dimensional graph as vectors; the distance between vectors represents the magnitude of the difference between multimedia documents. This study shows that image classification on a given test set is dramatically improved when text features are encoded together with image features.
This effect appears to hold true even when the available text is diffused and is not uniform with the image features. This retrieval system works by representing a multimedia document as a single data structure. CRFMD is applicable to other areas of multimedia document retrieval and processing, such as medical image retrieval, World Wide Web searching, and museum collection retrieval.
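The final step of the pipeline above, mapping documents onto a two-dimensional graph, can be sketched with classical (Torgerson) multidimensional scaling; the feature vectors below are invented stand-ins, not CRFMD structures:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed points in `dims` dimensions from a pairwise Euclidean
    distance matrix via classical (Torgerson) MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dims]           # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy "multimedia documents": combined image/text feature vectors.
rng = np.random.default_rng(0)
feats = rng.random((6, 10))
D = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
X2 = classical_mds(D)                          # 6 points on a 2-D map
print(X2.shape)                                # (6, 2)
```

Distances on the 2-D map then approximate the original dissimilarities between documents, which is what makes the visual comparison meaningful.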
83

Latent Semantic Analysis as a Method of Content-Based Image Retrieval in Medical Applications

Makovoz, Gennadiy 01 January 2010 (has links)
The research investigated whether a Latent Semantic Analysis (LSA)-based approach to image retrieval can map pixel intensity into a smaller concept space with good accuracy and reasonable computational cost. From a large set of computed tomography (CT) images, a retrieval query found all images for a particular patient based on semantic similarity. The effectiveness of the LSA retrieval was evaluated based on precision, recall, and F-score. This work extended the application of LSA to high-resolution CT radiology images, chosen for their unique characteristics and their importance in medicine. Because CT images are intensity-only, they carry less information than color images; they typically have greater noise, higher intensity, greater contrast, and fewer colors than a raw RGB image. The study targeted intensity levels for image feature extraction. The focus of this work was a formal evaluation of the LSA method in the context of a large number of high-resolution radiology images. The study reported preprocessing and retrieval times and discussed how reducing the feature-set size affected the results. LSA is an information retrieval technique based on the vector-space model: it works by reducing the dimensionality of the vector space, bringing similar terms and documents closer together. Matlab software was used to report retrieval and preprocessing times. In determining the minimum size of the concept space, the best combination of precision, recall, and F-score was achieved with 250 concepts (k = 250). The research reported precision of 100% on all queries and recall close to 90% on all queries with k = 250; selecting a higher number of concepts did not improve recall and resulted in significantly increased computational cost.
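The LSA step described above (truncated SVD into a k-dimensional concept space followed by cosine-similarity retrieval) can be sketched on synthetic intensity features; the data, the small k = 3, and the retrieval function are invented for illustration, and real CT preprocessing is far more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an intensity-feature matrix: rows are "images" built
# from a few latent patterns plus noise (real CT features differ).
patterns = rng.random((3, 40))
labels = rng.integers(0, 3, size=30)           # which "patient" each row shows
A = patterns[labels] + 0.05 * rng.random((30, 40))

# LSA: truncate the SVD to k concepts and compare images there.
k = 3
U, s, Vt = np.linalg.svd(A, full_matrices=False)
concept = U[:, :k] * s[:k]                     # images in concept space

def retrieve(q, top=5):
    v = concept[q]
    sims = concept @ v / (np.linalg.norm(concept, axis=1) * np.linalg.norm(v))
    order = np.argsort(sims)[::-1]
    return [i for i in order if i != q][:top]

hits = retrieve(0)
precision = np.mean([labels[i] == labels[0] for i in hits])
print(precision)
```

Because the concept space collapses noisy feature vectors onto their shared latent directions, same-patient images end up close together, mirroring the reported precision/recall behaviour in miniature.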
84

Ensemble de agrupamentos para sistemas de recomendação baseados em conteúdo / Cluster ensemble to content-based recommender systems

Costa, Fernando Henrique da Silva 05 November 2018 (has links)
The accelerated growth of the internet has given users access to a large amount of information. Although this has advantages, users with little or no experience in choosing among the many alternatives presented will find it difficult to locate useful information (or items, in the scope of this work) that meets their needs. In this context, recommender systems were developed to help users find relevant, personalized items. Such systems are divided into several architectures, such as content-based, collaborative filtering, and knowledge-based; this work explores the first. A content-based architecture recommends items based on their similarity to items in which the user has shown interest in the past. Consequently, it has the limitation of generally producing recommendations with low serendipity, since recommended items tend to resemble those the user has already seen and therefore offer little novelty or surprise. Given this limitation, serendipity is a central aspect of the discussions presented in this work. The objective is thus to mitigate the low serendipity of recommendations through partial-similarity analysis implemented with cluster ensembles. To achieve this goal, content-based recommendation strategies using clustering and cluster ensembles were proposed and evaluated. The evaluation involved qualitative analysis of the produced recommendations and a user study in which four news recommendation strategies were compared: the two proposed in this work, a random-recommendation baseline, and a co-clustering-based strategy. The evaluation considered the relevance, surprise, and serendipity of the recommendations, the last being described as items that are both surprising and relevant to the user. The results of both analyses showed the feasibility of clustering as a basis for recommendation: the cluster ensemble achieved satisfactory results in all aspects, especially surprise, while the simple clustering-based strategy obtained the best results in relevance and serendipity.
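One common way to build a cluster ensemble, which may differ from the strategies actually proposed in this work, is a co-association (consensus) matrix: count how often two items are grouped together across base clusterings, then recommend the items most often co-clustered with what the user has read. A minimal sketch:

```python
import numpy as np

def co_association(labelings):
    """Cluster-ensemble consensus: fraction of base clusterings in which
    each pair of items lands in the same cluster."""
    labelings = np.asarray(labelings)          # shape: runs x items
    m, n = labelings.shape
    C = np.zeros((n, n))
    for run in labelings:
        C += (run[:, None] == run[None, :])    # 1 where a pair co-clusters
    return C / m

# Three base clusterings of 6 news items, e.g. produced on different
# feature subsets (the "partial similarities" of the text).
runs = [[0, 0, 1, 1, 2, 2],
        [0, 0, 0, 1, 1, 1],
        [0, 1, 1, 1, 2, 2]]
C = co_association(runs)

def recommend(item, k=2):
    order = np.argsort(C[item])[::-1]          # most co-associated first
    return [j for j in order if j != item][:k]

print(recommend(0))                            # [1, 2]
```

Items that only sometimes co-cluster with the user's history get intermediate consensus scores, which is one mechanism by which an ensemble can surface less obvious (more serendipitous) recommendations than a single clustering.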
85

Processamento de consultas por similaridade em imagens médicas visando à recuperação perceptual guiada pelo usuário / Similarity Queries Processing Aimed at Retrieving Medical Images Guided by the User's Perception

Silva, Marcelo Ponciano da 19 March 2009 (has links)
The continuous growth of medical image generation and the day-to-day use of these images in hospitals and medical centers have motivated computer science researchers to develop algorithms, methods, and tools to store, search, and retrieve images by their content; the content-based image retrieval (CBIR) field is therefore growing at a fast pace. CBIR algorithms and tools, which are at the core of this work, can support decision making by letting the specialist retrieve cases similar to the one under evaluation. However, the main reservation about CBIR is achieving fast and effective retrieval in the sense that the specialist gets what is expected; that is, the problem is to bridge the semantic gap, the divergence between the result automatically delivered by the system and what the user expects. This work proposes the perceptual parameter, which is added to the relationship between feature extraction algorithms and distance functions in order to find the combination that best delivers what the user expects from a query. The research thus integrates the three main elements of similarity queries (the image features, the distance function, and the perceptual parameter) into searching operators. The experiments performed show that these operators can narrow the distance between the system and the specialist, the physician who is the system's end user, contributing to bridging the semantic gap.
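The idea of a similarity operator that ties feature extractors, a distance function, and a perceptual weighting together might be sketched as follows; the extractors and weights here are invented placeholders, not the operators actually derived in the work:

```python
import numpy as np

def similarity_operator(extractors, weights):
    """Hypothetical 'searching operator': pair feature extractors with
    per-extractor perceptual weights and return a distance function."""
    def dist(img_a, img_b):
        total = 0.0
        for extract, w in zip(extractors, weights):
            fa, fb = extract(img_a), extract(img_b)
            total += w * np.linalg.norm(fa - fb)
        return total
    return dist

# Toy extractors: a gray-level histogram and a mean-intensity cue.
hist = lambda im: np.histogram(im, bins=8, range=(0, 1))[0] / im.size
mean = lambda im: np.array([im.mean()])

# The weights play the role of the perceptual parameter: tuning them
# moves the ranking toward what the specialist judges similar.
d = similarity_operator([hist, mean], [0.7, 0.3])

rng = np.random.default_rng(2)
a, b = rng.random((16, 16)), rng.random((16, 16))
print(d(a, a))                                 # 0.0: identical images
```

Changing the weight vector changes which images rank as nearest, which is the lever that lets the system approximate the physician's perception of similarity.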
86

Elaboração de uma base de conhecimentos para auxílio ao diagnóstico através da comparação visual de imagens mamográficas / Survey and implementation of a knowledge base to aid the diagnosis of breast images through visual inspection and comparison

Honda, Marcelo Ossamu 27 August 2001 (has links)
This work presents the survey and implementation of a knowledge base to aid the diagnosis of breast lesions through visual inspection, allowing the physician to query by pictorial characteristics of the image and to visually compare the image under analysis with previously classified images and their clinical information. The images are classified in the knowledge base according to the Breast Imaging Reporting and Data System (BI-RADS) of the American College of Radiology. The selection of the images and representative clinical information, as well as their classification, was performed together with radiologists of the Centro de Ciências das Imagens e Física Médica (CCIFM) of the Faculdade de Medicina de Ribeirão Preto (FMRP), Universidade de São Paulo (USP). Image indexing and retrieval are based on texture attributes extracted from regions of interest (ROIs) previously established on digitized mammograms. To simplify this process, Principal Component Analysis (PCA) was used to reduce the number of texture attributes and the existing redundant information. The best results obtained were for ROI 139 (precision = 0.80), ROI 59 (precision = 0.86), and 100% precision for ROI 40.
87

Extração de características de imagens médicas utilizando wavelets para mineração de imagens e auxílio ao diagnóstico / Feature extraction of medical images through wavelets aiming at image mining and diagnosis support

Silva, Carolina Yukari Veludo Watanabe da 05 December 2007 (has links)
Picture Archiving and Communication Systems (PACS) store patient data in an integrated way, including images, time series, and textual descriptions, allowing fast and effective transfer of information among devices and workstations; PACS can therefore be a powerful tool for improving decision making during diagnosis. Computer-Aided Diagnosis (CAD) systems have also been employed to improve diagnostic confidence, and recent research shows that they significantly raise radiologists' performance in detecting anomalies on images. Content-based image retrieval (CBIR) techniques are essential to support CAD systems and can significantly improve the applicability of PACS. CBIR works on low-level features extracted from the images to describe their most meaningful characteristics according to a specific criterion. Usually, several features must be combined into a feature vector to describe an image more precisely; the dimensionality of the feature vector is therefore frequently large, and many features can be correlated with each other, so the goal is to fuse multiple single-feature vectors into a composite feature vector of low dimensionality that still preserves, as far as possible, the information needed for image retrieval. The objective of this Master's dissertation is to build new image feature extractors based on subspaces of medical images generated by wavelet transforms. The features form feature vectors, which numerically represent the images and are used to process similarity queries over the images' own content. The feature vectors are analyzed by the StARMiner image-mining system, under development at GBdI-ICMC-USP, to find the most meaningful features to represent the images as well as patterns that allow the images to be classified into categories. The approach was evaluated with three different image sets, and the results are promising.
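A minimal sketch of wavelet-based feature extraction (a hand-rolled one-level 2-D Haar transform repeated over several levels, with subband-energy features) gives the flavor of such extractors; the dissertation defines its features over wavelet subspaces in its own way, so treat this as illustrative only:

```python
import numpy as np

def haar_level(img):
    """One level of the 2-D Haar transform: approximation plus three
    detail subbands (horizontal, vertical, diagonal)."""
    a = (img[0::2] + img[1::2]) / 2            # row averages
    d = (img[0::2] - img[1::2]) / 2            # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def wavelet_features(img, levels=3):
    """Energy of each detail subband per level: a compact feature vector."""
    feats = []
    cur = img
    for _ in range(levels):
        cur, LH, HL, HH = haar_level(cur)      # recurse on the approximation
        feats += [np.mean(LH**2), np.mean(HL**2), np.mean(HH**2)]
    return np.array(feats)

rng = np.random.default_rng(4)
img = rng.random((64, 64))
v = wavelet_features(img)
print(v.shape)                                 # 3 levels x 3 subbands = (9,)
```

The resulting short vector summarizes texture at several scales, which is exactly the kind of low-dimensional, similarity-query-friendly representation the text argues for.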
88

Service recommendation and selection in centralized and decentralized environments

Ahmed, Mariwan January 2017 (has links)
With the increasing use of web services in everyday tasks, we are entering an era of the Internet of Services (IoS). Service discovery and selection in both centralized and decentralized environments have become critical issues in the area of web services, in particular when services have similar functionality but different Quality of Service (QoS). As a result, selecting a high-quality service that best suits consumer requirements from a large list of functionally equivalent services is a challenging task. As the number of services in the discovery and selection process increases, so do the number of service consumers and the diversity of QoS on offer. Growth on both sides leads to diversity in the demand and supply of services, which results in partial matches between requirements and offers. Furthermore, it is challenging for customers to select suitable services from the many that satisfy their functional requirements, so web service recommendation becomes an attractive way to provide consumers with services that satisfy their requirements. In this thesis, a service ranking and selection algorithm is first proposed that considers multiple QoS requirements and allows partially matched services to be counted as candidates for the selection process. Starting from the initial list of available services, the approach considers services that partially match consumer requirements and ranks them by their QoS parameters, allowing the consumer to select a suitable service. In addition, since providing weight values for QoS parameters may be neither easy nor intuitive for consumers, an automatic weight calculation method is included that exploits the distance correlation between QoS parameters. The second aspect of the thesis is QoS-based web service recommendation.
With an increasing number of web services having similar functionality, it is challenging for service consumers to find suitable web services that meet their requirements. We propose a personalised service recommendation method using the LDA topic model, which extracts latent interests of consumers and latent topics of services in the form of probability distributions. The proposed method also improves the accuracy of QoS prediction by considering the correlation between neighbouring services, and returns a list of recommended services that best satisfy consumer requirements. The third part of the thesis concerns service discovery and selection in a decentralized environment. Service discovery approaches are often supported by centralized repositories, which can suffer from single points of failure, performance bottlenecks, and scalability issues in large-scale systems. To address these issues, we propose a context-aware service discovery and selection approach for a decentralized peer-to-peer environment, in which homophily similarity is used for bootstrapping and the distribution of nodes. The discovery process is based on the similarity of nodes and on their previous interactions and behaviour, which helps discovery in a dynamic environment. The approach considers not only service discovery but also the selection of a suitable web service, taking into account the QoS properties of the web services. The major contribution of the thesis is a comprehensive QoS-based approach to service recommendation and selection in centralized and decentralized environments, with which consumers can select suitable services based on their requirements. Experimental results on real-world service datasets show that the proposed approaches achieve better performance and efficiency in the recommendation and selection process.
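The ranking step could be sketched as score-based selection over normalized QoS attributes; the attributes, weights, and data below are invented, and the thesis's distance-correlation weighting and partial-match handling are richer than this:

```python
import numpy as np

# Candidate services: QoS rows of (response_time ms, availability %, cost).
# Lower is better for time and cost; higher is better for availability.
qos = np.array([[120., 99.0, 5.0],
                [300., 99.9, 2.0],
                [ 90., 97.0, 9.0],
                [200., 98.5, 4.0]])
better_high = np.array([False, True, False])

# Normalise each attribute to [0, 1] with 1 = best, so that services
# that only partially satisfy the request still receive a score
# instead of being discarded outright.
lo, hi = qos.min(axis=0), qos.max(axis=0)
norm = (qos - lo) / (hi - lo)
norm[:, ~better_high] = 1.0 - norm[:, ~better_high]

weights = np.array([0.5, 0.3, 0.2])            # consumer-supplied or derived
scores = norm @ weights
ranking = np.argsort(scores)[::-1]             # best service first
print(ranking.tolist())
```

Service 0, which is good but not the best on every attribute, wins here because no single attribute dominates; an automatic weighting scheme would replace the hand-picked `weights` vector.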
90

Tratamento de tempo e dinamicidade em dados representados em espaços métricos / Treatment of time and dynamics in data represented in metric spaces

Bueno, Renato 15 December 2009 (has links)
Database Management Systems must nowadays be able to manage complex data, such as multimedia data, genetic sequences, and time series, besides traditional data. For queries on large collections of complex data, the similarity among elements is the most relevant concept, and it can be adequately expressed when the data are represented in metric spaces. Regardless of the data domain, there are applications that must track the evolution of data elements over time; however, the existing Metric Access Methods assume that data elements are immutable. Aiming both at treating time and at allowing changes in metric data, the work presented in this thesis consists of two main parts. The first part addresses the inclusion of element removal and update operations in metric access methods. These operations serve application domains in which data in metric spaces change frequently, regardless of any need to manage temporal information. A new method for metric tree optimization, based on the proposed removal algorithm, also resulted from this part of the work. The second part of the thesis addresses the inclusion of the concept of temporal evolution in data represented in metric spaces. For this, the Metric-Temporal Space is proposed: a representation model that allows comparing elements consisting of metric data with associated temporal information. The model includes a method to identify the relative contributions of the metric and temporal components to the final similarity calculation. Strategies for analyzing trajectories of metric data over time, through the immersion of metric-temporal spaces in dimensional spaces, are also presented. Finally, a new method for weighting multiple image descriptors is presented, derived from changes in the method proposed to identify the contributions of the components that can form a metric-temporal space.
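The blending of metric and temporal components could look like the following sketch, where `w` plays the role of the relative contribution of the metric component; the linear blend and the names are assumptions for illustration, not the model's actual formulation:

```python
import numpy as np

def mt_distance(a, b, w):
    """Hypothetical metric-temporal distance: blend the metric-component
    distance with the temporal gap using a contribution weight w in [0, 1]."""
    feat_a, t_a = a
    feat_b, t_b = b
    d_metric = np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b))
    d_time = abs(t_a - t_b)
    return w * d_metric + (1.0 - w) * d_time

# Two versions of the same element at different instants, and a
# different element observed at the same instant.
x_t0 = ([1.0, 2.0], 0.0)
x_t5 = ([1.0, 2.0], 5.0)
y_t0 = ([4.0, 6.0], 0.0)

# With w = 1 only the metric component matters; lowering w lets the
# temporal gap dominate the similarity.
print(mt_distance(x_t0, x_t5, 1.0))   # 0.0: same features, time ignored
print(mt_distance(x_t0, y_t0, 1.0))   # 5.0: pure feature distance
```

Sweeping `w` between 0 and 1 is one way to expose the relative contributions of the two components that the model's identification method is meant to estimate.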
