11

Arcabouço para recuperação de imagens por conteúdo visando à percepção do usuário / Content-based image retrieval aimed at reaching user's perception

Bugatti, Pedro Henrique 29 October 2012 (has links)
In the last decade, techniques for content-based image retrieval (CBIR) have been intensively explored, driven by the explosion in the number of captured images and the need to store and retrieve them quickly. The medical field is a specific example that generates a large flow of information, especially digital images employed for diagnosis. One issue remained unsolved: how to reach similarity based on the user's perception, since effective retrieval requires that this perceptual similarity be characterized and quantified as well as possible. In this context, this doctoral work brings new contributions to the content-based image retrieval area, providing consistent support for similarity queries that meet the user's expectations while preserving the semantics of the query the user intends. To do so, three main methods were developed. The first method performs demand-driven feature selection guided by the user's intention, integrating feature selection and relevance feedback and thereby adding semantics to the feature-selection process, tuning the mining process on the fly. The second method comprises approaches for collecting and aggregating user profiles, together with new formulations to quantify the perceptual similarity of users, allowing the distance function that best fits the perception of a given user to be set dynamically. The third method dynamically modifies the distance function across successive relevance-feedback cycles, following policies that combine a priori information from the image database with the user's perception during the similarity queries. The experiments showed that the proposed methods effectively characterize and quantify similarity based on the user's perception, considerably improving content-based retrieval according to the users' expectations.
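As a rough illustration of the third method's idea (choosing, at each relevance-feedback cycle, the distance function that best matches the user's judgements), the sketch below ranks a toy collection with several candidate metrics and keeps the one with the highest precision over the images the user marked relevant. The candidate metrics, the precision criterion and the toy data are assumptions for illustration, not the thesis' actual policies.

```python
import numpy as np

def manhattan(a, b): return float(np.sum(np.abs(a - b)))
def euclidean(a, b): return float(np.sqrt(np.sum((a - b) ** 2)))
def chebyshev(a, b): return float(np.max(np.abs(a - b)))

CANDIDATES = {"L1": manhattan, "L2": euclidean, "Linf": chebyshev}

def rank_database(query, features, dist):
    """Return database indices ordered from most to least similar to the query."""
    scores = np.array([dist(query, f) for f in features])
    return np.argsort(scores)

def precision_at_k(ranking, relevant, k=10):
    return len(set(ranking[:k]) & relevant) / k

def pick_distance(query, features, relevant, k=10):
    """Choose the candidate distance that best reproduces the relevance
    judgements collected from the user in the previous feedback cycle."""
    best_name, best_p = None, -1.0
    for name, dist in CANDIDATES.items():
        p = precision_at_k(rank_database(query, features, dist), relevant, k)
        if p > best_p:
            best_name, best_p = name, p
    return best_name, best_p

# Toy usage: 100 random 8-dimensional feature vectors; the user marked 3, 7 and 42 relevant.
rng = np.random.default_rng(0)
feats = rng.random((100, 8))
query = feats[3] + 0.01 * rng.random(8)
name, p = pick_distance(query, feats, relevant={3, 7, 42})
print(f"selected distance: {name} (precision@10 = {p:.2f})")
```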
12

Estimating the Cost of Mining Pollution on Water Resources: Parametric and Nonparametric Methodologies / Aproximando el costo de la contaminación minera sobre los recursos hídricos: metodologías paramétricas y no paramétricas

Herrera Catalán, Pedro, Millones, Oscar 10 April 2018 (has links)
This study estimates the economic costs of mining pollution on water resources for 2008 and 2009 within the conceptual framework of environmental efficiency, which interprets such costs as the mining companies' trade-off between increasing production that is saleable at market prices (desirable output) and reducing the environmental pollution that emerges from the production process (undesirable output). These costs were calculated from parametric and nonparametric production possibility frontiers for 28 and 37 mining units in 2008 and 2009, respectively, the units covered by the National Campaign for Environmental Monitoring of Effluent and Water Resources conducted by the Energy and Mining Investment Supervisory Agency (Osinergmin) in those years. The results show that the economic cost of mining pollution on water resources amounted, on average, to US$ 814.7 million in 2008 and US$ 448.8 million in 2009. These costs were highly concentrated in a few mining units and a few pollution parameters, and were higher in mining units with average or low mineral production. Given that the current system of fines and penalties in the mining sector is based on administrative criteria, the study proposes a System of Environmentally Efficient Sanctions based on economic criteria, so as to establish a preventive mechanism against pollution. It is hoped that this mechanism will generate the incentives needed for mining companies to address the negative externalities that emerge from their production process.
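Nonparametric frontiers of this kind are commonly estimated with DEA-style linear programs in which the desirable output is expanded while the undesirable output is contracted. The sketch below solves such a directional output distance function with scipy on toy data; it illustrates the general technique only, and the data, direction vector and returns-to-scale assumptions are not those of the study.

```python
import numpy as np
from scipy.optimize import linprog

def directional_inefficiency(x, y, b, o, gy=1.0, gb=1.0):
    """Directional output distance function for unit `o`: maximise beta such that
    the desirable output can expand by beta*gy while the undesirable output
    contracts by beta*gb, staying inside the DEA technology built from the data.
    x: (n,) inputs, y: (n,) desirable outputs, b: (n,) undesirable outputs."""
    n = len(x)
    # Decision variables: [beta, lambda_1, ..., lambda_n]; maximise beta -> minimise -beta.
    c = np.concatenate(([-1.0], np.zeros(n)))
    # Desirable output: sum(lambda_j*y_j) >= y_o + beta*gy  ->  beta*gy - sum(lambda*y) <= -y_o
    A_ub = [np.concatenate(([gy], -y)),
            # Inputs: sum(lambda_j*x_j) <= x_o
            np.concatenate(([0.0], x))]
    b_ub = [-y[o], x[o]]
    # Undesirable output (weak disposability): sum(lambda_j*b_j) = b_o - beta*gb
    A_eq = [np.concatenate(([gb], b))]
    b_eq = [b[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]  # beta = 0 means the unit already lies on the frontier

# Toy data (not the Osinergmin sample): 5 mining units, one input, one saleable
# output and one pollution load per unit.
x = np.array([10.0, 12.0, 9.0, 15.0, 11.0])
y = np.array([100.0, 90.0, 80.0, 120.0, 95.0])
b = np.array([20.0, 35.0, 15.0, 40.0, 30.0])
for o in range(5):
    print(f"unit {o}: beta = {directional_inefficiency(x, y, b, o):.3f}")
```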
13

Arcabouço para recuperação de imagens por conteúdo visando à percepção do usuário / Content-based image retrieval aimed at reaching user's perception

Pedro Henrique Bugatti 29 October 2012 (has links)
In the last decade, techniques for content-based image retrieval (CBIR) have been intensively explored, driven by the explosion in the number of captured images and the need to store and retrieve them quickly. The medical field is a specific example that generates a large flow of information, especially digital images employed for diagnosis. One issue remained unsolved: how to reach similarity based on the user's perception, since effective retrieval requires that this perceptual similarity be characterized and quantified as well as possible. In this context, this doctoral work brings new contributions to the content-based image retrieval area, providing consistent support for similarity queries that meet the user's expectations while preserving the semantics of the query the user intends. To do so, three main methods were developed. The first method performs demand-driven feature selection guided by the user's intention, integrating feature selection and relevance feedback and thereby adding semantics to the feature-selection process, tuning the mining process on the fly. The second method comprises approaches for collecting and aggregating user profiles, together with new formulations to quantify the perceptual similarity of users, allowing the distance function that best fits the perception of a given user to be set dynamically. The third method dynamically modifies the distance function across successive relevance-feedback cycles, following policies that combine a priori information from the image database with the user's perception during the similarity queries. The experiments showed that the proposed methods effectively characterize and quantify similarity based on the user's perception, considerably improving content-based retrieval according to the users' expectations.
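The second method described in this record (user profiles driving a user-specific notion of similarity) can be roughly illustrated by turning a user's relevance judgements into per-feature weights and, from them, a weighted distance. The variance-based weighting heuristic and the toy data below are assumptions for illustration, not the formulation developed in the thesis.

```python
import numpy as np

def build_user_profile(relevant_vectors):
    """Aggregate the feature vectors a user marked as relevant into per-feature
    weights: features with low variance among relevant images are judged more
    important to this user's notion of similarity (a common heuristic)."""
    rel = np.asarray(relevant_vectors)
    variance = rel.var(axis=0) + 1e-9          # avoid division by zero
    weights = 1.0 / variance
    return weights / weights.sum()             # normalise to sum to 1

def user_weighted_distance(weights):
    """Return a weighted Euclidean distance tuned to one user's profile."""
    def dist(a, b):
        return float(np.sqrt(np.sum(weights * (a - b) ** 2)))
    return dist

# Usage: two users judging the same collection differently end up with different metrics.
rng = np.random.default_rng(1)
imgs = rng.random((50, 6))
d_user_a = user_weighted_distance(build_user_profile(imgs[[1, 4, 9]]))
d_user_b = user_weighted_distance(build_user_profile(imgs[[2, 8, 30]]))
print(d_user_a(imgs[0], imgs[1]), d_user_b(imgs[0], imgs[1]))
```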
14

Vyhledávání v multimodálních databázích / Multimodal Database Search

Krejčíř, Tomáš January 2009 (has links)
The field that deals with storing and effectively searching multimedia documents is called information retrieval. This thesis describes a solution for effective searching in collections of video shots. Multimedia documents are represented as vectors in a high-dimensional space, because in such a representation it is easier to define the semantics of the documents as well as the searching mechanisms. The work focuses on similarity search in metric spaces, using distance functions such as the Euclidean, Chebyshev or Mahalanobis distances to compare global features, and the cosine or binary rating to compare local features. Experiments on the TRECVid dataset compare the implemented distance functions: the best distance function for global features appears to be the Mahalanobis distance, and for local features the cosine rating.
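For reference, most of the measures named in the abstract (Euclidean, Chebyshev and Mahalanobis distances plus the cosine measure; the binary rating is omitted) can be written compactly as below. This is a generic sketch with toy feature vectors, not the thesis' implementation.

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def chebyshev(a, b):
    return float(np.max(np.abs(a - b)))

def mahalanobis(a, b, cov_inv):
    d = a - b
    return float(np.sqrt(d @ cov_inv @ d))

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy global feature vectors; the inverse covariance used by the Mahalanobis
# distance is estimated from the whole collection, as is usual.
rng = np.random.default_rng(42)
collection = rng.random((200, 16))
cov_inv = np.linalg.inv(np.cov(collection, rowvar=False))
a, b = collection[0], collection[1]
print(euclidean(a, b), chebyshev(a, b), mahalanobis(a, b, cov_inv), cosine_similarity(a, b))
```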
15

Indexação de dados em domínios métricos generalizáveis / Indexing complex data in Generic Metric Domains.

Pola, Ives Renê Venturini 10 June 2005 (has links)
Database management systems (DBMSs) were developed to handle numeric data and short character strings, and were not designed to manipulate complex data such as multimedia. Operators that rely on the total-order property are of little use for such data; a class of operators better suited to it is the similarity operators: range queries and k-nearest-neighbor queries. Although many similarity-search algorithms have been proposed, they all assume a single distance function that must be universally applicable to every pair of elements in the data set. This project explores index structures built on metric-domain concepts that admit an adaptable distance function, i.e., one that changes for certain groups of objects depending on some universal features, thus accommodating characteristics that are particular to some classes of images rather than shared by the entire set. Images are classified into a hierarchy of types, where each type is associated with a different distance function and a different feature vector, all indexed in the same tree. This flexibility also allows the set of features used for each element to be reduced individually, relying on the values obtained for one or a few features extracted first; these values then guide which other important features should be extracted from the data.
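A minimal sketch of the central idea (binding a different distance function to each type in the hierarchy and dispatching on that type during a similarity query) is shown below. The type names, metrics and flat scan are illustrative assumptions; the thesis indexes all types within a single tree structure.

```python
import numpy as np

def euclidean(a, b): return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
def manhattan(a, b): return float(np.sum(np.abs(np.asarray(a) - np.asarray(b))))

# Each type in the hierarchy is bound to its own distance function.
DISTANCE_BY_TYPE = {
    "xray": euclidean,   # e.g. texture vectors compared with L2
    "mri":  manhattan,   # e.g. histogram vectors compared with L1
}

def adaptive_distance(elem_a, elem_b):
    """Compare two elements of the same type with the distance bound to that type."""
    if elem_a["type"] != elem_b["type"]:
        raise ValueError("elements of different types are not directly comparable")
    return DISTANCE_BY_TYPE[elem_a["type"]](elem_a["features"], elem_b["features"])

def range_query(query, dataset, radius):
    """Range query restricted to the query's type, using its adapted distance."""
    return [e for e in dataset
            if e["type"] == query["type"] and adaptive_distance(query, e) <= radius]

db = [{"type": "xray", "features": [0.20, 0.40]},
      {"type": "mri",  "features": [0.10, 0.90]},
      {"type": "xray", "features": [0.25, 0.38]}]
print(range_query({"type": "xray", "features": [0.20, 0.40]}, db, radius=0.1))
```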
16

Análise da influência de funções de distância para o processamento de consultas por similaridade em recuperação de imagens por conteúdo / Analysis of the influence of distance functions to answer similarity queries in content-based image retrieval.

Bugatti, Pedro Henrique 16 April 2008 (has links)
Content-based image retrieval (CBIR) relies on two key aspects: a feature extractor, which must provide the most meaningful intrinsic characteristics (features) of the data, and a distance function, which quantifies the similarity between them. A central challenge in answering similarity queries is how to best integrate these two aspects. There is plenty of research on feature-extraction algorithms for images, but little attention has been paid to the importance of properly pairing a distance function with a feature extractor. This Master's dissertation was conceived to fill that gap. It investigates the behavior of different distance functions over distinct feature-vector types, evaluating the three main types of image features: color distribution, texture and shape. Two new feature-selection techniques were also proposed to improve precision when answering similarity queries: the first employs statistical association rules and achieved gains of up to 38% in precision, while the second, based on the Shannon entropy, achieved gains of approximately 71% while significantly reducing the dimensionality of the feature vectors. The work also shows that the proper use of a distance function effectively improves similarity-query results, opening new ways to enhance the design and acceptance of CBIR systems.
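As an illustration of the entropy-based selection mentioned above, the sketch below scores each feature by the Shannon entropy of its histogram over the collection and keeps the highest-scoring half. The histogram binning, the keep-highest criterion and the toy data are assumptions; the dissertation's exact formulation may differ.

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Entropy (in bits) of one feature across the collection, from a histogram."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_features_by_entropy(features, keep_ratio=0.5):
    """Keep the features with the highest entropy, i.e. those that spread the
    images most evenly and are therefore treated as more discriminative here."""
    entropies = np.array([shannon_entropy(features[:, j]) for j in range(features.shape[1])])
    k = max(1, int(keep_ratio * features.shape[1]))
    selected = np.argsort(entropies)[::-1][:k]
    return np.sort(selected)

# Usage: 256-bin colour histograms reduced to the most informative half.
rng = np.random.default_rng(7)
color_histograms = rng.random((500, 256))
kept = select_features_by_entropy(color_histograms, keep_ratio=0.5)
print(len(kept), "features kept out of", color_histograms.shape[1])
```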
17

Modelling the Cross-Country Trafficability with Geographical Information Systems

Gumos, Aleksander Karol January 2005 (has links)
The main objective of this work was to investigate Geographical Information Systems (GIS) techniques for modelling cross-country trafficability. To accomplish this, the reciprocal relationships between soil deposits, local hydrology, geology and geomorphology were studied for a study area in south-eastern Sweden. GIS users are increasingly aware of how soil conditions change under cross-country traffic; the Soil Knowledge Database constructed in this thesis therefore complements the original geological soil-texture classes with a modified geotechnical division based on measurable factors relevant to off-road ground reasoning, such as soil permeability, capillarity and the Atterberg consistency limits. The Digital Elevation Model, the driving force for the landscape studies in the thesis, was carefully examined together with the complementary datasets of the investigated area. The elevation data were tested in the context of hydrological modelling, which produced a Wetness Index map. Three distinguishable soil wetness conditions (dry, moist and wet) were obtained and subsequently used to create a static ground-conditions map, a visible indicator of the soils' susceptibility to, for example, machine compaction. The work resulted in a conceptual scheme for cross-country trafficability modelling, which was then implemented in GIS. As a final outcome, all processed data were combined and the derived layers were draped over a rendered, animated 3D scene. This visually aided simulation consolidated the theoretical, hypothetical and experimental outcomes into one coherent model of factor maps for ground-vehicle manoeuvrability, standardized and appraised with multicriteria evaluation techniques. Further research steps were also proposed.
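The wetness analysis described above can be illustrated with the standard Topographic Wetness Index, TWI = ln(a / tan(beta)), computed from DEM derivatives and thresholded into the three classes dry, moist and wet. The thresholds, cell size and toy rasters below are assumptions, not values from the thesis.

```python
import numpy as np

def wetness_index(flow_accumulation, slope_deg, cell_size=10.0):
    """Topographic Wetness Index, TWI = ln(a / tan(beta)), where `a` is a specific
    catchment-area proxy derived from flow accumulation and `beta` is the local slope."""
    a = (flow_accumulation + 1.0) * cell_size          # specific catchment area proxy
    beta = np.radians(np.clip(slope_deg, 0.1, None))   # avoid division by zero on flats
    return np.log(a / np.tan(beta))

def classify_wetness(twi, moist_threshold=6.0, wet_threshold=9.0):
    """Map the continuous index onto the three classes used in the thesis: dry, moist, wet."""
    classes = np.full(twi.shape, "dry", dtype=object)
    classes[twi >= moist_threshold] = "moist"
    classes[twi >= wet_threshold] = "wet"
    return classes

# Toy rasters standing in for DEM derivatives.
rng = np.random.default_rng(3)
flow_acc = rng.integers(0, 500, size=(4, 4)).astype(float)
slope = rng.uniform(0.5, 25.0, size=(4, 4))
print(classify_wetness(wetness_index(flow_acc, slope)))
```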
18

Modelling the Cross-Country Trafficability with Geographical Information Systems

Gumos, Aleksander Karol January 2005 (has links)
The main objective of this work was to investigate Geographical Information Systems (GIS) techniques for modelling cross-country trafficability. To accomplish this, the reciprocal relationships between soil deposits, local hydrology, geology and geomorphology were studied for a study area in south-eastern Sweden. GIS users are increasingly aware of how soil conditions change under cross-country traffic; the Soil Knowledge Database constructed in this thesis therefore complements the original geological soil-texture classes with a modified geotechnical division based on measurable factors relevant to off-road ground reasoning, such as soil permeability, capillarity and the Atterberg consistency limits. The Digital Elevation Model, the driving force for the landscape studies in the thesis, was carefully examined together with the complementary datasets of the investigated area. The elevation data were tested in the context of hydrological modelling, which produced a Wetness Index map. Three distinguishable soil wetness conditions (dry, moist and wet) were obtained and subsequently used to create a static ground-conditions map, a visible indicator of the soils' susceptibility to, for example, machine compaction. The work resulted in a conceptual scheme for cross-country trafficability modelling, which was then implemented in GIS. As a final outcome, all processed data were combined and the derived layers were draped over a rendered, animated 3D scene. This visually aided simulation consolidated the theoretical, hypothetical and experimental outcomes into one coherent model of factor maps for ground-vehicle manoeuvrability, standardized and appraised with multicriteria evaluation techniques. Further research steps were also proposed.
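The multicriteria evaluation step mentioned at the end of the abstract is, in its simplest form, a weighted linear combination of standardized factor maps. The sketch below shows that form with illustrative factors and weights; the actual factors and weights derived in the thesis are not reproduced here.

```python
import numpy as np

def standardize(raster, higher_is_better=True):
    """Rescale a factor map to 0-1 so that different units (slope, wetness, soil
    strength) become comparable, as multicriteria evaluation requires."""
    lo, hi = np.nanmin(raster), np.nanmax(raster)
    scaled = (raster - lo) / (hi - lo)
    return scaled if higher_is_better else 1.0 - scaled

def trafficability_score(factors, weights):
    """Weighted linear combination of standardized factor maps; the weights here
    are illustrative, not the ones derived in the thesis."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * f for w, f in zip(weights, factors))

# Toy factor maps: soil bearing strength (higher is better), wetness index and
# slope in degrees (both lower is better).
rng = np.random.default_rng(5)
strength = standardize(rng.uniform(20, 200, (4, 4)))
wetness  = standardize(rng.uniform(3, 12, (4, 4)), higher_is_better=False)
slope    = standardize(rng.uniform(0, 30, (4, 4)), higher_is_better=False)
score = trafficability_score([strength, wetness, slope], weights=[0.5, 0.3, 0.2])
print(np.round(score, 2))
```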
19

Indexação de dados em domínios métricos generalizáveis / Indexing complex data in Generic Metric Domains.

Ives Renê Venturini Pola 10 June 2005 (has links)
Database management systems (DBMSs) were developed to handle numeric data and short character strings, and were not designed to manipulate complex data such as multimedia. Operators that rely on the total-order property are of little use for such data; a class of operators better suited to it is the similarity operators: range queries and k-nearest-neighbor queries. Although many similarity-search algorithms have been proposed, they all assume a single distance function that must be universally applicable to every pair of elements in the data set. This project explores index structures built on metric-domain concepts that admit an adaptable distance function, i.e., one that changes for certain groups of objects depending on some universal features, thus accommodating characteristics that are particular to some classes of images rather than shared by the entire set. Images are classified into a hierarchy of types, where each type is associated with a different distance function and a different feature vector, all indexed in the same tree. This flexibility also allows the set of features used for each element to be reduced individually, relying on the values obtained for one or a few features extracted first; these values then guide which other important features should be extracted from the data.
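The metric-domain setting assumed by this work can be illustrated independently of any particular tree: pre-computed distances to a few pivots let the triangle inequality discard most objects before the (possibly expensive) query distance is evaluated. The pivot choice, metric and toy data below are assumptions, not the access method proposed in the thesis.

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

class PivotIndex:
    """Minimal pivot table for metric range queries. For any metric d, the triangle
    inequality gives |d(q,p) - d(x,p)| <= d(q,x), so objects whose pre-computed
    pivot distances differ from the query's by more than the radius can be pruned
    without evaluating the query distance."""

    def __init__(self, data, pivot_ids, dist=euclidean):
        self.data, self.dist = data, dist
        self.pivots = [data[i] for i in pivot_ids]
        # Pre-computed table: distance from every object to every pivot.
        self.table = np.array([[dist(x, p) for p in self.pivots] for x in data])

    def range_query(self, q, radius):
        q_to_p = np.array([self.dist(q, p) for p in self.pivots])
        results = []
        for i, x in enumerate(self.data):
            if np.any(np.abs(q_to_p - self.table[i]) > radius):
                continue                      # safely pruned by the triangle inequality
            if self.dist(q, x) <= radius:     # verify the survivors
                results.append(i)
        return results

rng = np.random.default_rng(11)
points = rng.random((1000, 8))
index = PivotIndex(points, pivot_ids=[0, 123, 456])
print(index.range_query(points[7], radius=0.4))
```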
20

Gestion efficace de données et couverture dans les réseaux de capteurs sans fil / Energy efficient data handling and coverage for wireless sensor networks

Moustafa Harb, Hassan 12 July 2016 (has links)
In this thesis, we propose energy-efficient data-management techniques dedicated to periodic sensor networks based on a clustering architecture. First, we propose to adapt each sensor's sampling rate to the changing dynamics of the monitored condition using a one-way ANOVA model and statistical tests (Fisher, Tukey and Bartlett), while taking into account the sensor's residual energy. The second objective is to eliminate the redundant data generated in each cluster. At the sensor level, each sensor searches for similarity between the readings collected in each period and among successive periods, based on set-similarity functions. At the cluster-head (CH) level, we use distance functions to allow the CH to eliminate redundant data sets generated by neighboring nodes. Finally, we propose two sleep/active strategies for scheduling the sensors in each cluster, after searching for the spatio-temporal correlation between sensor nodes. The first strategy is based on the set-covering problem, while the second takes advantage of the correlation degree and the sensors' residual energies to schedule the nodes in each cluster. To evaluate the performance of the proposed techniques, simulations on real sensor data were conducted. We analyzed their performance in terms of energy consumption, data latency and accuracy, and area coverage, and we show how our techniques can significantly improve the performance of sensor networks.
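A rough sketch of the two redundancy-elimination levels described above follows: a set-similarity test at the sensor level and a distance-threshold filter at the cluster-head level. The Jaccard function, Euclidean distance, thresholds and toy readings are assumptions for illustration, not the exact functions evaluated in the thesis.

```python
import numpy as np

def jaccard(set_a, set_b):
    """Set similarity between the readings of two periods (one member of the
    set-similarity family; the thesis' exact functions may differ)."""
    a, b = set(set_a), set(set_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def sensor_should_transmit(current, previous, threshold=0.8):
    """Sensor level: skip transmission when the new period's readings are
    near-duplicates of the previous period's."""
    return jaccard(current, previous) < threshold

def ch_deduplicate(datasets, max_distance=0.5):
    """Cluster-head level: keep only data sets that are farther than `max_distance`
    (Euclidean, here) from every set already kept."""
    kept = []
    for d in datasets:
        if all(np.linalg.norm(np.asarray(d) - np.asarray(k)) > max_distance for k in kept):
            kept.append(d)
    return kept

# Usage: a node reporting almost identical temperature readings in two periods,
# and a CH receiving near-duplicate data sets from two neighbouring nodes.
readings_t1 = [21, 21, 22, 22, 23]
readings_t2 = [21, 22, 22, 23, 23]
print(sensor_should_transmit(readings_t2, readings_t1))                 # False: too similar
print(len(ch_deduplicate([[21.0, 22.0], [21.1, 22.1], [25.0, 30.0]])))  # 2 sets survive
```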
