151

Stochastic representation and analysis of rough surface topography by random fields and integral geometry – Application to the UHMWPE cup involved in total hip arthroplasty / Modélisation stochastique et analyse de topographie de surfaces rugueuses par champs aléatoire et géométrie intégrale – Application aux cupules à double mobilité pour prothèse totale de hanche

Ahmad, Ola 23 September 2013
Surface topography generally comprises many length scales, from the macroscopic scale of the physical geometry down to the microscopic or atomic scales known as roughness. The spatial and geometrical evolution of the roughness of engineering surfaces gives a more complete description of the surface and a physical interpretation of important problems such as friction and the wear mechanisms at work during mechanical contact between two surfaces. The topography of a rough surface is random in nature: it consists of spatially correlated heights, the hills (peaks) and valleys. The relations between their probability densities and their geometric properties are the fundamental topics developed in this thesis, using the theory of random fields and integral geometry. An appropriate random field model of a rough surface is defined through its most significant parameters, whose changes influence the geometry of its excursion sets. The excursion sets are quantified by functionals known as Minkowski functionals or, equivalently, intrinsic volumes, which have many physical interpretations in practice; deriving their analytical formulae makes it possible to estimate the parameters of the random field model fitted to the surface and to carry out statistical analysis of its excursion sets. First, the intrinsic volumes of the excursion sets are derived analytically for a class of mixture models defined as the linear combination of a Gaussian and a Student's t random field, and then for the class of asymmetric random fields known as skew-t fields; they are compared and tested on surfaces generated by numerical simulation. In a second stage, the proposed random field models are applied to real surfaces measured on a UHMWPE cup (a component of a total hip prosthesis) before and after wear testing. The results show that the skew-t random field is a more adequate and flexible model of worn-surface roughness than the models adopted in the literature. Building on this, a statistical analysis approach based on the skew-t random field is proposed; it estimates, hierarchically, the significant levels corresponding to real hills and valleys among the uncertain measurements. The evolution of the mean area of the hills/valleys and of their levels describes the functional behaviour of the UHMWPE surface over wear time and indicates the predominant wear mechanisms.
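To make the notion of excursion sets and their intrinsic volumes concrete, the sketch below (illustrative only, not taken from the thesis) thresholds a simulated, approximately Gaussian rough surface at several levels and reports two intrinsic volumes of each excursion set: its area fraction and its Euler characteristic. The simulated field, the smoothing scale and the chosen levels are assumptions; NumPy, SciPy and scikit-image are assumed to be available.

```python
# Illustrative sketch: excursion sets of a simulated rough surface and two of
# their intrinsic volumes (area fraction and Euler characteristic).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import euler_number

rng = np.random.default_rng(0)
surface = gaussian_filter(rng.standard_normal((512, 512)), sigma=4)
surface /= surface.std()  # approximately unit-variance, smooth Gaussian field

for u in (-1.0, 0.0, 1.0, 2.0):
    excursion = surface >= u                       # excursion set at level u
    area_fraction = excursion.mean()               # ~ P(Z >= u) for a Gaussian field
    chi = euler_number(excursion, connectivity=1)  # components minus holes
    print(f"level {u:+.1f}: area fraction {area_fraction:.3f}, Euler characteristic {chi}")
```

For Gaussian and some related fields the expected values of these quantities are known in closed form, which is what makes them usable for fitting the parameters of a random field model to a measured surface.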
152

Segmentation of heterogeneous document images : an approach based on machine learning, connected components analysis, and texture analysis / Segmentation d'images hétérogènes de documents : une approche basée sur l'apprentissage automatique de données, l'analyse en composantes connexes et l'analyse de texture

Bonakdar Sakhi, Omid 06 December 2012
Document page segmentation is one of the most crucial steps in document image analysis. Ideally, it aims to recover the full structure of any document page, distinguishing text zones, graphics, photographs, halftones, figures, tables, etc. Although many methods have been proposed to date to produce a correct page segmentation, numerous difficulties remain. The leader of the project within which this PhD work was funded (*) uses a complete processing chain in which page segmentation mistakes are corrected manually by human operators. Aside from the cost this represents, the result depends on the tuning of a large number of parameters, and some segmentation mistakes occasionally escape the vigilance of the operators. Current page segmentation methods give generally acceptable results on clean, well-printed documents, but they often fail on handwritten documents, when the layout structure is loosely defined, or when side notes are present on the page; tables and advertisements raise further challenges for segmentation algorithms. Our method addresses these problems and is divided into four parts:
1. Unlike most classical page segmentation methods, we first separate the text and graphics components of the page using a boosted decision tree classifier.
2. The separated text and graphics components are used, among other features, in a two-dimensional conditional random field to separate the columns of text.
3. A text line detection method based on piecewise projection profiles is then applied to detect text lines with respect to the text region boundaries.
4. Finally, a new paragraph detection method, trained on the most common paragraph models, is applied to the text lines to extract paragraphs, relying on the geometric appearance of the text lines and their indentation.
Our contribution over existing work lies essentially in the use, or adaptation, of algorithms borrowed from the machine learning literature to solve the most difficult cases. We demonstrate a number of improvements: in separating text columns when they are very close to one another; in preventing the contents of adjacent table cells from being merged; and in preventing regions inside a frame from being merged with surrounding text regions, especially side notes, even when the latter are written in a font similar to that of the body text. A quantitative assessment, and a comparison of the performance of our method with competing algorithms using widely acknowledged metrics and evaluation methodologies, is also provided to a large extent.
(*) This PhD thesis was funded by the Conseil Général de Seine-Saint-Denis through the FUI6 Demat-Factory project, initiated and led by SAFIG SA.
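As a rough illustration of the first step (text/graphics separation with a boosted decision tree), the sketch below trains a gradient-boosted tree classifier on per-connected-component features. The feature set and the randomly generated training data are placeholders, not the features or ground truth used in the thesis.

```python
# Illustrative sketch: labelling connected components as text vs graphics with a
# boosted tree classifier. Features and labels here are random placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical per-component features: height, width, aspect ratio, ink density.
X_train = rng.random((200, 4))
y_train = rng.integers(0, 2, size=200)   # 1 = text, 0 = graphics (dummy labels)

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
clf.fit(X_train, y_train)

X_page = rng.random((10, 4))             # features of components from a new page
print(clf.predict(X_page))               # predicted text/graphics labels
```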
153

Caractérisation temporelle et spectrale de champs instationnaires non gaussiens : application aux hydroliennes en milieu marin / Temporal and spectral characterization of non-stationary non-gaussian fields : application to tidal turbines in marine environment

Suptille, Mickaël 09 January 2015
The operating environment of the blades and supporting structures of tidal turbines is uncertain, owing to the variability of the flow (turbulence, wake, swell, currents, ...). These structural elements therefore undergo complex multi-axial stress states with strong, random time variations, so that a design based on static, deterministic criteria is insufficient to account for the complexity and variability of the mechanical loading history. This work aims to establish design methods adapted to this situation, in order to design tidal turbine structures with controlled risks and costs. The adopted approach relies on a statistical description of the flow and of its statistical quantities, in order to characterize the loads exerted on the turbine and the extreme mechanical stresses at the blade root.
154

Detecção de estruturas finas e ramificadas em imagens usando campos aleatórios de Markov e informação perceptual / Detection of thin and ramified structures in images using Markov random fields and perceptual information

Leite, Talita Perciano Costa 28 August 2012
Line-like and curve-like, elongated, and ramified structures are commonly found in many known ecosystems; in biomedicine and the biosciences, for instance, several applications can be observed. Precisely for this reason, extracting this kind of structure from images is a constant challenge in image analysis, and several difficulties are involved in the process. The spectral and spatial characteristics of these structures are usually complex and variable, and the thinnest ones are very fragile to any processing applied to the image, so important information is easily lost. Another common problem is that part of a structure may be missing, whether because of low resolution, acquisition problems, or occlusion. This work aims to explore, describe, and develop techniques for the detection/segmentation of thin and ramified structures. Different methods are used in combination, seeking a better topological and perceptual representation of the structures and, therefore, better results. Graphs are used to represent the structures; this data structure has been used successfully in the literature to solve many image processing and analysis problems. Because of the fragility of the structures in question, computer vision principles are used in addition to the usual image processing techniques, in search of a better perceptual understanding of these structures in the image. This perceptual information, together with contextual information about the structures, is used in a Markov random field model, and the final detection is obtained through an optimization process. Finally, we also propose the combined use of different image modalities simultaneously. A software package resulting from the implementation of the developed framework is used in two applications to evaluate the proposed approach: the extraction of road networks from satellite images and the extraction of plant roots from soil profile images. Results of the proposed approach for road network extraction show better performance than an existing method from the literature, and the proposed fusion technique brings a significant improvement according to the presented results. Original and promising results are presented for the extraction of plant roots from soil profile images.
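A small sketch of one ingredient mentioned above, the graph representation of a thin, ramified structure: a toy binary mask is skeletonized and its skeleton pixels are linked into a pixel-adjacency graph. The mask, the 8-connectivity choice and the use of NetworkX are assumptions for illustration; the construction used in the thesis may differ.

```python
# Illustrative sketch: from a binary mask of a thin structure to an adjacency graph.
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

mask = np.zeros((64, 64), dtype=bool)
mask[32, 5:60] = True          # a horizontal branch
mask[10:32, 30] = True         # a vertical branch joining it (toy ramification)

skeleton = skeletonize(mask)
graph = nx.Graph()
ys, xs = np.nonzero(skeleton)
for y, x in zip(ys, xs):
    # connect each skeleton pixel to its 8-neighbours that are also on the skeleton
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy or dx) and 0 <= y + dy < 64 and 0 <= x + dx < 64 and skeleton[y + dy, x + dx]:
                graph.add_edge((int(y), int(x)), (int(y + dy), int(x + dx)))

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```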
155

Reconhecimento de entidades nomeadas na área da geologia : bacias sedimentares brasileiras / Named entity recognition in the field of geology : Brazilian sedimentary basins

Amaral, Daniela Oliveira Ferreira do 14 September 2017
The treatment of textual information has become increasingly relevant in many domains. One of the first tasks in extracting information from texts is Named Entity Recognition (NER), which consists of identifying references to certain entities and determining their classification. NER has been applied in many domains, the most usual being medicine and biology. One of the challenging domains for the recognition of named entities (NE) is geology, an area lacking computational linguistic resources. This thesis proposes a method for the recognition of relevant NE in the field of geology, specifically the subarea of Brazilian sedimentary basins, in Portuguese texts. Generic and geological features were defined for the generation of a machine learning model. Among the automatic approaches to NE classification, the most prominent is the Conditional Random Fields (CRF) probabilistic model, which has been used effectively for processing natural language text. To train the model, GeoCorpus was created: a reference corpus for geological NER, annotated by specialists. Experimental evaluations were performed to compare the proposed method with other classifiers. The best results were achieved by CRF, which reached 76.78% precision and 54.33% F-measure.
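A minimal sketch of CRF-based sequence tagging in the spirit of this method, using the third-party sklearn-crfsuite package. The two toy sentences, the feature function and the B-BASIN/I-BASIN tag set are assumptions and do not reproduce the GeoCorpus annotation scheme or the features defined in the thesis.

```python
# Illustrative sketch: token-level features plus a linear-chain CRF for NER.
import sklearn_crfsuite

def word_features(sentence, i):
    word = sentence[i]
    return {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "prev.lower": sentence[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": sentence[i + 1].lower() if i < len(sentence) - 1 else "<EOS>",
    }

sentences = [["A", "Bacia", "de", "Campos", "produz", "petróleo"],
             ["A", "Bacia", "do", "Paraná", "é", "extensa"]]
labels = [["O", "B-BASIN", "I-BASIN", "I-BASIN", "O", "O"],
          ["O", "B-BASIN", "I-BASIN", "I-BASIN", "O", "O"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))   # predicted tag sequences for the toy sentences
```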
156

Combinação de modelos de campos aleatórios markovianos para classificação contextual de imagens multiespectrais / Combining markov random field models for multispectral image contextual classification

Levada, Alexandre Luis Magalhães 05 May 2010
This thesis presents a novel MAP-MRF approach to multispectral image contextual classification that combines higher-order Markov random field (MRF) models. The statistical modeling follows the Bayesian paradigm: a multispectral Gaussian Markov random field model is defined for the observations and a Potts MRF model represents the prior knowledge. In this scenario, the Potts parameter β plays the role of a regularization parameter, controlling the trade-off between the likelihood and the prior, so that suitable tuning of this parameter is required for good classification performance. Introducing higher-order neighborhood systems requires new parameter estimation methods, and one contribution of this work is the derivation of novel pseudo-likelihood equations for estimating the Potts model parameters in second- and third-order neighborhood systems. Despite its wide use in practical MRF applications, little is known about the accuracy of maximum pseudo-likelihood estimation; approximations for the asymptotic variance of the proposed estimators are derived, completely characterizing their behavior in the limiting case and allowing statistical inference and quantitative analysis of the MRF parameters.
Once the models are defined and their parameters estimated, the next step is the classification of the multispectral images. The solution of this Bayesian inference problem is given by the MAP criterion, where the optimal solution maximizes the a posteriori distribution, which defines an optimization problem. As there is no analytical solution in the case of Markovian priors, combinatorial optimization algorithms are required to approximate the optimal solution. In this work, three suboptimal methods are used: Iterated Conditional Modes, Maximizer of the Posterior Marginals, and the Game Strategy Approach, a variant based on non-cooperative game theory. It has been shown, however, that these methods converge to local rather than global maxima, since they are highly dependent on the initial condition. This fact motivated the development of a novel approach for combining contextual classifiers that uses multiple simultaneous initializations, each provided by a different pointwise statistical classifier. The proposed methodology defines a robust MAP-MRF framework for the solution of inverse problems, since it allows the use and integration of several initial conditions in applications such as image classification, denoising, and restoration. As quantitative performance measures, Cohen's Kappa and Kendall's Tau coefficients are used to verify the agreement between the classifier outputs and the ground truth (pre-labelled samples). The results show that the inclusion of higher-order neighborhood systems significantly improves not only the classification performance but also the estimation of the MRF parameters, reducing both the estimation error and the asymptotic variance. In addition, combining contextual classifiers through multiple simultaneous initializations significantly improves classification performance compared with the traditional single-initialization approach.
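To illustrate how the Potts parameter β trades the likelihood off against the contextual prior, the sketch below runs a few Iterated Conditional Modes (ICM) sweeps on a toy two-class image with a Gaussian likelihood. The image, the noise level and the value of β are assumptions, and the thesis combines several contextual classifiers rather than a single ICM run.

```python
# Illustrative sketch: ICM refinement of a pointwise classification under a Potts prior.
import numpy as np

rng = np.random.default_rng(0)
true_labels = np.zeros((64, 64), dtype=int)
true_labels[:, 32:] = 1
means = np.array([0.0, 1.0])
obs = means[true_labels] + 0.8 * rng.standard_normal(true_labels.shape)

labels = (obs > 0.5).astype(int)   # pointwise (non-contextual) initial classification
beta = 1.5                          # Potts regularization parameter

for _ in range(5):                  # a few ICM sweeps over the interior pixels
    for y in range(1, 63):
        for x in range(1, 63):
            neighbours = [labels[y-1, x], labels[y+1, x], labels[y, x-1], labels[y, x+1]]
            costs = []
            for k in (0, 1):
                likelihood = (obs[y, x] - means[k]) ** 2 / (2 * 0.8 ** 2)
                prior = beta * sum(n != k for n in neighbours)
                costs.append(likelihood + prior)
            labels[y, x] = int(np.argmin(costs))

print("pixel agreement with ground truth:", (labels == true_labels).mean())
```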
157

Model selection for discrete Markov random fields on graphs / Seleção de modelos para campos aleatórios Markovianos discretos sobre grafos

Frondana, Iara Moreira 28 June 2016
In this thesis we propose a penalized maximum conditional likelihood criterion to estimate the conditional dependence graph of a general discrete Markov random field. We prove the almost sure convergence of the graph estimator in the case of a finite or countably infinite set of variables. Our method requires minimal assumptions on the probability distribution and, contrary to other approaches in the literature, the usual positivity condition is not needed. We present several examples with a finite set of vertices and study the performance of the estimator on data simulated from these examples. We also introduce an empirical procedure based on k-fold cross-validation to select the best value of the constant in the estimator's definition, and we show the application of this method to two real datasets.
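A toy sketch of the penalized maximum conditional likelihood idea for a single vertex of a binary field: each candidate neighbourhood is scored by its empirical conditional log-likelihood minus a penalty proportional to its size times log n. The data-generating model, the penalty constant c and the exhaustive search over two candidate neighbours are simplifications for illustration, not the estimator studied in the thesis in full generality.

```python
# Illustrative sketch: neighbourhood selection for vertex 0 of a binary field.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
x0 = np.where(rng.random(n) < 0.8, x1, 1 - x1)   # X0 depends on X1 only
data = np.column_stack([x0, x1, x2])

def conditional_loglik(data, target, neighbours):
    """Empirical conditional log-likelihood of data[:, target] given the neighbours."""
    cols = data[:, list(neighbours)] if neighbours else np.zeros((len(data), 0), dtype=int)
    total = 0.0
    for config in {tuple(row) for row in cols}:
        mask = np.all(cols == config, axis=1)
        p1 = np.clip(data[mask, target].mean(), 1e-6, 1 - 1e-6)
        total += np.sum(np.where(data[mask, target] == 1, np.log(p1), np.log(1 - p1)))
    return total

c = 1.0   # penalty constant (would be chosen by cross-validation in practice)
scores = {}
for k in range(3):
    for nb in itertools.combinations([1, 2], k):
        scores[nb] = conditional_loglik(data, 0, nb) - c * len(nb) * np.log(n)

print(max(scores, key=scores.get))   # expected: (1,), i.e. vertex 1 only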
158

Markov Random Field Based Road Network Extraction From High Resolution Satellite Images

Ozturk, Mahir 01 February 2013
Road networks play an important role in various applications such as urban and rural planning, infrastructure planning, transportation management, and vehicle navigation. Extraction of roads from remotely sensed satellite images for updating the road database in geographic information systems (GIS) is generally done manually by a human operator; however, manual extraction of roads is a time-consuming and labor-intensive process. In the existing literature, a great number of studies have been published with the aim of automating the road extraction process, yet automated processes still yield erroneous and incomplete results and human intervention is still required. The aim of this research is to propose a framework for road network extraction from high spatial resolution multi-spectral imagery (MSI) that improves the accuracy of road extraction systems. The proposed framework begins with a spectral classification using one-class Support Vector Machine (SVM) and Gaussian Mixture Model (GMM) classifiers; spectral classification exploits the spectral signature of road surfaces to classify road pixels. An iterative template matching filter is then proposed to refine the spectral classification results. The k-medians clustering algorithm is employed to detect candidate road centerline points, and the final road network formation is achieved with Markov random fields. The extracted road network is evaluated against a reference dataset using a set of quality metrics.
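As an illustration of the centerline stage described above, the sketch below clusters hypothetical road-pixel coordinates with a simple Lloyd-style k-medians iteration (L1 distance, coordinate-wise median update). The synthetic road pixels, the number of clusters and the fixed number of iterations are assumptions, not the configuration used in the thesis.

```python
# Illustrative sketch: k-medians clustering of road-pixel coordinates to obtain
# candidate road centerline points.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical road pixels scattered along a diagonal road a few pixels wide.
t = rng.uniform(0, 100, 600)
road_pixels = np.column_stack([t + rng.normal(0, 1.5, 600), t + rng.normal(0, 1.5, 600)])

k = 10
centers = road_pixels[rng.choice(len(road_pixels), k, replace=False)]
for _ in range(20):
    # assign each pixel to the nearest centre under the L1 (city-block) distance
    d = np.abs(road_pixels[:, None, :] - centers[None, :, :]).sum(axis=2)
    assign = d.argmin(axis=1)
    # k-medians update: each centre moves to the coordinate-wise median of its pixels
    for j in range(k):
        if np.any(assign == j):
            centers[j] = np.median(road_pixels[assign == j], axis=0)

print(np.round(centers, 1))   # candidate centerline points
```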
159

Example-based Rendering of Textural Phenomena

Kwatra, Vivek 19 July 2005
This thesis explores synthesis by example as a paradigm for rendering real-world phenomena. In particular, phenomena that can be visually described as texture are considered. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented. For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In one of these techniques, large patches of input texture are automatically aligned and seamlessly stitched with each other to generate realistic looking images and videos. The second technique is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar. We also present a technique for controllable texture synthesis. In particular, it allows for generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain the structural properties like local shape, size, and orientation of the input texture even as they move according to the specified flow. We cast this problem into an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis. A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms. The goal of such algorithms would be to use texture exemplars to render animations for which certain behavioral characteristics need to be controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework where the characteristic being controlled is motion represented as a flow field.
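A greatly simplified sketch of the patch-stitching idea (no graph-cut seams, no real image): a texture strip is extended patch by patch, each new patch chosen so that its overlap with the already-synthesized region has minimal squared error. The random stand-in "texture", the patch size and the overlap width are placeholders, not the thesis algorithm.

```python
# Illustrative sketch: overlap-matched patch placement for texture extension.
import numpy as np

rng = np.random.default_rng(0)
exemplar = rng.random((64, 64))            # stand-in for an input texture image
patch, overlap = 16, 4
out = np.zeros((patch, 128))
out[:, :64] = exemplar[:patch, :64]        # start from a strip of the exemplar

x = 64 - overlap
while x + patch <= out.shape[1]:
    target = out[:, x:x + overlap]         # region the new patch must match
    best, best_err = None, np.inf
    for _ in range(200):                   # sample candidate source patches
        sy = rng.integers(0, exemplar.shape[0] - patch + 1)
        sx = rng.integers(0, exemplar.shape[1] - patch + 1)
        cand = exemplar[sy:sy + patch, sx:sx + patch]
        err = np.sum((cand[:, :overlap] - target) ** 2)
        if err < best_err:
            best, best_err = cand, err
    out[:, x:x + patch] = best             # paste the best-matching patch
    x += patch - overlap

print("synthesized strip of shape", out.shape)
```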
160

Localization And Recognition Of Text In Digital Media

Saracoglu, Ahmet 01 November 2007
Textual information within digital media can be used in many areas, such as indexing and structuring of media databases, aiding the visually impaired, translating foreign signs, and more. Text in digital media can be separated mainly into two categories: overlay text and scene text. In this thesis, the localization and recognition of video text in digital media, regardless of its category, is investigated. As a necessary first step, the framework of a complete system is discussed. Next, a comparative analysis of feature vector and classification method pairs is presented. Furthermore, the multi-part nature of text is exploited by proposing a novel Markov random field approach for the classification of text/non-text regions, and better localization of text is achieved by introducing a bounding-box extraction method. For the recognition of text regions, a handprint-based optical character recognition system is thoroughly investigated. During the investigation of text recognition, a multi-hypothesis approach to background segmentation is proposed that incorporates k-means clustering, and a novel dictionary-based ranking mechanism is proposed for spelling correction of the recognition output. The overall system is evaluated on a challenging data set. A thorough survey of scene-text localization and recognition is also presented; the main challenges are identified and discussed together with related work, and scene-text localization experiments on a public competition data set are provided. Lastly, in order to improve the recognition of scene text on signs affected by perspective distortion, a rectification method is proposed and evaluated.
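A minimal sketch of the k-means-based background/text separation step that precedes character recognition, applied here to a synthetic text region; in a multi-hypothesis approach several values of k would be tried and the best segmentation kept. The synthetic region and the choice k = 2 are assumptions for illustration.

```python
# Illustrative sketch: intensity clustering of a text region into text and background.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
region = np.full((20, 80), 200.0) + rng.normal(0, 5, (20, 80))   # bright background
region[8:14, 10:70] = 50 + rng.normal(0, 5, (6, 60))             # dark text strokes

k = 2  # one hypothesis; a multi-hypothesis scheme would also try k = 3, 4, ...
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(region.reshape(-1, 1))
labels = km.labels_.reshape(region.shape)
text_cluster = int(np.argmin(km.cluster_centers_.ravel()))       # darker cluster = text
binary = (labels == text_cluster)
print("text pixel fraction:", binary.mean())
```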
