141

Modelos gaussianos geoestatísticos espaço-temporais e aplicações / Space-time geostatistical Gaussian models and applications

Alexandre Sousa da Silva 08 February 2007 (has links)
The specification of space-time covariance functions is one possible strategy for modelling processes observed at different locations in space and points in time. Such functions can define separable or non-separable processes, and their specification must guarantee valid covariance functions satisfying the positive-definiteness condition. Among the strategies for obtaining such valid functions are those of Cressie and Huang (1999) and Gneiting (2002). The former is based on the idea of obtaining functions in a space of increased dimension from functions valid in the original space, and requires operations in the frequency domain. Alternatively, the latter combines completely monotone and strictly increasing functions, avoiding the inversion of spectral representations. There are still few reports of usage and comparative evaluations of the different proposals. This work adopts the methodology proposed by Gneiting, with different values of the parameter that indicates the strength of the space-time interaction. Different models were applied to two data sets, one on fish stocks off the Portuguese coast and another on water storage in a soil under citrus. The implementation in the RandomFields package of the R software was used, with the methodology and the computational implementation being reviewed. For both data sets the separable covariance model proved adequate to describe the available observations, with the choice of model determined by maximum likelihood fits.
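The Gneiting class mentioned in this abstract has well-known closed-form members. Below is an illustrative Python sketch (not the thesis code; parameter names are chosen for readability and do not follow the RandomFields package) of one commonly cited member, in which the parameter beta controls the space-time interaction: beta = 0 yields a separable model.

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, alpha=1.0, c=1.0, gamma=0.5, beta=0.0, d=2):
    """One commonly cited member of the Gneiting (2002) class of space-time
    covariance functions. h: spatial lag (Euclidean distance); u: temporal lag.
    beta in [0, 1] controls the space-time interaction: beta = 0 gives a
    separable model, beta = 1 full interaction."""
    psi = a * np.abs(u) ** (2 * alpha) + 1.0          # temporal component
    return (sigma2 / psi ** (d / 2.0)
            * np.exp(-c * np.abs(h) ** (2 * gamma) / psi ** (beta * gamma)))

# Separable (beta=0) vs non-separable (beta=1) covariance at the same lags:
print(gneiting_cov(h=1.0, u=2.0, beta=0.0), gneiting_cov(h=1.0, u=2.0, beta=1.0))
```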
142

Caracterização da estrutura de dependência do genoma humano usando campos markovianos: estudo de populações mundiais e dados de SNPs / Characterization of the human genome dependence structure using Markov random fields: populations worldwide study and SNP data

Francisco José de Almeida Fernandes 01 February 2016 (has links)
The identification of chromosomal regions, or dependency blocks within the human genome, that are transmitted together to offspring (haplotypes) has been a challenge and the target of several research initiatives, many of them using high-density molecular marker platforms such as SNPs (Single Nucleotide Polymorphisms). This work uses a stochastic model based on Markov random fields of variable range, applied to a stratified sample of different populations, to find mutually independent blocks of SNPs, thus structuring the genome into isolated regions of dependence. Public SNP data from different worldwide populations (HapMap project) were used, together with a sample of the Brazilian population. The dependence regions form windows of influence, which were used to characterize the different populations according to their ancestry; the results showed that the windows of the Brazilian population are, on average, larger, reflecting its recent history of admixture. An optimization of the likelihood function of the problem is also proposed to obtain the maximal consensus windows across all populations. Given a particular consensus window, a distance measure appropriate for categorical variables is adopted to measure its homogeneity/heterogeneity. Homogeneous windows were identified in the HLA (Human Leukocyte Antigen) region of the genome, which is associated with the immune response. The average size of these windows was larger than the average found in the rest of the chromosome, confirming the high dependence in this region, which is considered highly conserved in human evolution. Finally, considering the distribution of SNPs among the populations in the most heterogeneous windows, Correspondence Analysis was applied to build a classifier able to determine the relative ancestry proportions of an individual; under validation, it achieved 90% accuracy in identifying the population of origin.
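As an illustration of window homogeneity/heterogeneity for categorical SNP data, the sketch below scores a window by the average pairwise total-variation distance between per-population genotype distributions. The thesis does not name its distance measure, so this particular choice, and all data in the example, are assumptions.

```python
import numpy as np

def window_heterogeneity(geno_by_pop):
    """Illustrative only: score a SNP window by the average total-variation
    distance between per-population genotype distributions.
    geno_by_pop: dict mapping population name -> 2-D int array
                 (individuals x SNPs in the window), genotypes coded 0/1/2."""
    def freq(g):  # empirical distribution of genotype codes, pooled over the window
        counts = np.array([(g == k).mean() for k in (0, 1, 2)])
        return counts / counts.sum()

    pops = list(geno_by_pop)
    dists = []
    for i in range(len(pops)):
        for j in range(i + 1, len(pops)):
            p, q = freq(geno_by_pop[pops[i]]), freq(geno_by_pop[pops[j]])
            dists.append(0.5 * np.abs(p - q).sum())   # total variation distance
    return np.mean(dists)  # high value -> heterogeneous window

# Toy example with two hypothetical populations and a 5-SNP window:
rng = np.random.default_rng(0)
window = {"popA": rng.integers(0, 3, (50, 5)), "popB": rng.integers(0, 2, (50, 5))}
print(window_heterogeneity(window))
```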
143

Limite superior sobre a probabilidade de confinamento de passeio aleatório em meio aleatório / Upper bound on the confinement probability of a random walk in a random environment

Vásquez Mercedes, Claudia Edith, 1989- 05 February 2013 (has links)
Advisors: Christophe Frédéric Gallesco, Serguei Popov / Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The abstract is available in the full electronic document / Master's degree in Statistics
144

Data-driven natural language generation using statistical machine translation and discriminative learning / L'approche discriminante à la génération de la parole

Manishina, Elena 05 February 2016 (has links)
Humanity has long been fascinated by the idea of creating intelligent machines that can freely communicate with us in our language. Most modern systems that communicate directly with the user share one common feature: they have a dialog system (DS) at their core. Today almost all DS components have embraced statistical methods and use them widely as their core models. Until recently, the Natural Language Generation (NLG) component of a dialog system relied primarily on hand-coded templates, which mapped model phrases in a natural language to particular semantic content. Today, data-driven models are making their way into the NLG domain. In this thesis, we follow this new line of research and present several novel data-driven approaches to natural language generation. We focus on two important aspects of NLG system development: building an efficient generator and diversifying its output. We defend two key ideas: first, the task of NLG can be regarded as translation between a natural language and a formal meaning representation, and can therefore be performed using statistical machine translation techniques; second, corpus extension and diversification, which traditionally involved manual paraphrasing and rule crafting, can be performed automatically using well-known and widely used synonym and paraphrase extraction methods. Concerning the first idea, we investigate the possibility of using an n-gram translation framework and explore the potential of discriminative learning, notably Conditional Random Field (CRF) models, applied to NLG; we build a generation pipeline that allows for the inclusion and combination of different generation models (n-gram and CRF) and that uses an efficient decoding framework (finite-state transducers' best-path search). Regarding the second objective, corpus extension, we propose to enlarge the system's vocabulary and the set of available syntactic structures by integrating automatically obtained synonyms and paraphrases into the training corpus. To our knowledge, there have been no previous attempts to increase the size of an NLG system's vocabulary by incorporating synonyms. To date, most studies on corpus extension have focused on paraphrasing and resorted to crowd-sourcing to obtain paraphrases, which then required additional manual validation, often performed by system developers. We show that automatic corpus extension by means of paraphrase extraction and validation is just as effective as crowd-sourcing, while being less costly in terms of development time and resources. In intermediate experiments our generation models showed significantly better performance than the phrase-based baseline model and proved more robust in handling unknown combinations of concepts than the in-house rule-based generator. The final human evaluation confirmed that our data-driven NLG models are a viable alternative to rule-based generators.
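A deliberately tiny sketch of the "NLG as translation" framing described above: each semantic concept is "translated" into one of several candidate phrases and a language model scores the combinations. Every table entry and score below is invented for illustration; the thesis pipeline uses n-gram/CRF models with finite-state decoding, not this toy lookup.

```python
import itertools

phrase_table = {  # hypothetical concept -> candidate phrase realizations
    "inform(food=French)": ["serving French food", "which serves French cuisine"],
    "inform(area=centre)": ["in the centre of town", "in the city centre"],
}
bigram_logprob = {("food", "in"): -0.5, ("cuisine", "in"): -0.2}  # toy bigram LM

def score(tokens):
    # Sum bigram log-probabilities, with a flat penalty for unseen bigrams.
    return sum(bigram_logprob.get(b, -2.0) for b in zip(tokens, tokens[1:]))

def generate(concepts):
    # Pick the most fluent combination of candidate phrases under the LM.
    best = max(itertools.product(*(phrase_table[c] for c in concepts)),
               key=lambda cand: score(" ".join(cand).split()))
    return "a restaurant " + " ".join(best)

print(generate(["inform(food=French)", "inform(area=centre)"]))
```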
145

Étude des chambres réverbérantes à brassage de modes en ondes millimétriques : application à l’étude des interactions ondes-vivant / Study and design of reverberation chamber at millimeter waves : dosimetry application

Fall, Abdou Khadir 03 February 2015 (has links)
Nowadays, new electronic systems operating at ever higher frequencies are emerging massively, especially in the millimeter-wave range (30-300 GHz). As a consequence, new appropriate test facilities in the millimeter-wave range are needed. In particular, the study of the biocompatibility of these systems is clearly identified as a research priority in electromagnetism. In this context, this thesis deals with the design and evaluation of the properties of a mode-stirred reverberation chamber (RC) in the Ka band (26.5-40 GHz), U band (40-60 GHz) and V band (50-75 GHz). The intended application concerns the development of a dosimetric tool using an infrared camera in a reverberation chamber, together with preliminary tests on dielectric phantoms at 60 GHz. Firstly, we numerically analyze the statistical behavior of the electric field in the test volume of such an RC. A numerical model based on image theory is used to simulate the cavity. Using the Anderson-Darling goodness-of-fit test, we show that the chamber behaves well at millimeter-wave frequencies in terms of the statistical distribution of the field in the test volume, in agreement with Hill's model (a statistically homogeneous and isotropic field in the chamber volume). Secondly, a compact reverberation chamber is designed and built, with internal dimensions of 42.3 x 41.2 x 38.3 cm3. The statistical uniformity of the power density in the chamber volume is obtained by frequency stirring. The RC is fitted with a positioning system for fine, precise spatial sampling of the power along an axis inside the chamber. The millimeter-wave interfaces are also studied in order to reduce any significant leakage. Waveguides are used between the millimeter-wave source and the transmitting antenna, and between the receiving antenna and the spectrum analyzer, to minimize losses. We have also set up all the equipment necessary for carrying out measurements (source, spectrum analyzer, mixer). The RC is characterized in the 58.5-61.5 GHz range. The results are satisfactory in terms of the quality factor and the statistical distribution of the power in the test volume. Thirdly, an interface is designed and integrated into one of the chamber walls for temperature measurement by an infrared camera. Preliminary measurements are performed on a phantom consisting essentially of water. Experimental results for the phantom temperature rise are in good agreement with theoretical predictions. This confirms that the designed reverberation chamber allows the device under test to be exposed to a statistically uniform and power-calibrated illumination. Such a facility is a valuable asset for EMC testing of electronic equipment in the 26.5-75 GHz range. This RC could also permit preliminary tests in the context of millimeter-wave interactions with living organisms.
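The Anderson-Darling check described in the abstract can be illustrated as follows: in a well-stirred chamber, Hill's model implies exponentially distributed received power in the test volume, a hypothesis scipy can test directly. This is a hedged stand-in with synthetic data, not the thesis's measurement pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
power_samples = rng.exponential(scale=1.0, size=500)  # stand-in for measured power

# Anderson-Darling goodness-of-fit test against the exponential distribution.
result = stats.anderson(power_samples, dist="expon")
print(result.statistic)        # AD statistic
print(result.critical_values)  # critical values at standard significance levels
# If the statistic exceeds the critical value at the chosen level, the
# exponential (well-stirred) hypothesis is rejected at that level.
```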
146

Modélisation stochastique, en mécanique des milieux continus, de l'interphase inclusion-matrice à partir de simulations en dynamique moléculaire / Stochastic modeling, in continuum mechanics, of the inclusion-matrix interphase from molecular dynamics simulations

Le, Tien-Thinh 21 October 2015 (has links)
This work is concerned with the stochastic modeling and identification of the elastic properties in the so-called interphase region surrounding the inclusions in nano-reinforced composites. For the sake of illustration, a prototypical nanocomposite made up of a model polymer matrix filled with a silica nanoinclusion is considered. Molecular Dynamics (MD) simulations are first performed in order to gain physical insight into the local conformation of the polymer chains in the vicinity of the inclusion surface. In addition, a virtual mechanical testing procedure is proposed to estimate realizations of the apparent stiffness tensor associated with the MD simulation box. An information-theoretic probabilistic representation is then proposed as a surrogate model mimicking the spatial fluctuations of the elasticity field within the interphase. The hyperparameters defining this model are subsequently calibrated by solving, in a sequential manner, two inverse problems involving a computational homogenization scheme. The first problem, related to the mean model, is formulated in a deterministic framework, whereas the second one involves a statistical metric allowing the dispersion parameter and the spatial correlation lengths to be estimated. It is shown in particular that the spatial correlation length in the radial direction is roughly equal to the interphase thickness, hence showing that the scales under consideration are not well separated. The calibration results are finally refined by taking into account, by means of a random matrix model, the finite-sampling noise of the MD simulations.
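The "non-separated scales" remark can be made concrete with a minimal sketch: with an exponential correlation kernel whose radial correlation length is of the order of the interphase thickness, the elasticity field barely decorrelates across the interphase. The kernel choice and all numbers below are assumptions for illustration only, not the thesis calibration.

```python
import numpy as np

t = 2.0e-9   # hypothetical interphase thickness (m)
L_r = 2.0e-9 # hypothetical radial correlation length, same order as t

r = np.linspace(0.0, t, 5)                            # radial positions across the interphase
rho = np.exp(-np.abs(r[:, None] - r[None, :]) / L_r)  # exponential correlation matrix
print(np.round(rho, 2))  # off-diagonal values stay large: no scale separation
```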
148

An MRF-Based Approach to Image and Video Resolution Enhancement

Vedadi, Farhang 10 1900 (has links)
The main part of this thesis is concerned with a detailed explanation of a newly proposed Markov random field-based de-interlacing algorithm. Previous works assume a first- or higher-order Markovian spatial inter-dependency between the pixel intensity values. In accordance with the specific interpolation problem at hand, they try to approximate the Markov random field parameters using the available original pixels. Using the approximate model, they then define an objective function, such as the energy function of the MRF, to be optimized. The efficiency and accuracy of the optimization step is as important as the definition of the cost (objective function) and of the MRF model itself. The major concept that distinguishes the newly proposed algorithm from the aforementioned MRF-based models is that the MRF is defined not over the intensity domain but over the interpolator (interpolation method) domain. Unlike previous MRF-based models, which try to estimate a two-dimensional array of pixel values, this new method estimates an MRF of interpolation functions (interpolators) associated with the 2-D array of pixel intensity values. With some modifications, one can utilize the proposed model in different related fields such as image and video up-conversion, view interpolation and frame-rate up-conversion. To demonstrate this potential of the proposed MRF-based model, we extend it to an image up-scaling algorithm. This algorithm uses a simplified version of the proposed MRF-based model for the purpose of image up-scaling by a factor of two in each spatial direction. Simulation results show that the proposed model obtains competitive performance when applied to the two interpolation problems of video de-interlacing and image up-scaling. / Master of Applied Science (MASc)
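A toy reconstruction of the "MRF over interpolators" idea from the abstract (my sketch, not the thesis algorithm): each missing pixel of an interlaced line selects one of three candidate interpolators, and a simple ICM pass smooths the label field so neighboring pixels prefer the same interpolator.

```python
import numpy as np

def deinterlace_row(above, below, n_iters=3, lam=0.5):
    """above, below: the known lines surrounding a missing line (1-D arrays)."""
    W = len(above)
    cands = np.full((3, W), np.nan)
    cands[1] = 0.5 * (above + below)               # vertical average
    cands[0, 1:] = 0.5 * (above[:-1] + below[1:])  # "/" diagonal average
    cands[2, :-1] = 0.5 * (above[1:] + below[:-1]) # "\" diagonal average
    # Data cost: intensity difference along each interpolation direction.
    cost = np.full((3, W), np.inf)
    cost[1] = np.abs(above - below)
    cost[0, 1:] = np.abs(above[:-1] - below[1:])
    cost[2, :-1] = np.abs(above[1:] - below[:-1])
    labels = cost.argmin(axis=0)                   # greedy initialization
    for _ in range(n_iters):                       # ICM over the interpolator labels
        for x in range(W):
            penalty = np.zeros(3)
            for nb in (x - 1, x + 1):
                if 0 <= nb < W:                    # smoothness: disagree -> pay lam
                    penalty += lam * (np.arange(3) != labels[nb])
            labels[x] = (cost[:, x] + penalty).argmin()
    return cands[labels, np.arange(W)]

above = np.array([10., 20., 30., 40.]); below = np.array([12., 22., 32., 42.])
print(deinterlace_row(above, below))
```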
149

DEUM : a framework for an estimation of distribution algorithm based on Markov random fields

Shakya, Siddhartha January 2006 (has links)
Estimation of Distribution Algorithms (EDAs) belong to the class of population-based optimisation algorithms. They are motivated by the idea of discovering and exploiting the interactions between variables in the solution. They estimate a probability distribution from a population of solutions, and sample it to generate the next population. Many EDAs use probabilistic graphical modelling techniques for this purpose. In particular, directed graphical models (Bayesian networks) have been widely used in EDAs. This thesis proposes an undirected graphical model (Markov Random Field (MRF)) approach to estimating and sampling the distribution in EDAs. The interaction between variables in the solution is modelled as an undirected graph and the joint probability of a solution is factorised as a Gibbs distribution. The thesis describes a model of the fitness function that approximates the energy in the Gibbs distribution, and shows how this model can be fitted to a population of solutions to estimate the parameters of the MRF. The estimated MRF is then sampled to generate the next population. This approach is applied to distribution estimation in a general EDA framework called Distribution Estimation Using Markov Random Fields (DEUM). The thesis then proposes several variants of DEUM using different sampling techniques and tests their performance on a range of optimisation problems. The results show that, for most of the tested problems, the DEUM algorithms significantly outperform other EDAs, both in terms of the number of fitness evaluations and the quality of the solutions found. There are two main explanations for the success of DEUM algorithms. Firstly, DEUM builds a model of the fitness function to approximate the MRF. This contrasts with other EDAs, which build a model of selected solutions, and allows DEUM to use fitness in the variation part of the evolution. Secondly, DEUM exploits the temperature coefficient in the Gibbs distribution to regulate the behaviour of the algorithm: with a higher temperature the distribution is closer to uniform, while with a lower temperature it concentrates near some global optima. This gives DEUM explicit control over the convergence of the algorithm, resulting in better optimisation.
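A rough univariate sketch of the DEUM scheme described above, on the OneMax problem: fit a linear model of log-fitness over +/-1-encoded bits by least squares, then sample each bit with a temperature-controlled probability. The constants and the absence of a selection step are simplifications of mine, not necessarily the thesis's choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, pop_size, beta = 20, 60, 2.0     # problem size, population size, inverse temperature

def fitness(x):                     # OneMax: count of ones (+1 avoids log(0))
    return x.sum(axis=1) + 1.0

pop = rng.integers(0, 2, (pop_size, n))
for gen in range(30):
    s = 2.0 * pop - 1.0                                  # bits encoded as -1/+1
    X = np.hstack([np.ones((pop_size, 1)), s])           # design matrix [1, s_1..s_n]
    # Fit ln f(x) ~ alpha_0 + sum_i alpha_i * s_i by least squares.
    alpha, *_ = np.linalg.lstsq(X, np.log(fitness(pop)), rcond=None)
    # Higher alpha_i -> bit i should more likely be 1; beta sharpens the choice.
    p_one = 1.0 / (1.0 + np.exp(-2.0 * beta * alpha[1:]))
    pop = (rng.random((pop_size, n)) < p_one).astype(int) # sample next population

print(fitness(pop).max() - 1.0)     # best number of ones found (ideally n)
```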
150

Méthodes Bayésiennes pour le démélange d'images hyperspectrales / Bayesian methods for hyperspectral image unmixing

Eches, Olivier 14 October 2010 (has links)
Hyperspectral imagery is widely used in remote sensing for various civilian and military applications. A hyperspectral image is acquired when the same scene is observed at several wavelengths. Consequently, each pixel of such an image is represented as a vector of measurements (generally reflectances) called a spectrum. One major step in the analysis of hyperspectral data consists of identifying the macroscopic components (signatures) present in the observed scene and their corresponding proportions (abundances). The latest techniques developed for this analysis do not properly model these images. Indeed, these techniques usually assume the existence of pure pixels in the image, i.e. pixels containing a single pure material. However, a pixel is rarely composed of spectrally pure elements distinct from each other, so estimates based on these models can be far from reality. The aim of this thesis is to propose new estimation algorithms based on a model better suited to the intrinsic properties of hyperspectral images. The unknown model parameters are then inferred within a Bayesian framework. The use of Markov Chain Monte Carlo (MCMC) methods makes it possible to overcome the difficulties related to the computational complexity of these inference methods.
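The unmixing problem described above rests on the linear mixing model. The sketch below generates a synthetic pixel as a convex combination of random stand-in endmember spectra and recovers the abundances with a crude constrained least-squares step; the thesis instead samples the abundance posterior by MCMC under the same non-negativity and sum-to-one constraints.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_endmembers = 100, 3

M = rng.uniform(0.0, 1.0, (n_bands, n_endmembers))   # stand-in endmember signatures
a_true = np.array([0.6, 0.3, 0.1])                   # abundances: a >= 0, sum(a) = 1
pixel = M @ a_true + rng.normal(0.0, 0.01, n_bands)  # observed pixel spectrum

# Least-squares abundance estimate followed by a crude projection onto the
# simplex constraints (a full Bayesian/MCMC treatment would instead sample
# the abundance posterior under these constraints).
a_hat, *_ = np.linalg.lstsq(M, pixel, rcond=None)
a_hat = np.clip(a_hat, 0.0, None)
a_hat /= a_hat.sum()
print(np.round(a_hat, 3), "vs true", a_true)
```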
