  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Uma introdução aos grandes desvios / An introduction to large deviations

Müller, Gustavo Henrique January 2016 (has links)
In this master's thesis, we present a proof of the large deviation principle for independent and identically distributed random variables with all moments finite, and for the empirical measure of discrete-time Markov chains with finite state space. We also address the theorems of Sanov and of Gärtner-Ellis.
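The exponential decay that the large deviation principle predicts can be checked numerically. The sketch below (function names are ours, purely illustrative) compares, for i.i.d. Bernoulli(p) variables, the Cramér rate function I(a) with a direct simulation estimate of -(1/n) log P(S_n/n >= a):

```python
import math
import random

def cramer_rate(a, p):
    """Cramér rate function I(a) for Bernoulli(p), 0 < a < 1:
    the Legendre transform of the log moment generating function."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def empirical_decay(n, a, p, trials=20_000, seed=1):
    """Estimate -(1/n) log P(S_n / n >= a) by direct simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))
        hits += s >= a * n
    if hits == 0:
        return float("inf")
    return -math.log(hits / trials) / n

# For large n the empirical decay approaches cramer_rate(a, p),
# up to sub-exponential prefactors.
```

For example, with p = 0.5 and a = 0.7 one has I(0.7) ≈ 0.082, and the simulated rate at moderate n is of the same order, the gap being due to the polynomial prefactor in the tail probability.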
92

Operador de Ruelle para cadeias de Markov a tempo contínuo / Ruelle operator for continuous-time Markov chains

Busato, Luisa Bürgel January 2018 (has links)
This work is divided into three parts. In the first, we give a brief description of Markov chains in both discrete and continuous time. In the second, following the article [5], we introduce the thermodynamic formalism on the Bernoulli space with symbols in a compact metric space, generalizing the usual theory, where the state space is finite. Then, following the article [1], we introduce a version of the Ruelle operator for continuous-time Markov chains. Finally, starting from a function V that acts as a perturbation, we define a modified Ruelle operator and, for this operator, show the existence of an eigenfunction and an eigenmeasure.
93

Epidemic Dynamics of Metapopulation Models

January 2014 (has links)
abstract: Mathematical modeling of infectious diseases can help public health officials make decisions related to the mitigation of epidemic outbreaks. However, over- or underestimating the morbidity of any infectious disease can be problematic, so public health officials can always make use of better models to study the potential implications of their decisions and strategies prior to implementation. Previous work focused on the mechanisms underlying the different epidemic waves observed in Mexico during the novel swine-origin influenza H1N1 pandemic of 2009, and extended classical epidemiological models by adding temporal variation in parameters that are likely to change during the course of an epidemic, such as the influence of media, social distancing, school closures, and the effects of vaccination policies on the dynamics of an epidemic. The current work further examines the influence of these factors while accounting for the randomness of events, by adding stochastic processes to metapopulation models. I present three approaches for comparing stochastic methods in discrete and continuous time. For the continuous-time stochastic modeling approach, I consider the continuous-time Markov chain process using the forward Kolmogorov equations; for the discrete-time stochastic modeling, I consider stochastic differential equations using Wiener increments and Poisson point increments, and also the discrete-time Markov chain process. The first two stochastic modeling approaches are presented for one-city and two-city epidemic models built on our deterministic model; the last is discussed briefly for one-city SIS- and SIR-type models. / Dissertation/Thesis / Ph.D. Applied Mathematics for the Life and Social Sciences 2014
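A continuous-time Markov chain epidemic of the kind this abstract describes can be simulated exactly with Gillespie's direct method. The following SIS sketch is a generic illustration, not the dissertation's actual metapopulation model:

```python
import random

def gillespie_sis(beta, gamma, n_pop, i0, t_max, seed=0):
    """Exact stochastic simulation of an SIS epidemic as a continuous-time
    Markov chain (Gillespie's direct method). Returns the jump path as a
    list of (time, number of infected) pairs."""
    rng = random.Random(seed)
    t, i = 0.0, i0
    path = [(t, i)]
    while t < t_max and i > 0:
        rate_inf = beta * i * (n_pop - i) / n_pop  # S -> I events
        rate_rec = gamma * i                       # I -> S events
        total = rate_inf + rate_rec
        t += rng.expovariate(total)                # exponential holding time
        if t >= t_max:
            break
        i += 1 if rng.random() < rate_inf / total else -1
        path.append((t, i))
    return path
```

Averaging many such paths recovers the behavior of the forward Kolmogorov equations; a two-city version would simply carry one (S, I) pair per city plus migration events.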
94

Inférence bayésienne dans les modèles de croissance de plantes pour la prévision et la caractérisation des incertitudes / Bayesian inference in plant growth models for prediction and uncertainty assessment

Chen, Yuting 27 June 2014 (has links)
Plant growth models aim to describe plant development and functional processes in interaction with the environment. They offer promising perspectives for many applications, such as yield prediction for decision support or virtual experimentation in the context of breeding. This PhD focuses on solutions for enhancing the predictive capacity of plant growth models, with an emphasis on advanced statistical methods. Our contributions can be summarized in four parts. First, from a model-design perspective, the Log-Normal Allocation and Senescence (LNAS) crop model is proposed. It describes only the ecophysiological processes essential to the biomass budget, in a probabilistic framework, so as to avoid identification problems and to emphasize uncertainty assessment in model prediction. Second, thorough research is conducted on model parameterization. In a Bayesian framework, both Sequential Monte Carlo (SMC) methods and Markov chain Monte Carlo (MCMC) methods are investigated to address the parameterization issues that arise in plant growth models, which are frequently characterized by nonlinear dynamics, scarce data and a large number of parameters. In particular, when the prior distribution is non-informative, an iterative version of the SMC and MCMC methods is introduced, with the objective of putting more emphasis on the observation data while preserving the robustness of the Bayesian methods; it can be regarded as a stochastic variant of an EM-type algorithm. Third, a three-step data assimilation approach is proposed to address model prediction issues. The most influential parameters are first identified by global sensitivity analysis and chosen by model selection. The model calibration is then performed, with special attention paid to uncertainty assessment. The posterior distribution obtained from this estimation step is taken as prior information for the prediction step, in which an SMC-based online estimation method such as Convolution Particle Filtering (CPF) is employed to perform data assimilation: both state and parameter estimates are updated with the purpose of improving prediction accuracy and reducing the associated uncertainty. Finally, from an application point of view, the proposed methodology is implemented and evaluated with two crop models, the LNAS model for sugar beet and the STICS model for winter wheat. Some indications are also given on experimental design to optimize the quality of predictions. The applications to real case scenarios show encouraging predictive performances and open the way to potential decision-support tools for yield prediction in agriculture.
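As a generic illustration of the MCMC side of such Bayesian calibration (not the thesis's actual sampler, which targets high-dimensional crop-model posteriors), a one-dimensional random-walk Metropolis sampler can be sketched as:

```python
import math
import random

def metropolis(log_post, theta0, n_iter=5_000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional posterior.
    log_post: function returning the log posterior density up to a constant."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)        # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept w.p. min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy target: a N(2, 1) "posterior"; the chain mean should settle near 2.
chain = metropolis(lambda t: -0.5 * (t - 2.0) ** 2, 0.0)
```

In a plant-growth setting, `log_post` would wrap a run of the dynamic model plus the data likelihood, which is exactly why the scarce-data, many-parameter regime discussed above makes sampler design delicate.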
95

Filtragem via métodos de Monte Carlo para processos lineares com saltos Markovianos / Filtering via Monte Carlo methods for linear processes with Markovian jumps

Moises, Gustavo Vinicius Lourenço 12 February 2005 (has links)
Advisor: João Bosco Ribeiro do Val / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: This dissertation's theme is the filtering problem via Markov Chain Monte Carlo methods. Combining the study and analysis of stochastic sampling algorithms with numerical implementations, we developed a methodology to evaluate and compare the various filtering techniques found in the literature. Applications associating recursive filtering with receding horizon control were also used to verify the performance and stability of the filter/control combination. / Master's in Electrical Engineering (Automation)
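The recursive filters compared in work of this kind are commonly of the sequential Monte Carlo family. A minimal bootstrap particle filter for a scalar linear-Gaussian model, an illustrative stand-in rather than the dissertation's jump-linear filter, can be sketched as:

```python
import math
import random

def bootstrap_filter(ys, n_particles=500, a=0.9, q=1.0, r=1.0, seed=0):
    """Bootstrap (sequential Monte Carlo) filter for the toy scalar model
        x_t = a * x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns the filtered mean of x_t at each observation."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # propagate particles through the state equation
        xs = [a * x + rng.gauss(0.0, math.sqrt(q)) for x in xs]
        # weight by the observation likelihood
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) for x in xs]
        total = sum(ws)
        ws = [w / total for w in ws]
        means.append(sum(w * x for w, x in zip(ws, xs)))
        # multinomial resampling to avoid weight degeneracy
        xs = rng.choices(xs, weights=ws, k=n_particles)
    return means
```

A Markov jump-linear version would additionally carry, in each particle, the hidden mode of the jump chain and draw its transitions at every step.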
96

Aplicações da álgebra linear nas cadeias de Markov / Applications of linear algebra in Markov chains

Silva, Carlos Eduardo Vitória da 11 April 2013 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Linear algebra, and matrices and linear systems in particular, are mathematical topics that can be applied not only within mathematics itself but also in many other areas of human knowledge, such as physics, chemistry, biology, all branches of engineering, psychology, economics, transportation, administration, statistics and probability. Markov chains are used to solve certain problems in probability theory, and their application depends directly on the theory of matrices and linear systems. In this work, we use Markov chain techniques to solve three probability problems in three distinct areas: one in genetics, one in psychology, and one concerning mass transit in a transportation system. The entire work is developed with the intention that a high school student can read and understand the solutions of the three problems presented.
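The matrix machinery this abstract refers to reduces, in the simplest case, to repeatedly multiplying a row vector of state probabilities by the transition matrix. A hypothetical two-state example (the states and numbers are ours, not from the thesis):

```python
def step(dist, p_matrix):
    """One step of a Markov chain: multiply the probability row vector
    `dist` by the transition matrix (each row sums to 1)."""
    n = len(p_matrix)
    return [sum(dist[i] * p_matrix[i][j] for i in range(n)) for j in range(n)]

# Hypothetical 2-state weather chain: state 0 = sunny, state 1 = rainy.
P = [[0.9, 0.1],
     [0.5, 0.5]]
dist = [1.0, 0.0]      # start in the sunny state
for _ in range(50):    # repeated steps converge to the stationary distribution
    dist = step(dist, P)
# The stationary distribution solves pi = pi P; here pi = (5/6, 1/6).
```

Solving pi = pi P by hand is exactly the kind of linear-system exercise the thesis aims at: 0.1·pi0 = 0.5·pi1 together with pi0 + pi1 = 1 gives pi = (5/6, 1/6).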
97

Modelagem de Tráfego em Redes PLC (Powerline Communications) Utilizando Cadeias de Markov / Traffic Modeling in PLC Network (Powerline Communications) Using Markov Chains

SANTOS, Christiane Borges 24 November 2009 (has links)
This work is motivated by growing interest in the applicability of power lines as an alternative propagation medium for communication signals, and presents an analysis of VoIP (Voice over IP) traffic and data transfer over BPL/PLC (Broadband Power Line / Power Line Communication) networks. We describe the main characteristics of the BPL/PLC technology and of the HomePlug standard. Since the physical transmission medium used by BPL/PLC is hostile, as it was not developed for this purpose, traffic modeling can be useful for planning and dimensioning these networks. A model based on MMFM (Markov Modulated Fluid Models) is proposed to characterize data and VoIP traffic in PLC networks. Simulations and comparisons were made against other models such as Poisson and MMPP (Markov Modulated Poisson Process). The results were obtained through experiments on low-voltage PLC networks (indoor environment), using a bandwidth from 4.3 MHz to 20.9 MHz.
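For intuition on Markov-modulated arrival models like the MMPP mentioned above, here is a generic two-state MMPP simulator; the rates are illustrative and not taken from the thesis's measurements:

```python
import random

def simulate_mmpp(rates, switch, t_max, seed=0):
    """Two-state Markov-Modulated Poisson Process: arrivals occur at rate
    rates[s], where the hidden state s follows a two-state continuous-time
    Markov chain with switching rate switch[s] out of state s.
    Returns the list of arrival times in [0, t_max)."""
    rng = random.Random(seed)
    t, s, arrivals = 0.0, 0, []
    while t < t_max:
        # competing exponential clocks: next arrival vs. next state switch;
        # memorylessness lets us restart both clocks after every event
        t_arr = rng.expovariate(rates[s]) if rates[s] > 0 else float("inf")
        t_switch = rng.expovariate(switch[s])
        if t_arr < t_switch:
            t += t_arr
            if t < t_max:
                arrivals.append(t)
        else:
            t += t_switch
            s = 1 - s
    return arrivals
```

The resulting point process is burstier than a plain Poisson process with the same mean rate, which is why modulated models fit VoIP and data traffic better; a fluid (MMFM) model replaces the discrete arrivals with a modulated flow rate.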
98

Complexidade e tomada de decisão / Complexity of decision-making in human agents

Eduardo Sangiorgio Dobay 11 November 2014 (has links)
In this work we developed a simple probabilistic modeling framework to describe the decision-making process of human agents presented with the task of predicting elements of a random sequence generated by a Markov chain with memory L. The framework starts from a Bayesian approach in which the agent infers a probability distribution from a series of observations of the sequence and of its own answers, assuming the agent's memory has length K. As a result of the Bayesian approach, the agent adopts an optimal strategy that consists of perseverating on the most likely alternative given the history of the last few trials. Because of this, and of experimental evidence that humans tend to adopt suboptimal strategies in such problems, for example probability matching, variations on the model were developed in an attempt to describe more closely the behavior adopted by humans. In that sense, the 'shift' variable (a possible action taken by the agent on its response) and the 'reward' variable (a possible result of the action) were adopted in the formulation of the model, and parameters inspired by models of dopaminergic action were added to allow deviations from the optimal strategy resulting from the Bayesian approach. The models built in this framework were simulated computationally for many values of the parameters, including the memory lengths K and L of the agent and of the Markov chain, respectively. Through correlation analysis, these results were compared to experimental data, from a research group at the Biomedical Science Institute at USP, on decision-making tasks involving people of various ages (3 to 73 years old) and Markov chains of orders 0, 1 and 2. In this comparison, it was concluded that the differences between age groups in the experiment can be explained in our modeling through variation of the agent's memory length K (children up to 5 years old exhibited a limit of K = 1, and those up to 12 years old a limit of K = 2) and through variation of a learning-reinforcement parameter depending on the group and the decision situation to which the subjects were exposed; the fitted value of that parameter ranged from 10% below to 30% above its original value under the Bayesian approach.
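The optimal perseveration strategy of a memory-K agent can be sketched as a simple count-based predictor. This is a generic illustration (the thesis's models add dopaminergic-inspired parameters on top of this baseline); it shows why an agent whose memory K matches the chain's memory L outperforms a memoryless one:

```python
import random
from collections import defaultdict

def context_predictor_accuracy(seq, k):
    """Memory-k agent over a binary sequence: predicts the symbol most often
    observed after the current length-k context (the perseveration strategy
    arising from the Bayesian analysis), with Laplace-smoothed counts.
    Returns the fraction of correct predictions."""
    counts = defaultdict(lambda: [1, 1])
    correct = 0
    for t in range(k, len(seq)):
        ctx = tuple(seq[t - k:t])
        c = counts[ctx]
        pred = 0 if c[0] >= c[1] else 1
        correct += int(pred == seq[t])
        c[seq[t]] += 1             # update counts after observing the outcome
    return correct / (len(seq) - k)

# Memory-1 Markov sequence: repeats the previous symbol with probability 0.8.
rng = random.Random(0)
seq, x = [], 0
for _ in range(5_000):
    x = x if rng.random() < 0.8 else 1 - x
    seq.append(x)

acc0 = context_predictor_accuracy(seq, 0)  # memoryless agent, near chance
acc1 = context_predictor_accuracy(seq, 1)  # memory matches the chain's order
```

A probability-matching agent, by contrast, would sample its prediction from the estimated conditional distribution instead of always taking the argmax, trading accuracy for the suboptimal behavior observed in humans.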
99

Estudo do desenvolvimento de estratégias decisionais em escolhas binárias repetidas. / Decisional strategies in binary choice tasks from childhood to senescence.

Camila Gomes Victorino 04 September 2012 (has links)
Studies have shown that individuals do not always maximize their gains. When confronted with a binary sequence whose reward appears more often on one side, adult volunteers, instead of persevering on the more frequent side, choose each side about as often as that side is rewarded. It has been reported, however, that children persevere on the more frequent side, that is, they maximize. Some studies suggest that adults fail to maximize because they search for patterns in the sequence. We therefore carried out four experiments with sequences with and without patterns (Markov chains), so that the decision strategies of different age groups could be observed and compared for patterned and unpatterned sequences. The results show a tendency toward perseveration with aging, not the contrary as previously reported, at the expense of the ability to assimilate patterns. They also show that pattern assimilation develops gradually during growth and declines with aging, invalidating the idea that non-maximization is merely the result of a search for patterns.
100

Cadeias de Markov: uma aula para alunos do ensino médio / Markov chains: a lesson for high school students

Rodrigues, Welton Carlos 09 August 2013 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This master's thesis' main objective is to present the basic concepts of Markov chains, a theory little explored in basic education and very useful for making decisions about the future. Since Markov processes rely on two important topics of school mathematics, probabilities and matrices, their study also helps students deepen their understanding of both.
