31

A continuous-time Markov chain approach for trinomial-outcome longitudinal data : an extension for multiple covariates.

Mhoon, Kendra Brown. Moyé, Lemuel A., Mullen, Patricia D., Vernon, Sally W., January 2008 (has links)
Thesis (Ph. D.)--University of Texas Health Science Center at Houston, School of Public Health, 2008. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 1086. Adviser: Wenyaw Chan. Includes bibliographical references.
32

Topics in Random Walks

Montgomery, Aaron 03 October 2013 (has links)
We study a family of random walks defined on certain Euclidean lattices that are related to incidence matrices of balanced incomplete block designs. We estimate the return probability of these random walks and use it to determine the asymptotics of the number of balanced incomplete block design matrices. We also consider the problem of collisions of independent simple random walks on graphs. We prove some new results in the collision problem, improve some existing ones, and provide counterexamples to illustrate the complexity of the problem.
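As a loose illustration of the return-probability question mentioned above (this sketch is not taken from the dissertation; the dimension, step count, and trial count are arbitrary choices), one can estimate by Monte Carlo how often a simple random walk on Z^d revisits the origin:

```python
import random

def fraction_returned(dim=2, steps=1000, trials=1000, seed=0):
    """Monte Carlo estimate of the chance that a simple random walk on Z^dim
    revisits the origin within `steps` steps."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)
            pos[axis] += rng.choice((-1, 1))
            if all(c == 0 for c in pos):
                returned += 1
                break
    return returned / trials

# Polya's theorem: the walk is recurrent in dimensions 1 and 2, transient in 3 and higher,
# which shows up as a clearly smaller return fraction for dim=3.
for d in (1, 2, 3):
    print(d, fraction_returned(dim=d))
```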
33

Aplicações das Cadeias de Markov para o Ensino Médio / Applications of Markov Chain for High School

Delatorre, Hugo Tadeu [UNESP] 22 January 2016 (has links)
Made available in DSpace on 2016-02-23. Previous issue date: 2016-01-22. / Markov chains are a powerful tool for predicting future events based only on a relatively recent past. They are widely used in many different situations, such as the stock exchange, customer loyalty, and weather forecasting. The main objective of this work is to present some interesting applications at the high-school level, as well as to encourage students to research, collect, and process data, showing them how much mathematics is present in their daily lives.
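As a classroom-level illustration of the kind of application this abstract describes (not an example taken from the thesis; the weather states and transition probabilities below are invented), a two-state Markov chain can be iterated to show how the distribution of states settles down regardless of the starting day:

```python
# Hypothetical two-state weather chain (states: sunny, rainy); probabilities are made up.
P = [[0.8, 0.2],   # from sunny: P(sunny), P(rainy)
     [0.4, 0.6]]   # from rainy: P(sunny), P(rainy)

def step(dist, P):
    """One step of the chain: row vector of probabilities times the transition matrix."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [1.0, 0.0]          # start from a sunny day
for n in range(1, 11):
    dist = step(dist, P)
    print(n, [round(p, 4) for p in dist])
# The distribution approaches the stationary vector (2/3, 1/3) whatever the starting state,
# which is the "prediction from a recent past" idea in the abstract.
```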
34

Modelo de precificação de ativos por cadeias de Markov / Asset Pricing Model by Markov Chains

Hashioka, Jean Akio Shida 15 June 2018 (has links)
Made available in DSpace on 2018-07-17. Previous issue date: 2018-06-15. / This work presents Markov chains as a supporting tool for teaching mathematics in high school, making the process more tangible to the students' reality. The contents of matrices, linear systems, and probability can be contextualized with practical everyday examples that take into account the social environment in which the students live, thereby recovering their desire to learn mathematics and to see its applications. In this way, greater receptivity to the subject is expected from the students and, potentially, a better response to the intended learning. The study first addresses the convergence of the probability distribution of a two-state Markov chain by means of limits at infinity of a probability function. From this first two-state Markov chain, a lesson plan is developed as a classroom example concerning the probabilities of a soccer team winning its next matches. Next, the applicability of Markov chains is shown by calculating, for each attempt to find the exit, the probability distribution of a player being lost in the different rooms of a maze. To highlight another application of Markov chains, an asset pricing model is built with the goal of forecasting the prices of some shares of companies traded on BM&FBOVESPA, the Brazilian stock exchange. This asset pricing model proved statistically adequate as a tool for analyzing and computing the expected average returns of some of the assets studied. Through the content presented in this study, it is hoped to contribute resources and concepts for teaching classes on Markov chains in high school. These aspects direct the research toward a relevant process of developing reasoning, critical thinking, and decision making in progressively more complex situations experienced by the students.
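A rough sketch of the asset-pricing idea described above (not the thesis's model; the price series and the return band are fabricated for illustration): daily returns are discretized into three states and a Markov transition matrix is estimated by counting transitions.

```python
# Fabricated price series; in a real study these would be observed closing prices.
prices = [10.0, 10.2, 10.1, 10.1, 10.4, 10.3, 10.5, 10.5, 10.2, 10.6, 10.7]

def state(r, band=0.005):
    """Classify a daily return as 'down', 'flat', or 'up' using an arbitrary band."""
    return "down" if r < -band else "up" if r > band else "flat"

returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
states = [state(r) for r in returns]

labels = ["down", "flat", "up"]
counts = {s: {t: 0 for t in labels} for s in labels}
for s, t in zip(states, states[1:]):
    counts[s][t] += 1

# Row-normalize the counts to get the estimated transition probabilities.
for s in labels:
    total = sum(counts[s].values())
    row = [counts[s][t] / total if total else 0.0 for t in labels]
    print(s, [round(p, 2) for p in row])
```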
35

An automated approach for systems performance and dependability improvement through sensitivity analysis of Markov chains

de Souza Matos Júnior, Rubens 31 January 2011 (has links)
Made available in DSpace on 2014-06-12. Previous issue date: 2011. Funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. / Computer systems evolve constantly to satisfy growing demand or new user requirements. Administering these systems requires decisions capable of providing the highest possible levels of performance and dependability metrics with minimal changes to the existing configuration. Performance, reliability, availability, and performability analyses of systems are commonly carried out through analytical models, and Markov chains are one of the most widely used mathematical formalisms, allowing metrics of interest to be estimated from a set of input parameters. Sensitivity analysis, however, when performed at all, is usually done simply by varying the parameters over their ranges of values and repeatedly solving the chosen model. Differential sensitivity analysis allows the modeler to find bottlenecks in a more systematic and efficient way. This work presents an automated approach to sensitivity analysis aimed at guiding the improvement of computer systems. The proposed approach can speed up decision making regarding the optimization of hardware and software tuning, as well as the acquisition and replacement of components. The methodology uses Markov chains as the formal modeling technique, together with sensitivity analysis of these models, filling some gaps found in the literature on sensitivity analysis. Finally, the sensitivity analysis of selected distributed systems carried out in this work highlights bottlenecks in those systems and provides examples of the accuracy of the proposed methodology, as well as illustrating its applicability.
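A minimal sketch of the kind of sensitivity computation the abstract describes, using a hypothetical two-state availability model rather than anything from the dissertation (the failure and repair rates are invented, and a numerical finite difference stands in for the closed-form derivatives a real tool would use):

```python
def availability(lam, mu):
    """Steady-state probability of the 'up' state in a two-state up/down CTMC."""
    return mu / (lam + mu)

def scaled_sensitivity(f, params, name, h=1e-6):
    """S_theta = (theta / f) * df/dtheta, estimated by a central finite difference."""
    theta = params[name]
    hi = dict(params, **{name: theta * (1 + h)})
    lo = dict(params, **{name: theta * (1 - h)})
    deriv = (f(**hi) - f(**lo)) / (2 * theta * h)
    return theta * deriv / f(**params)

params = {"lam": 1 / 1000.0, "mu": 1 / 8.0}   # hypothetical failure and repair rates per hour
print("availability =", availability(**params))
for name in params:
    print(name, round(scaled_sensitivity(availability, params, name), 6))
# The parameter with the largest absolute scaled sensitivity is the most promising
# target for improving the metric -- the "bottleneck" in the abstract's terms.
```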
36

Computing Most Probable Sequences of State Transitions in Continuous-time Markov Systems.

Levin, Pavel January 2012 (has links)
Continuous-time Markov chains (CTMCs) form a convenient mathematical framework for analyzing random systems across many different disciplines. A specific research problem that is often of interest is to try to predict maximum-probability sequences of state transitions given initial or boundary conditions. This work shows how to solve this problem exactly through an efficient dynamic programming algorithm. We demonstrate our approach through two different applications: ranking mutational pathways of the HIV virus based on their probabilities, and determining the most probable failure sequences in complex fault-tolerant engineering systems. Even though CTMCs have been used extensively to realistically model many types of complex processes, it is often standard practice to eventually simplify the model in order to perform the state-evolution analysis. As we show here, simplifying approaches can lead to inaccurate and often misleading solutions. We therefore expect our algorithm to find a wide range of applications across different domains.
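The following is a simplified sketch of the idea, not the thesis's exact algorithm: a max-product dynamic program over the embedded jump chain of a CTMC, ignoring holding times. The generator matrix Q and the path length are invented for illustration.

```python
# Hypothetical generator matrix of a small CTMC; state 2 is absorbing.
Q = [[-0.5, 0.3, 0.2],
     [0.1, -0.4, 0.3],
     [0.0, 0.0, 0.0]]

n = len(Q)
# Embedded jump-chain probabilities: P(i -> j) = q_ij / (-q_ii) for i != j.
jump = [[(Q[i][j] / -Q[i][i]) if (i != j and Q[i][i] != 0) else 0.0
         for j in range(n)] for i in range(n)]

def best_path(start, length):
    """Most probable sequence of `length` jumps from `start`, via Viterbi-style DP."""
    prob = {start: 1.0}                 # best probability of reaching each state so far
    back = [{} for _ in range(length)]  # backpointers, one dict per step
    for t in range(length):
        new = {}
        for i, p in prob.items():
            for j in range(n):
                if jump[i][j] > 0 and p * jump[i][j] > new.get(j, 0.0):
                    new[j] = p * jump[i][j]
                    back[t][j] = i
        prob = new
    if not prob:
        return None, 0.0
    end = max(prob, key=prob.get)
    path = [end]
    for t in reversed(range(length)):
        path.append(back[t][path[-1]])
    return path[::-1], prob[end]

print(best_path(0, 2))   # with the rates above: ([0, 1, 2], 0.45)
```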
37

Path Properties of Rare Events

Collingwood, Jesse January 2015 (has links)
Simulation of rare events can be costly with respect to time and computational resources. For certain processes it may be more efficient to begin at the rare event and simulate a kind of reversal of the process. This approach is particularly well suited to reversible Markov processes, but holds much more generally. This more general result is formulated precisely in the language of stationary point processes, proven, and applied to some examples. An interesting question is whether this technique can be applied to Markov processes which are substochastic, i.e. processes which may die if a graveyard state is ever reached. First, some of the theory of substochastic processes is developed; in particular a slightly surprising result about the rate of convergence of the distribution pi(n) at time n of the process conditioned to stay alive to the quasi-stationary distribution, or Yaglom limit, is proved. This result is then verified with some illustrative examples. Next, it is demonstrated with an explicit example that on infinite state spaces the reversal approach to analyzing both the rate of convergence to the Yaglom limit and the likely path of rare events can fail due to transience.
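A small numerical illustration of the Yaglom limit mentioned above (not taken from the thesis; the substochastic matrix is invented): iterating the distribution of a killed chain and renormalizing by the survival probability drives it toward the quasi-stationary distribution.

```python
# Substochastic transition matrix: each row sums to less than 1, the deficit being
# the probability of moving to the graveyard state at that step.
P = [[0.5, 0.3],    # row sums 0.8 and 0.7, i.e. killing probabilities 0.2 and 0.3
     [0.4, 0.3]]

dist = [1.0, 0.0]
for step in range(1, 21):
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
    total = sum(dist)                    # probability of still being alive
    dist = [p / total for p in dist]     # condition on survival
    if step % 5 == 0:
        print(step, [round(p, 5) for p in dist])
# The printed vectors settle to the normalized left Perron eigenvector of P,
# i.e. the quasi-stationary (Yaglom) distribution of this toy chain.
```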
38

Limite do fluído para o grafo aleatório de Erdos-Rényi / Fluid limit for the Erdos-Rényi random graph

Fabio Marcellus Lima Sá Makiyama Lopes 23 April 2010 (has links)
In this work, we apply the Breadth-First Search algorithm to find the size of a connected component of the Erdos-Rényi random graph. A Markov chain is obtained from this procedure. We present some well-known results about the behavior of this Markov chain and combine some of them to obtain a proposition about the probability that the component reaches a certain size, together with a convergence result about the state of the chain at that time. Next, we apply the convergence theorem of Darling (2002) to the sequence of rescaled Markov chains indexed by N, the number of vertices of the graph, to show that the trajectories of these chains converge uniformly in probability to the solution of an ordinary differential equation. From the latter result follows the well-known weak law of large numbers for the giant component of the Erdos-Rényi random graph in the supercritical case. Moreover, we obtain the fluid limit for an epidemic model which is an extension of the one proposed in Kurtz et al. (2008).
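As a hedged illustration of the objects in this abstract (not part of the dissertation; the values of N and c below are arbitrary), one can explore an Erdos-Rényi graph G(N, c/N) with breadth-first search and compare the largest component fraction with the fluid-limit prediction, the positive root of x = 1 - exp(-c x):

```python
import math
import random
from collections import deque

def largest_component_fraction(N, c, seed=0):
    """Build G(N, c/N) and return (size of largest component) / N using BFS."""
    rng = random.Random(seed)
    p = c / N
    adj = [[] for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen = [False] * N
    best = 0
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        queue, size = deque([s]), 0
        while queue:                       # breadth-first exploration of the component
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        best = max(best, size)
    return best / N

c, N = 2.0, 2000                           # supercritical regime (c > 1)
x = 0.5
for _ in range(100):                       # fixed-point iteration for x = 1 - exp(-c*x)
    x = 1 - math.exp(-c * x)
print("simulated:", largest_component_fraction(N, c), "fluid limit:", round(x, 4))
```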
39

Optimal input design for nonlinear dynamical systems : a graph-theory approach

Valenzuela Pacheco, Patricio E. January 2014 (has links)
Optimal input design concerns the design of an input sequence to maximize the information retrieved from an experiment. The design of the input sequence is performed by optimizing a cost function related to the intended model application. Several approaches to input design have been proposed, with results mainly on linear models. Under the assumption of a linear model structure, the input design problem can be solved in the frequency domain, where the corresponding spectrum is optimized subject to power constraints. However, the optimization of the input spectrum using frequency-domain techniques cannot include time-domain amplitude constraints, which may arise for practical or safety reasons. In this thesis, a new input design method for nonlinear models is introduced. The method considers the optimization of an input sequence as a realization of a stationary Markov process with finite memory. Assuming a finite set of possible values for the input, the feasible set of stationary processes can be described using graph theory, where de Bruijn graphs can be employed to describe the process. By using de Bruijn graphs, we can express any element in the set of stationary processes as a convex combination of the measures associated with the extreme points of the set. Therefore, by a suitable choice of the cost function, the resulting optimization problem is convex even for nonlinear models. In addition, since the input is restricted to a finite set of values, the proposed input design method can naturally handle amplitude constraints. The thesis contains a theoretical discussion of the proposed input design method for the identification of nonlinear output-error and nonlinear state-space models. In addition, it includes practical applications of the method to problems arising in wireless communications, where an estimate of the communication channel with quantized data is required, and to application-oriented closed-loop experiment design, where quality constraints on the identified parameters must be satisfied when performing the identification step.
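A tiny sketch of the graph-theoretic construction named in the abstract (the alphabet and memory length are arbitrary choices, and this is not the thesis's implementation): nodes of the de Bruijn graph are input words of length m over a finite alphabet, and edges join words that overlap in m-1 symbols.

```python
from itertools import product

def de_bruijn_graph(alphabet, m):
    """Nodes are length-m words; (u, v) is an edge when u[1:] equals v[:-1]."""
    nodes = list(product(alphabet, repeat=m))
    edges = [(u, v) for u in nodes for v in nodes if u[1:] == v[:-1]]
    return nodes, edges

nodes, edges = de_bruijn_graph((-1.0, 1.0), 2)   # binary input amplitudes, memory 2
print(len(nodes), "nodes,", len(edges), "edges")
for u, v in edges[:4]:
    print(u, "->", v)
# Roughly speaking, a stationary input process with memory m corresponds to a probability
# mass on these edges that is balanced at every node (flow in equals flow out), and the
# abstract's extreme points are particular such measures on this graph.
```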
40

Analysis of Case Histories by Markov Chains Using Juvenile Court Data of State of Utah

Uh, Soo-Hong 01 May 1973 (has links)
The purpose of this paper is to analyze juvenile court data using Markov chains. A computer program, organized around a single array, was written to analyze realizations of a Markov chain up to the kth order within machine limitations. The data used in this paper were gathered by the Juvenile Court of the State of Utah for administrative purposes and were limited to District II. The results from the paper "Statistical Inference About Markov Chains" by Anderson and Goodman were applied for testing hypotheses. The paper is divided into five chapters: introduction, statistical background, methodology, analysis, and summary and conclusions.
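As a hedged illustration of the Anderson-Goodman style of test mentioned above (this is not the thesis's program, and the sequence below is fabricated rather than actual court data), the likelihood-ratio statistic for independence against first-order Markov dependence can be computed directly from transition counts:

```python
import math
from collections import Counter

seq = "AABABBBAABABAABBBABABAAABBA"   # fabricated two-state sequence
states = sorted(set(seq))

pairs = Counter(zip(seq, seq[1:]))
n_i = Counter(s for s, _ in zip(seq, seq[1:]))      # transitions out of each state
n_j = Counter(t for _, t in zip(seq, seq[1:]))      # transitions into each state
total = len(seq) - 1

stat = 0.0
for (i, j), n_ij in pairs.items():
    p_ij = n_ij / n_i[i]          # estimated first-order transition probability
    p_j = n_j[j] / total          # marginal probability under the independence hypothesis
    stat += 2 * n_ij * math.log(p_ij / p_j)

df = (len(states) - 1) ** 2
print("LR statistic:", round(stat, 3), "df:", df)
# Compare the statistic against a chi-squared quantile with df degrees of freedom;
# a large value rejects independence in favour of first-order dependence.
```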
