21

Limite do fluído para o grafo aleatório de Erdos-Rényi / Fluid limit for the Erdos-Rényi random graph

Lopes, Fabio Marcellus Lima Sá Makiyama 23 April 2010 (has links)
Neste trabalho, aplicamos o algoritmo Breadth-First Search para encontrar o tamanho de uma componente conectada no grafo aleatório de Erdos-Rényi. Uma cadeia de Markov é obtida deste procedimento. Apresentamos alguns resultados bem conhecidos sobre o comportamento dessa cadeia de Markov. Combinamos alguns destes resultados para obter uma proposição sobre a probabilidade da componente atingir um determinado tamanho e um resultado de convergência do estado da cadeia neste instante. Posteriormente, aplicamos o teorema de convergência de Darling (2002) a sequência de cadeias de Markov reescaladas e indexadas por N, o número de vértices do grafo, para mostrar que as trajetórias dessas cadeias convergem uniformemente em probabilidade para a solução de uma equação diferencial ordinária. Deste resultado segue a bem conhecida lei fraca dos grandes números para a componente gigante do grafo aleatório de Erdos-Rényi, no caso supercrítico. Além disso, obtemos o limite do fluído para um modelo epidêmico que é uma extensão daquele proposto em Kurtz et al. (2008). / In this work, we apply the Breadth-First Search algorithm to find the size of a connected component of the Erdos-Rényi random graph. A Markov chain is obtained from this procedure. We present some well-known results about the behavior of this Markov chain and combine some of them to obtain a proposition about the probability that the component reaches a certain size, together with a convergence result for the state of the chain at that time. Next, we apply the convergence theorem of Darling (2002) to the sequence of rescaled Markov chains indexed by N, the number of vertices of the graph, to show that the trajectories of these chains converge uniformly in probability to the solution of an ordinary differential equation. From the latter result follows the well-known weak law of large numbers for the giant component of the Erdos-Rényi random graph in the supercritical case. Moreover, we obtain the fluid limit for an epidemic model which is an extension of that proposed in Kurtz et al. (2008).
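As an illustrative aside (not taken from the thesis), the exploration process described above can be sketched as a breadth-first search of one component of G(N, c/N) in which edges are revealed only when a vertex is processed; the number of newly discovered vertices at each step is what drives the Markov chain. The function and parameter names below are assumptions for the example.

```python
import random
from collections import deque

def er_component_size_bfs(n, c, start=0, seed=None):
    """Explore one component of an Erdos-Renyi graph G(n, c/n) by
    breadth-first search, revealing edges on the fly.
    Returns the size of the component containing `start`."""
    rng = random.Random(seed)
    p = c / n
    status = ["unexplored"] * n       # unexplored / active / explored
    status[start] = "active"
    queue = deque([start])
    size = 1
    while queue:                      # each BFS step is one chain transition
        v = queue.popleft()
        for u in range(n):
            if status[u] == "unexplored" and rng.random() < p:
                status[u] = "active"  # newly discovered vertex joins the frontier
                queue.append(u)
                size += 1
        status[v] = "explored"
    return size

# Supercritical case c > 1: the component containing `start` is often of order n.
print(er_component_size_bfs(1000, 2.0, seed=42))
```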
22

Continued Fractions and their Interpretations

Hanusa, Christopher 01 April 2001 (has links)
The Fibonacci numbers are one of the most intriguing sequences in mathematics. I present generalizations of this well-known sequence. Using combinatorial proofs, I derive closed-form expressions for these generalizations. Then, using Markov chains, I derive a second closed-form expression for these numbers, which generalizes Binet's formula for the Fibonacci numbers. I expand further and determine the generalization of Binet's formula for any kth-order linear recurrence.
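For reference (standard background, not specific to this thesis), Binet's formula gives the Fibonacci numbers in closed form, and the same idea extends to any linear recurrence through the roots of its characteristic polynomial:

```latex
% Binet's formula for the Fibonacci numbers F_0 = 0, F_1 = 1:
F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}},
\qquad \varphi = \frac{1+\sqrt{5}}{2}, \quad \psi = \frac{1-\sqrt{5}}{2}.
% More generally, if a_n = c_1 a_{n-1} + \dots + c_k a_{n-k} and the
% characteristic polynomial x^k - c_1 x^{k-1} - \dots - c_k has distinct
% roots r_1, \dots, r_k, then
a_n = \sum_{i=1}^{k} \alpha_i \, r_i^{\,n},
% where the constants \alpha_i are fixed by the initial conditions.
```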
23

Long-Range Dependence of Markov Processes

Carpio, Kristine Joy Espiritu, kjecarpio@lycos.com January 2006 (has links)
Long-range dependence (LRD) in discrete- and continuous-time Markov chains over a countable state space is defined via embedded renewal processes brought about by visits to a fixed state. For the discrete-time chain, solidarity properties are obtained and long-range dependence of functionals is examined. For continuous-time chains, LRD is defined via the number of visits in a given time interval. Long-range dependence of Markov chains over a non-countable state space is also treated through positive Harris chains. Embedded renewal processes in these chains exist via visits to sets of states called proper atoms. Examples of these chains are presented, with particular attention given to long-range dependent Markov chains in single-server queues, namely the waiting times of GI/G/1 queues and the queue lengths at departure epochs in M/G/1 queues. The presence of long-range dependence in these processes depends on the moment index of the lifetime distribution of the service times. The Hurst indexes are obtained under certain conditions on the distribution function of the service times and the structure of the correlations. These waiting-time and queue-size processes are also examined in a range of M/P/2 queues via simulation (here, P denotes a Pareto distribution).
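As a hedged illustration (not the thesis's analysis), the waiting times of a single-server queue can be simulated with Lindley's recursion W_{k+1} = max(0, W_k + S_k − A_{k+1}); heavy-tailed (Pareto) service times are the ingredient that can induce long-range dependence. The parameter values and names below are assumptions chosen only to keep the toy queue stable.

```python
import random

def pareto_service(rng, alpha=1.5, xm=1.0):
    """Pareto(alpha) service time with scale xm; alpha in (1, 2) gives a
    finite mean but infinite variance (heavy tail). Inverse-CDF sampling."""
    u = 1.0 - rng.random()            # u in (0, 1]
    return xm / u ** (1.0 / alpha)

def mg1_waiting_times(n, arrival_rate=0.25, alpha=1.5, seed=None):
    """Simulate n waiting times of an M/G/1 queue (Poisson arrivals,
    Pareto service) via Lindley's recursion."""
    rng = random.Random(seed)
    w, waits = 0.0, []
    for _ in range(n):
        waits.append(w)
        s = pareto_service(rng, alpha)         # service of current customer
        a = rng.expovariate(arrival_rate)      # next interarrival time
        w = max(0.0, w + s - a)
    return waits

waits = mg1_waiting_times(10_000, seed=1)
print(sum(waits) / len(waits))                 # rough mean waiting time
```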
24

Monotonicity and complete monotonicity for continuous-time Markov chains

Dai Pra, Paolo, Louis, Pierre-Yves, Minelli, Ida January 2006 (has links)
We analyze the notions of monotonicity and complete monotonicity for continuous-time Markov chains taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time. / Nous étudions les notions de monotonie et de monotonie complète pour les processus de Markov (ou chaînes de Markov à temps continu) prenant leurs valeurs dans un espace partiellement ordonné. Ces deux notions ne sont pas équivalentes, comme c'est le cas lorsque le temps est discret. Cependant, nous établissons que pour certains ensembles partiellement ordonnés, l'équivalence a lieu en temps continu bien que n'étant pas vraie en temps discret.
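As background (an editor's paraphrase of standard definitions, not the authors' statement): a kernel on a partially ordered set is stochastically monotone if it preserves the order of initial conditions in distribution, and completely monotone if a single order-preserving coupling can be realized from all starting states simultaneously.

```latex
% Stochastic monotonicity of a kernel P on a poset (S, \preceq):
% for every increasing function f : S \to \mathbb{R},
x \preceq y \;\Longrightarrow\; (Pf)(x) \le (Pf)(y).
% Complete (realizable) monotonicity: there exists a single Markovian
% coupling (X_t^x)_{x \in S} with X_0^x = x such that
x \preceq y \;\Longrightarrow\; X_t^x \preceq X_t^y \quad \text{for all } t \ge 0.
```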
25

Perfect Sampling of Vervaat Perpetuities

Williams, Robert Tristan 01 January 2013 (has links)
This paper focuses on the issue of sampling directly from the stationary distribution of Vervaat perpetuities. It improves upon an algorithm for perfect sampling first presented by Fill & Huber by implementing both a faster multigamma coupler and a moving value of Xmax to increase the chance of unification. For beta = 1 we are able to reduce the expected steps for a sample by 22%, and at just beta = 3 we lower the expected time by over 80%. These improvements allow us to sample in reasonable time from perpetuities with much higher values of beta than was previously possible.
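For orientation (and emphatically not the Fill & Huber perfect sampler itself): a Vervaat perpetuity with parameter beta satisfies the distributional fixed point Y =_d U^(1/beta)(1 + Y), with U uniform on (0, 1). The sketch below merely iterates that recursion to approximate the stationary law, which is precisely the kind of approximation perfect sampling avoids; all names are illustrative.

```python
import random

def approx_vervaat_sample(beta, n_iter=10_000, seed=None):
    """Approximate draw from a Vervaat perpetuity by iterating the
    stochastic recursion  Y <- U**(1/beta) * (1 + Y),  U ~ Uniform(0, 1).
    Forward iteration only converges in distribution; it is NOT the
    exact (perfect-sampling) algorithm discussed in the thesis."""
    rng = random.Random(seed)
    y = 0.0
    for _ in range(n_iter):
        u = rng.random()
        y = u ** (1.0 / beta) * (1.0 + y)
    return y

# For beta = 1 the stationary law is the Dickman distribution.
print([round(approx_vervaat_sample(beta=1.0, seed=k), 3) for k in range(5)])
```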
26

Computing Most Probable Sequences of State Transitions in Continuous-time Markov Systems.

Levin, Pavel 22 June 2012 (has links)
Continuous-time Markov chains (CTMCs) form a convenient mathematical framework for analyzing random systems across many different disciplines. A specific research problem that is often of interest is to predict maximum-probability sequences of state transitions given initial or boundary conditions. This work shows how to solve this problem exactly through an efficient dynamic programming algorithm. We demonstrate our approach through two different applications: ranking mutational pathways of the HIV virus based on their probabilities, and determining the most probable failure sequences in complex fault-tolerant engineering systems. Even though CTMCs have been used extensively to realistically model many types of complex processes, it is common practice to simplify the model in order to perform the state-evolution analysis. As we show here, such simplifying approaches can lead to inaccurate and often misleading solutions. We therefore expect our algorithm to find a wide range of applications across different domains.
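As a hedged sketch of the general idea (not the authors' algorithm): one natural formulation scores a sequence of jumps by the product of embedded-jump-chain probabilities q_ij / q_i and maximizes it with a Viterbi-style dynamic program over the number of transitions. The rate-matrix layout, the fixed path length, and the omission of holding times are simplifying assumptions made here for illustration.

```python
def most_probable_jump_sequence(Q, start, goal, n_jumps):
    """Viterbi-style dynamic program over the embedded jump chain of a CTMC.

    Q is a rate matrix (Q[i][j] >= 0 for i != j, rows sum to 0); the embedded
    chain jumps i -> j with probability Q[i][j] / (-Q[i][i]). Returns the most
    probable sequence of exactly n_jumps transitions from start to goal and
    its probability. Holding times are deliberately ignored in this toy."""
    n = len(Q)
    jump = [[(Q[i][j] / -Q[i][i]) if i != j and Q[i][i] != 0 else 0.0
             for j in range(n)] for i in range(n)]

    # best[k][j] = (probability of the best k-jump path start -> j, predecessor)
    best = [{start: (1.0, None)}]
    for _ in range(n_jumps):
        layer = {}
        for i, (p, _) in best[-1].items():
            for j in range(n):
                if jump[i][j] > 0 and p * jump[i][j] > layer.get(j, (0.0, None))[0]:
                    layer[j] = (p * jump[i][j], i)
        best.append(layer)

    if goal not in best[-1]:
        return None, 0.0
    path, state = [goal], goal          # trace the optimal path backwards
    for k in range(n_jumps, 0, -1):
        state = best[k][state][1]
        path.append(state)
    return path[::-1], best[-1][goal][0]

# Toy 3-state rate matrix (hypothetical values, for illustration only).
Q = [[-2.0, 1.5, 0.5],
     [ 0.3, -1.0, 0.7],
     [ 0.2,  0.8, -1.0]]
print(most_probable_jump_sequence(Q, start=0, goal=2, n_jumps=2))
```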
27

Adaptive Bandwidth Allocation for Wired and Wireless WiMAX Networks

Huang, Kai-chen 09 July 2008 (has links)
In this thesis, we consider a network environment consisting of the wired Internet and a wireless broadband network (WiMAX); data from the wired or wireless network are all conveyed over WiMAX links to their destinations. To guarantee the quality of real-time traffic and allow more transmission opportunities for other traffic types, we propose an Adaptive Bandwidth Allocation (ABA) algorithm for the BS (base station) to allocate bandwidth adequately. Our ABA algorithm first reserves the required minimum bandwidth for high-priority traffic, such as video streaming. By allocating this minimum bandwidth to real-time traffic, the delay-time constraint can be satisfied. Other traffic types, such as non-real-time traffic, which have no real-time requirement, may gain extra bandwidth to improve their throughput. For best-effort traffic, the remaining bandwidth can be used to avoid any possible starvation. We build four-dimensional Markov chains to evaluate the performance of the proposed ABA algorithm. In the analytical model, we first divide transmission on WiMAX into upload and download phases, and analyze the ABA performance using a Poisson process to generate traffic. Finally, by comparing with a previous work, we observe the impacts of different traffic parameters on WiMAX network performance in terms of average delay time, average throughput, and average packet-drop ratio.
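As a simplified, hypothetical sketch of the allocation idea described above (not the thesis's ABA algorithm or its Markov-chain model): reserve the minimum bandwidth demanded by real-time flows first, let non-real-time flows share what remains up to their demand, and leave the rest to best-effort traffic. All flow names and the proportional-split rule are assumptions for the example.

```python
def allocate_bandwidth(total, rt_min_demands, nrt_demands, be_flows):
    """Toy priority allocation: real-time flows get their minimum first,
    non-real-time flows share the remainder up to their demand, and any
    leftover is split evenly among best-effort flows (avoiding starvation)."""
    alloc = {}
    remaining = total

    # 1. Reserve the minimum bandwidth of each real-time flow.
    for flow, need in rt_min_demands.items():
        grant = min(need, remaining)
        alloc[flow] = grant
        remaining -= grant

    # 2. Non-real-time flows share the remainder, capped by their demand.
    total_nrt = sum(nrt_demands.values())
    for flow, want in nrt_demands.items():
        share = remaining * want / total_nrt if total_nrt else 0.0
        alloc[flow] = min(want, share)
    remaining -= sum(alloc[f] for f in nrt_demands)

    # 3. Best-effort flows split whatever is left evenly.
    for flow in be_flows:
        alloc[flow] = remaining / len(be_flows) if be_flows else 0.0

    return alloc

print(allocate_bandwidth(
    total=100.0,
    rt_min_demands={"video": 40.0},
    nrt_demands={"ftp": 20.0, "web": 20.0},
    be_flows=["email"]))
```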
28

The normal kernel coupler : an adaptive Markov Chain Monte Carlo method for efficiently sampling from multi-modal distributions /

Warnes, Gregory R. January 2000 (has links)
Thesis (Ph. D.)--University of Washington, 2000. / Vita. Includes bibliographical references (p. 105-112).
29

A Bayesian approach to estimating heterogeneous spatial covariances /

Damian, Doris. January 2002 (has links)
Thesis (Ph. D.)--University of Washington, 2002. / Vita. Includes bibliographical references (p. 1226-131).
30

Modeling a non-homogeneous Markov process via time transformation /

Hubbard, Rebecca Allana. January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 177-191).
